CN115571152B - Safety early warning method, device, system, equipment and medium for non-motor vehicle


Info

Publication number
CN115571152B
CN115571152B (application CN202211249750.3A)
Authority
CN
China
Prior art keywords
motor vehicle
moment
moving object
pedestrian
coordinates
Prior art date
Legal status
Active
Application number
CN202211249750.3A
Other languages
Chinese (zh)
Other versions
CN115571152A (en)
Inventor
黄和平 (Huang Heping)
Current Assignee
Shenzhen Comprehensive Transportation And Municipal Engineering Design And Research Institute Co ltd
Shenzhen Qiyang Special Equipment Technology Engineering Co ltd
Original Assignee
Shenzhen Qiyang Special Equipment Technology Engineering Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Qiyang Special Equipment Technology Engineering Co ltd filed Critical Shenzhen Qiyang Special Equipment Technology Engineering Co ltd
Priority to CN202211249750.3A
Publication of CN115571152A
Application granted
Publication of CN115571152B

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60W — Conjoint control of vehicle sub-units of different type or different function; control systems specially adapted for hybrid vehicles; road vehicle drive control systems for purposes not related to the control of a particular sub-unit
    • B60W50/14 — Means for informing the driver, warning the driver or prompting a driver intervention (under B60W50/00 Details of control systems for road vehicle drive control; B60W50/08 Interaction between the driver and the control system)
    • B60W2050/143 — Alarm means
    • B60W2300/36 — Indexing codes relating to the type of vehicle: cycles; motorcycles; scooters
    • B60W2420/403 — Sensors: image sensing, e.g. optical camera
    • B60W2554/402 — Input parameters relating to dynamic objects: type
    • B60W2554/4041 — Input parameters relating to dynamic objects: position
    • B60W2554/4042 — Input parameters relating to dynamic objects: longitudinal speed
    • Y02T10/40 — Climate change mitigation technologies related to transportation: engine management systems (road transport, internal combustion engine based vehicles)

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a safety early warning method, device, system, equipment and medium for a non-motor vehicle. Traffic data of a target area, comprising the speeds and radar position coordinates of the moving objects and two-dimensional images of the area, are collected at a first moment and a second moment and fused: the traffic participants in the images are recognized and matched with the radar-detected moving objects in a common reference coordinate system to determine the object type of each moving object, and the movement direction of each non-motor vehicle, motor vehicle and pedestrian and its position at a preset moment are then obtained. For each non-motor vehicle, safety early warning information (for example for retrograde riding, collision or red-light running) is generated and sent to the safety helmet paired with that non-motor vehicle, whose built-in early warning device gives a warning prompt, thereby reducing the probability of traffic accidents for elderly riders.

Description

Safety early warning method, device, system, equipment and medium for non-motor vehicle
Technical Field
The invention belongs to the technical field of traffic safety, and in particular relates to a safety early warning method, device, system, equipment and medium for a non-motor vehicle.
Background
At present, electric bicycles, tricycles and bicycles have become common riding tools for elderly people travelling. However, because the physical functions of the elderly have declined, their reactions are relatively slow and their awareness of traffic safety and of obeying the law is relatively weak, traffic violations such as riding against the direction of traffic and running red lights are quite common. Meanwhile, with the improvement of living standards in China, the whole of society advocates aging-friendly design, which refers to designs in public buildings such as residences, shopping malls, hospitals and schools that fully consider the physical functions and behavioural characteristics of the elderly (including barrier-free design, emergency-handling systems and the like), so as to meet the living and travel needs of people who have entered, or are about to enter, old age.
In the prior art there is no suitable aging-friendly design for the travel tools used by the elderly, and no safety early warning can be given while they ride. Given the nationwide requirement that riders of electric vehicles and tricycles must wear safety helmets, how to realize traffic early warning for elderly riders based on the safety helmet has therefore become an urgent problem to be solved.
Disclosure of Invention
The invention aims to provide a safety early warning method, device, system, equipment and medium for a non-motor vehicle, so as to solve the problem that the prior art cannot provide safety early warning for elderly people while they ride.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
in a first aspect, a method for safety warning of a non-motor vehicle is provided, applied to a non-motor vehicle equipped with a safety helmet, wherein the safety helmet has a warning device built therein, and the method comprises:
acquiring traffic data of a target area in any lane at a road intersection at a first moment and a second moment, wherein the traffic data comprises speeds and position coordinates of all moving objects in the target area and two-dimensional images of the target area, the two-dimensional images are shot by a camera at the road intersection, the position coordinates of any moving object are radar coordinates, and the any moving object is a motor vehicle, a non-motor vehicle or a pedestrian;
performing image recognition on the two-dimensional image corresponding to the first moment and the two-dimensional image corresponding to the second moment to respectively obtain a first traffic target and a second traffic target, wherein the first traffic target is each traffic participant in the two-dimensional image corresponding to the first moment, the second traffic target is each traffic participant in the two-dimensional image corresponding to the second moment, and any traffic participant is a motor vehicle, a non-motor vehicle or a pedestrian;
Based on the first traffic target, the position coordinates of each moving object at the first moment and the two-dimensional image corresponding to the first moment, carrying out coordinate calibration and type matching on each moving object in traffic data corresponding to the first moment to respectively obtain first coordinates of each moving object in a reference coordinate system and object types of each moving object in the traffic data corresponding to the first moment;
based on the second traffic target, the position coordinates of each moving object at the second moment and the two-dimensional image corresponding to the second moment, carrying out coordinate calibration and type matching on each moving object in traffic data corresponding to the second moment, and respectively obtaining second coordinates of each moving object in a reference coordinate system and object types of each moving object in the traffic data corresponding to the second moment;
according to the object type, the first coordinates and the speed of each moving object in the traffic data corresponding to the first moment and the object type, the second coordinates and the speed of each moving object in the traffic data corresponding to the second moment, obtaining the movement directions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area and the positions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area at a preset moment;
for any non-motor vehicle in the target area, generating safety early warning information of the any non-motor vehicle based on the position of the any non-motor vehicle, the movement direction of the any non-motor vehicle, the position of each motor vehicle, the movement direction of each motor vehicle and the position of each pedestrian;
and sending the safety early warning information to the safety helmet corresponding to the any non-motor vehicle, so that an early warning device in the safety helmet gives an early warning prompt based on the safety early warning information.
Based on the above disclosure, the method first acquires the speed and position coordinates of each moving object in the target area of any lane at two different moments, together with two-dimensional images of the target area at those moments. These data are then fused and analysed to obtain dangerous-behaviour information about the non-motor vehicles in the target area. Specifically, the traffic participants (motor vehicles, pedestrians or non-motor vehicles) in the two-dimensional image at each moment are first recognized and taken as traffic targets; target matching is then performed by converting the position coordinates of the moving objects into a reference coordinate system, so that a traffic target and a moving object that occupy the same position in the reference coordinate system at the same moment are treated as the same object. Since the type of each traffic target is known from recognition, this step also yields the object type of each moving object. Next, from the object types, speeds and reference-coordinate-system coordinates of the moving objects at the two moments, the movement directions of each non-motor vehicle, motor vehicle and pedestrian in the target area are obtained, and their positions at a preset moment are predicted. Finally, for any non-motor vehicle, safety early warning is carried out according to its movement direction and predicted position, the positions and movement directions of the motor vehicles, and the positions of the pedestrians, i.e. it is judged whether a collision, retrograde riding, red-light running or similar behaviour will occur, and the resulting safety early warning information is sent to the safety helmet paired with the non-motor vehicle so that the helmet gives a safety warning prompt.
Through this design, the invention detects the traffic data of the traffic participants and performs fusion analysis on it to identify their object types and to predict their movement directions and positions. Safety early warning of the non-motor vehicle can therefore be realized based on these movement directions and predicted positions: safety early warning information is generated and sent to the safety helmet paired with the non-motor vehicle, and after receiving it the helmet gives the elderly rider a traffic-safety warning prompt, thereby reducing the probability of safety accidents.
In one possible design, based on the first traffic target, the position coordinates of each moving object at the first time and the two-dimensional image corresponding to the first time, performing coordinate calibration and type matching on each moving object in the traffic data corresponding to the first time, to obtain the first coordinates of each moving object in the reference coordinate system in the traffic data corresponding to the first time, and the object types of each moving object, respectively, including:
acquiring a global image of the target area at a first moment, wherein the global image is a top view image of the target area;
Calculating a coordinate conversion matrix between each pixel point coordinate in a target image and each pixel point coordinate in the global image based on the global image, wherein a coordinate system corresponding to the global image is the reference coordinate system, and the target image is a two-dimensional image corresponding to a first moment;
acquiring image position coordinates corresponding to each first traffic target in a target image based on the first traffic targets, and converting the image position coordinates of each first traffic target into reference coordinate system coordinates based on the coordinate conversion matrix;
acquiring a coordinate conversion relation between a radar coordinate system and a global image corresponding coordinate system, and converting the position coordinates of each moving object in the corresponding traffic data at the first moment into reference coordinate system coordinates based on the coordinate conversion relation;
for any moving object, the reference coordinate system coordinate of the any moving object is used as the first coordinate of the any moving object, and the first traffic target corresponding to the any moving object is obtained based on the reference coordinate system coordinate of the any moving object in a matching mode, so that the object type of the any moving object is obtained based on the first traffic target corresponding to the any moving object.
In one possible design, according to the object type, the first coordinates and the speed of each moving object in the traffic data corresponding to the first moment, and the object type, the second coordinates and the speed of each moving object in the traffic data corresponding to the second moment, the moving directions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area and the positions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area at the preset moment are obtained, including:
according to the object type and the first coordinates of each moving object in the traffic data corresponding to the first moment and the object type and the second coordinates of each moving object in the traffic data corresponding to the second moment, the moving directions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area are obtained;
based on the object type and the speed of each moving object in the traffic data corresponding to the first moment and the object type and the speed of each moving object in the traffic data corresponding to the second moment, the acceleration of each non-motor vehicle, each motor vehicle and each pedestrian in the target area is obtained;
according to the acceleration of each non-motor vehicle, each motor vehicle and each pedestrian and the speed of each non-motor vehicle, each motor vehicle and each pedestrian at a second moment, obtaining the displacement of each non-motor vehicle, each motor vehicle and each pedestrian from the second moment to the preset moment;
And obtaining third coordinates of the non-motor vehicles, the motor vehicles and the pedestrians under the reference coordinate system at a third moment based on the displacement, the second coordinates and the movement direction of the non-motor vehicles, the motor vehicles and the pedestrians, so as to obtain the positions of the non-motor vehicles, the motor vehicles and the pedestrians based on the third coordinates of the non-motor vehicles, the motor vehicles and the pedestrians.
In one possible design, generating the safety early warning information of the any non-motor vehicle based on the position of the any non-motor vehicle, the movement direction of the any non-motor vehicle, the position of each motor vehicle, the movement direction of each motor vehicle and the position of each pedestrian comprises:
taking the motion direction of each motor vehicle as a calibration direction, and judging whether the motion direction of any non-motor vehicle is the same as the calibration direction;
if not, judging that any non-motor vehicle is in a retrograde state, and generating retrograde early warning information; and
judging whether the third coordinate of any non-motor vehicle is the same as the third coordinate of any motor vehicle and/or the third coordinate of any pedestrian;
if they are the same, generating collision early warning information;
and forming the safety early warning information by using the retrograde early warning information and/or the collision early warning information.
In one possible design, generating the safety warning information of the any non-motor vehicle based on the position of the any non-motor vehicle, the movement direction of the any non-motor vehicle, the position of each motor vehicle, the movement direction of each motor vehicle, and the position of each pedestrian includes:
acquiring a calibration image and phase information of signal lamps corresponding to any lane, wherein the calibration image covers the road intersection and any lane, and a coordinate system corresponding to the calibration image is also the reference coordinate system;
judging whether the third coordinate of any non-motor vehicle is in a calibration area, wherein the calibration area is an area which only represents a road intersection in the calibration image;
if yes, judging whether the phase information is red light information or not;
if yes, red light running early warning information is generated, and the red light running early warning information is used as the safety early warning information.
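For illustration, the three checks described in the possible designs above (retrograde riding, collision and red-light running) may be, but are not limited to being, combined as in the following Python sketch; the angular tolerance and the exact comparison of third coordinates are illustrative simplifications of the "same direction" and "same third coordinate" conditions in the text:

```python
import math

def generate_safety_warnings(nmv, motor_vehicles, pedestrians, red_light, in_intersection):
    """Return the list of warning strings for one non-motor vehicle.

    nmv            : dict with 'heading' (radians) and 'third_xy' for the non-motor vehicle
    motor_vehicles : list of dicts, each with 'heading' and 'third_xy'
    pedestrians    : list of dicts, each with 'third_xy'
    red_light      : True if the signal phase of the lane is red
    in_intersection: True if the non-motor vehicle's third coordinate lies in the calibration area
    """
    warnings = []
    # retrograde check: compare with the calibration direction taken from the motor vehicles
    for mv in motor_vehicles:
        diff = math.atan2(math.sin(nmv["heading"] - mv["heading"]),
                          math.cos(nmv["heading"] - mv["heading"]))
        if abs(diff) > math.pi / 2:          # illustrative threshold for "not the same direction"
            warnings.append("retrograde early warning")
            break
    # collision check: predicted third coordinates coincide
    for other in motor_vehicles + pedestrians:
        if nmv["third_xy"] == other["third_xy"]:
            warnings.append("collision early warning")
            break
    # red-light-running check
    if in_intersection and red_light:
        warnings.append("red-light-running early warning")
    return warnings
```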
In a second aspect, a safety early warning device for a non-motor vehicle is provided, comprising:
an acquisition unit, which is used for acquiring traffic data of a target area in any lane at a road intersection at a first moment and a second moment, wherein the traffic data comprises the speeds and position coordinates of all moving objects in the target area and a two-dimensional image of the target area, the two-dimensional image is shot by a camera at the road intersection, the position coordinates of any moving object are radar coordinates, and the any moving object is a motor vehicle, a non-motor vehicle or a pedestrian;
an image identification unit, which is used for performing image recognition on the two-dimensional image corresponding to the first moment and the two-dimensional image corresponding to the second moment to obtain a first traffic target and a second traffic target respectively, wherein the first traffic target is each traffic participant in the two-dimensional image corresponding to the first moment, the second traffic target is each traffic participant in the two-dimensional image corresponding to the second moment, and any traffic participant is a motor vehicle, a non-motor vehicle or a pedestrian;
the matching unit is used for carrying out coordinate calibration and type matching on each moving object in the traffic data corresponding to the first moment based on the first traffic target, the position coordinates of each moving object at the first moment and the two-dimensional image corresponding to the first moment, and respectively obtaining the first coordinates of each moving object in the reference coordinate system and the object types of each moving object in the traffic data corresponding to the first moment;
The matching unit is further used for carrying out coordinate calibration and type matching on each moving object in the traffic data corresponding to the second moment based on the second traffic target, the position coordinates of each moving object at the second moment and the two-dimensional image corresponding to the second moment, so as to respectively obtain second coordinates of each moving object in the reference coordinate system and object types of each moving object in the traffic data corresponding to the second moment;
the position prediction unit is used for obtaining the movement directions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area, and the positions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area at the preset moment, according to the object type, the first coordinates and the speed of each moving object in the traffic data corresponding to the first moment and the object type, the second coordinates and the speed of each moving object in the traffic data corresponding to the second moment;
the early warning unit is used for generating safety early warning information of any non-motor vehicle in the target area based on the position of the any non-motor vehicle, the movement direction of the any non-motor vehicle, the position of each motor vehicle, the movement direction of each motor vehicle and the position of each pedestrian;
and the sending unit is used for sending the safety early warning information to the safety helmet corresponding to the any non-motor vehicle, so that the early warning device in the safety helmet gives an early warning prompt based on the safety early warning information.
In a third aspect, a safety early warning system of a non-motor vehicle is provided, which comprises a front end sensing terminal, an edge computing terminal and a safety helmet, wherein the safety helmet is configured on the non-motor vehicle, and an early warning device is arranged in the safety helmet;
the front-end sensing terminal is used for collecting traffic data of a target area in any lane at a road intersection at a first moment and a second moment, and sending the traffic data of the target area at the first moment and the second moment to the edge computing terminal, wherein the traffic data comprises the speeds and position coordinates of all moving objects in the target area and a two-dimensional image of the target area, the two-dimensional image is shot by a camera at the road intersection, the position coordinates of any moving object are radar coordinates, and the any moving object is a motor vehicle, a non-motor vehicle or a pedestrian;
the edge computing terminal is used for receiving the traffic data at the first moment and the second moment and generating safety early warning information for any non-motor vehicle in the target area by using the safety early warning method of the non-motor vehicle according to the first aspect or any possible design of the first aspect;
The edge computing terminal is also used for sending the safety early warning information to a safety helmet corresponding to any non-motor vehicle;
the safety helmet is used for sending out early warning prompts based on the safety early warning information.
In one possible design, the edge computing terminal is communicatively connected to the safety helmet through an RSU (roadside unit), and a V2X communication protocol is used between the RSU and the safety helmet.
In a fourth aspect, another safety early warning device for a non-motor vehicle is provided, taking the device as an electronic device, comprising a memory, a processor and a transceiver that are communicatively connected in sequence, wherein the memory is used for storing a computer program, the transceiver is used for sending and receiving messages, and the processor is used for reading the computer program and executing the safety early warning method of the non-motor vehicle according to the first aspect or any possible design of the first aspect.
In a fifth aspect, a storage medium is provided having instructions stored thereon which, when run on a computer, execute the safety early warning method of the non-motor vehicle according to the first aspect or any possible design of the first aspect.
In a sixth aspect, a computer program product comprising instructions is provided which, when run on a computer, causes the computer to execute the safety early warning method of the non-motor vehicle according to the first aspect or any possible design of the first aspect.
The beneficial effects are that:
(1) The invention can detect the traffic data of traffic participants and perform fusion analysis on it to identify their object types and to predict their movement directions and positions, so that safety early warning of the non-motor vehicle can be realized based on these movement directions and predicted positions; safety early warning information is generated and sent to the safety helmet paired with the non-motor vehicle, and after receiving it the helmet gives the elderly rider a traffic-safety warning prompt, thereby reducing the probability of safety accidents.
Drawings
Fig. 1 is a schematic flow chart of steps of a safety pre-warning method for a non-motor vehicle according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a safety warning device for a non-motor vehicle according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a safety warning system for a non-motor vehicle according to an embodiment of the present invention;
FIG. 4 is a schematic structural view of a safety helmet according to an embodiment of the present invention;
FIG. 5 is a schematic illustration of communication between a safety helmet and an RSU roadside unit according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Reference numerals: 10-a safety helmet; a 20-V2X communication module; 30-horn; 40-an antenna; a 50-rechargeable battery; 60-charging interface.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the present invention will be briefly described below with reference to the accompanying drawings and the description of the embodiments or the prior art. It is obvious that the drawings described below show only some embodiments of the present invention, and that other drawings can be obtained from them by a person skilled in the art without inventive effort. It should be noted that the description of these embodiments is intended to aid understanding of the present invention, but is not intended to limit it.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments of the present invention.
It should be understood that the term "and/or" as it may appear herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, "A and/or B" may represent: A alone, B alone, or both A and B. The term "/and" that may appear herein describes another association relationship, meaning that two relationships may exist; for example, "A/and B" may represent: A alone, or A together with B. In addition, the character "/" that may appear herein generally indicates that the associated objects before and after it are in an "or" relationship.
Examples:
Referring to fig. 1, the safety early warning method for a non-motor vehicle provided in the first aspect of this embodiment is applied to traffic early warning for a non-motor vehicle paired with a safety helmet, where an early warning device is arranged in the safety helmet and serves to give traffic warning prompts to elderly drivers. In practical application, the method uses the traffic data of the traffic participants on the road to identify their movement directions and predict their movement positions, so that safety early warning of the non-motor vehicle can be realized based on those movement directions and predicted positions; the generated safety early warning information is sent to the safety helmet paired with the non-motor vehicle, thereby realizing safety early warning for elderly riders and reminding them to pay attention to traffic safety. Optionally, the method provided in this embodiment may be, but is not limited to being, run on the edge computing terminal side; it should be understood that the foregoing execution body does not limit the embodiments of the present application. Accordingly, the steps of the method may be, but are not limited to, those shown in the following steps S1 to S7.
S1, acquiring traffic data of a target area in any lane at a road intersection at a first moment and a second moment, wherein the traffic data comprise the speeds and position coordinates of all moving objects in the target area and a two-dimensional image of the target area, the two-dimensional image is shot by a camera at the road intersection, the position coordinates of any moving object are radar coordinates, and the any moving object is a motor vehicle, a non-motor vehicle or a pedestrian. In a specific application, the second moment is after the first moment: for example, 10:00 on 21 July 2022 is taken as the first moment at which traffic data of the target area are collected, and 2 seconds later, 10:00:02 on 21 July 2022 is taken as the second moment at which the traffic data of the target area are collected again; of course, the interval between the two moments can be set according to actual use. Meanwhile, front-end sensing equipment is used to collect the traffic data on the road, and the corresponding target area is the common detection area of the front-end sensing equipment. The front-end sensing equipment may include, for example, a geomagnetic coil, a camera and a millimetre-wave radar (a laser radar, or both a laser radar and a millimetre-wave radar, may also be used). The geomagnetic coil senses whether an object passes through the target area and serves as a wake-up device for the camera and the millimetre-wave radar: when an object is detected passing through, the camera and the millimetre-wave radar are woken up, the camera shoots images of the target area at the first moment and the second moment as the two-dimensional images, and the millimetre-wave radar detects the position, speed and other information of each moving object in the target area, which makes the detection more accurate. In addition, in the traffic data corresponding to the first and second moments, the position coordinates of each moving object are radar coordinates, obtained directly through the GPS (Global Positioning System) and IMU (Inertial Measurement Unit) sensors on the radar.
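For illustration, the traffic data described above may be, but is not limited to being, organized as in the following minimal Python sketch; the class and field names are illustrative assumptions and are not prescribed by the patent:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MovingObject:
    radar_xy: Tuple[float, float]  # (x0, y0) position in the radar coordinate system
    speed: float                   # speed reported by the millimetre-wave radar, in m/s

@dataclass
class TrafficSample:
    timestamp: float               # acquisition moment, in seconds
    image: object                  # two-dimensional image of the target area shot by the camera
    objects: List[MovingObject] = field(default_factory=list)

# traffic data for the first and second moments, e.g. 2 s apart:
# sample_t1 = TrafficSample(timestamp=0.0, image=frame_t1, objects=[...])
# sample_t2 = TrafficSample(timestamp=2.0, image=frame_t2, objects=[...])
```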
After the traffic data of the target area at the first moment and the second moment are obtained, the traffic data corresponding to the two moments can be fused and analysed, so that the movement direction of each moving object in the target area can be identified and its position at a future moment can be predicted. In an implementation, the millimetre-wave radar can only detect the position and speed of each identified moving object when acquiring traffic data; it cannot identify the type of each moving object, i.e. whether it is a motor vehicle, a non-motor vehicle or a pedestrian. Therefore, this embodiment first identifies the traffic participants in the image based on the two-dimensional image in the traffic data, and then combines the position of each moving object in the traffic data to match each traffic participant identified in the two-dimensional image with a moving object, thereby obtaining the object type of each moving object; the image recognition process is shown in step S2 below.
S2, performing image recognition on the two-dimensional image corresponding to the first moment and the two-dimensional image corresponding to the second moment to obtain a first traffic target and a second traffic target respectively, wherein the first traffic target is each traffic participant in the two-dimensional image corresponding to the first moment, the second traffic target is each traffic participant in the two-dimensional image corresponding to the second moment, and any traffic participant is a motor vehicle, a non-motor vehicle or a pedestrian. In a specific application, a neural network can be used to perform image recognition on the two-dimensional images corresponding to the first moment and the second moment, so as to obtain the types of all traffic participants in the two-dimensional images, and the recognized traffic participants are taken as traffic targets. Optionally, example neural networks may include, but are not limited to: a CNN (convolutional neural network), a DNN (deep neural network) or an RNN (recurrent neural network).
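As a hedged sketch of this recognition step (the patent only names the CNN/DNN/RNN families and does not prescribe a concrete detector), the following assumes a hypothetical `detect` callable that returns a class name and a pixel position for each detection in one image:

```python
from typing import Callable, Iterable, List, Tuple

# traffic participant classes of interest
TRAFFIC_CLASSES = {"motor_vehicle", "non_motor_vehicle", "pedestrian"}

def recognize_traffic_targets(
    image,
    detect: Callable[[object], Iterable[Tuple[str, Tuple[float, float]]]],
) -> List[Tuple[str, Tuple[float, float]]]:
    """Run a detector over one two-dimensional image and keep only traffic participants.

    `detect` is assumed to yield (class_name, (u, v)) pairs, where (u, v) is the
    image position of the detection; any neural-network detector can be plugged in.
    """
    return [(name, uv) for name, uv in detect(image) if name in TRAFFIC_CLASSES]

# first_targets = recognize_traffic_targets(sample_t1.image, detect)   # first traffic targets
# second_targets = recognize_traffic_targets(sample_t2.image, detect)  # second traffic targets
```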
After the traffic targets in the two-dimensional images corresponding to the first moment and the second moment are obtained, each identified traffic participant can be matched with a moving object in the traffic data, i.e. the object type of each moving object in the traffic data is obtained. Once the type of each moving object is known, it is known which moving objects are motor vehicles, which are non-motor vehicles and which are pedestrians, so that position prediction of the non-motor vehicles, motor vehicles and pedestrians can be carried out; the matching process for the moving objects is shown in step S3 and step S4 below.
S3, calibrating coordinates and matching types of all the moving objects in the traffic data corresponding to the first moment based on the first traffic target, the position coordinates of all the moving objects corresponding to the first moment and the two-dimensional image corresponding to the first moment, and respectively obtaining first coordinates of all the moving objects in a reference coordinate system and object types of all the moving objects in the traffic data corresponding to the first moment.
S4, calibrating coordinates and matching types of all the moving objects in the traffic data corresponding to the second moment based on the second traffic target, the position coordinates of all the moving objects at the second moment and the two-dimensional image corresponding to the second moment, and respectively obtaining second coordinates of all the moving objects in the reference coordinate system and object types of all the moving objects in the traffic data corresponding to the second moment.
In a specific application, the first traffic targets recognized in the two-dimensional image corresponding to the first moment are used to match the types of the moving objects in the traffic data corresponding to the first moment, and the second traffic targets recognized in the two-dimensional image corresponding to the second moment are used to match the types of the moving objects in the traffic data corresponding to the second moment. In this embodiment, for the matching of moving objects at the same moment, the image position coordinates of each traffic target in the image and the position coordinates (that is, the radar coordinates) of each moving object are both converted into a reference coordinate system, giving the reference-coordinate-system coordinates corresponding to each traffic target and to each moving object. During matching, for any moving object it is then only necessary to look up the image position coordinates whose reference-coordinate-system coordinates correspond to those of that moving object, and the type of the traffic target corresponding to the found image position coordinates can be taken as the object type of that moving object. Since the two matching processes are identical, the specific matching process is described in detail below taking the moving objects in the traffic data corresponding to the first moment as an example, as shown in steps S31 to S35 below.
S31, acquiring a global image of the target area at the first moment, wherein the global image is a top-view image of the target area. In a specific application, the coordinate system corresponding to the global image is the reference coordinate system, and the global image can be obtained by aerial photography with an unmanned aerial vehicle. Meanwhile, the global image is an image from which traffic targets have been removed (i.e. vehicles and pedestrians are removed), so as to prevent pedestrians and vehicles in the image from affecting the conversion between the coordinate system of the two-dimensional image corresponding to the first moment and the reference coordinate system.
S32, calculating a coordinate conversion matrix between the pixel-point coordinates in a target image and the pixel-point coordinates in the global image based on the global image, wherein the coordinate system corresponding to the global image is the reference coordinate system and the target image is the two-dimensional image corresponding to the first moment. In a specific application, the coordinate conversion matrix is obtained according to the following principle: several first calibration points are selected in the global image, a second calibration point with the same characterized position as each first calibration point is determined in the target image, and finally the coordinate conversion matrix is calculated from the coordinates of the first and second calibration points. Optionally, the process of calculating the coordinate conversion matrix is shown in the following steps S32a to S32d.
S32a, selecting at least four first calibration points in the global image, wherein each first calibration point is a pixel point used to represent a calibration object in the target area in the global image. In particular applications, the calibration objects may be, but are not limited to, road marking lines and/or electric warning poles at the road intersection; in this embodiment, marking lines are preferred.
S32b, determining, based on the at least four first calibration points, a second calibration point in the target image with the same characterized position as each first calibration point. When marking lines are used as calibration objects, the end corner of a marking line, a corner point, or the intersection point between two marking lines can be used as a calibration point, which makes it easy to mark the calibration points in the global image. Similarly, when mapping the calibration points, the second calibration points can be marked in the target image based on the positions characterized by the first calibration points: for example, if the first calibration point A in the global image is the end point at the left end of the tail of a right-turn arrow, the same lane is found in the target image and the end point at the left end of the tail of the same right-turn arrow is used as the second calibration point corresponding to the first calibration point A.
After the first calibration points have been selected in the global image and the corresponding second calibration points have been mapped in the target image, the coordinates of each first calibration point and each second calibration point can be obtained, so that the coordinate conversion matrix can be calculated based on the coordinates of the first and second calibration points, as shown in step S32c below.
S32c, acquiring the coordinates of each first calibration point from the global image and the coordinates of each second calibration point from the target image, and calculating the coordinate conversion matrix based on the coordinates of each first calibration point and each second calibration point. In a specific application, the first step is to construct a homography matrix, which is a matrix of three rows and three columns; the first calibration points and the second calibration points then satisfy the following relation:

$$
\begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} \sim P \begin{bmatrix} u_m \\ v_m \\ 1 \end{bmatrix},
\qquad
P = \begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & h_9 \end{bmatrix}
$$

In the above formula, P is the homography matrix; h_1, h_2, ..., h_9 are its matrix elements and are the parameters to be solved; x_i and y_i respectively denote the abscissa and ordinate of the i-th first calibration point, and u_m and v_m respectively denote the abscissa and ordinate of the m-th second calibration point, where i and m correspond one to one, i.e. when i is 1, m is also 1, meaning that the 1st first calibration point is mapped into the target image to obtain the 1st second calibration point; the remaining values have the same meaning and are not repeated here. Meanwhile, i = 1, 2, ..., N and m = 1, 2, ..., M, where N is the total number of first calibration points, M is the total number of second calibration points, N ≥ 4, M ≥ 4 and N = M.
Based on the homography matrix, the coordinates of each first calibration point and the coordinates of each second calibration point, a coordinate conversion equation between the coordinate system corresponding to the target image and the reference coordinate system is constructed, and the coordinate conversion equation is solved by the singular value decomposition method, so that the solution of the coordinate conversion equation is used as the matrix elements of the homography matrix.
In specific application, the coordinate conversion equation is as follows:
$$
x_i = \frac{h_1 u_m + h_2 v_m + h_3}{h_7 u_m + h_8 v_m + h_9}, \qquad
y_i = \frac{h_4 u_m + h_5 v_m + h_6}{h_7 u_m + h_8 v_m + h_9}
$$
Meanwhile, the coordinates of the different first and second calibration points are substituted into the coordinate conversion equation to obtain as many equations as there are calibration-point pairs; the terms of these equations are then collected to form a matrix equation, which is solved by the singular value decomposition method to obtain the matrix elements of the homography matrix.
The following lists the matrix equations taking 4 first calibration points and 4 second calibration points as examples, as follows:
$$
\begin{bmatrix}
u_1 & v_1 & 1 & 0 & 0 & 0 & -x_1 u_1 & -x_1 v_1 & -x_1 \\
0 & 0 & 0 & u_1 & v_1 & 1 & -y_1 u_1 & -y_1 v_1 & -y_1 \\
u_2 & v_2 & 1 & 0 & 0 & 0 & -x_2 u_2 & -x_2 v_2 & -x_2 \\
0 & 0 & 0 & u_2 & v_2 & 1 & -y_2 u_2 & -y_2 v_2 & -y_2 \\
u_3 & v_3 & 1 & 0 & 0 & 0 & -x_3 u_3 & -x_3 v_3 & -x_3 \\
0 & 0 & 0 & u_3 & v_3 & 1 & -y_3 u_3 & -y_3 v_3 & -y_3 \\
u_4 & v_4 & 1 & 0 & 0 & 0 & -x_4 u_4 & -x_4 v_4 & -x_4 \\
0 & 0 & 0 & u_4 & v_4 & 1 & -y_4 u_4 & -y_4 v_4 & -y_4
\end{bmatrix}
\begin{bmatrix} h_1 \\ h_2 \\ h_3 \\ h_4 \\ h_5 \\ h_6 \\ h_7 \\ h_8 \\ h_9 \end{bmatrix}
= \mathbf{0}
$$
In the matrix equation, the coordinates of each first calibration point and each second calibration point are known, so the matrix equation is solved by the singular value decomposition method to obtain h_1, h_2, ..., h_9; finally, these matrix elements are substituted into the homography matrix to obtain the coordinate conversion matrix.
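For illustration, this fitting step may be, but is not limited to being, implemented as the standard direct linear transform solved by singular value decomposition, as in the following sketch; the function name and array layout are illustrative assumptions:

```python
import numpy as np

def estimate_homography(first_pts, second_pts):
    """Fit the 3x3 homography P that maps second calibration points (u, v) in the
    target image to first calibration points (x, y) in the global image.

    first_pts, second_pts: sequences of (x, y) and (u, v) pairs, N >= 4 matched points.
    """
    rows = []
    for (x, y), (u, v) in zip(first_pts, second_pts):
        # each point pair contributes two rows of the matrix equation A h = 0
        rows.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        rows.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    A = np.asarray(rows, dtype=float)
    # singular value decomposition: h is the right singular vector associated
    # with the smallest singular value of A
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 3)
```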
After the coordinate transformation matrix is obtained, the coordinates of each first traffic target in the target image in the image may be transformed into coordinates corresponding to the reference coordinate system based on the coordinate transformation matrix, as shown in step S33 below.
S33, acquiring the image position coordinates corresponding to each first traffic target in the target image based on the first traffic targets, and converting the image position coordinates of each first traffic target into reference-coordinate-system coordinates based on the coordinate conversion matrix. In a specific application, assume that there are four first traffic targets, one of which is a motor vehicle A1, and that their position coordinates in the target image are (X4, Y4), (X5, Y5), (X6, Y6) and (X7, Y7); the coordinates (X4, Y4), (X5, Y5), (X6, Y6) and (X7, Y7) are then converted by the coordinate conversion matrix obtained in the preceding steps into coordinates in the reference coordinate system, so that each first traffic target is mapped into the global image.
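Continuing the illustrative sketch above, converting one image position into the reference coordinate system amounts to a homogeneous multiplication by the coordinate conversion matrix followed by normalization:

```python
import numpy as np

def image_to_reference(P, pixel_xy):
    """Map an image position (u, v) in the target image into the reference coordinate system."""
    u, v = pixel_xy
    x, y, w = P @ np.array([u, v, 1.0])
    return x / w, y / w

# e.g. reference coordinates of the first traffic target located at (X4, Y4):
# ref_xy = image_to_reference(P, (X4, Y4))
```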
Similarly, for each moving object in the traffic data corresponding to the first moment, the radar coordinates (i.e., the position coordinates) of each moving object need to be mapped into the reference coordinate system, so that the matching of the moving object type is performed based on the mapped coordinates later, as shown in step S34.
S34, acquiring the coordinate conversion relation between the radar coordinate system and the coordinate system corresponding to the global image, and converting the position coordinates of each moving object in the traffic data corresponding to the first moment into reference-coordinate-system coordinates based on that conversion relation. In a specific application, the coordinate conversion relation may be, but is not limited to being, preset in the edge computing terminal. For example, the radar position point is taken as the origin, due east of the origin as the positive x-axis direction and due north as the positive y-axis direction, so as to establish the radar coordinate system; an example conversion relation may then be, but is not limited to: Xr = Xc + k·x0·cos t − k·y0·sin t and Yr = Yc + k·x0·sin t + k·y0·cos t, where x0 and y0 denote the abscissa and ordinate of any moving object in the radar coordinate system, t is the radar azimuth, Xc and Yc are the calibrated abscissa and ordinate, Xr and Yr are the abscissa and ordinate of that moving object in the global image, and k is a constant conversion coefficient.
In an implementation, a calibration lane can be determined in the global image based on the position of the millimetre-wave radar (the calibration lane is marked in the global image in advance by the user). Multiple values of Xr and Yr are then obtained by continuously adjusting t, Xc and Yc; when the pixel points corresponding to the adjusted Xr and Yr fall inside the calibration lane of the global image, the adjustment is finished, and those Xr and Yr are taken as the coordinates of the moving object in the reference coordinate system. With this principle, once the radar coordinates of all moving objects have been converted into reference-coordinate-system coordinates, the type matching of each moving object can be carried out, as shown in step S35 below.
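A minimal sketch of this conversion, assuming that the azimuth t, the calibrated offsets Xc and Yc and the conversion coefficient k have already been adjusted as described above (the names follow the formula in step S34 and are otherwise illustrative):

```python
import math

def radar_to_reference(x0, y0, t, xc, yc, k):
    """Convert radar coordinates (x0, y0) of a moving object into global-image coordinates.

    t      : radar azimuth (rotation between the two coordinate systems), in radians
    xc, yc : calibrated abscissa and ordinate of the radar origin in the global image
    k      : constant conversion coefficient between radar units and global-image pixels
    """
    xr = xc + k * x0 * math.cos(t) - k * y0 * math.sin(t)
    yr = yc + k * x0 * math.sin(t) + k * y0 * math.cos(t)
    return xr, yr
```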
S35, for any moving object, taking the reference-coordinate-system coordinates of that moving object as its first coordinates, and matching, based on those reference-coordinate-system coordinates, the first traffic target corresponding to that moving object, so as to obtain the object type of that moving object from the matched first traffic target. A specific application is described on the basis of the foregoing example: assume that the moving objects are a moving object D1, a moving object D2, a moving object E1 and a moving object F1, and that the reference-coordinate-system coordinates corresponding to the motor vehicle A1 among the first traffic targets are the same as the reference-coordinate-system coordinates corresponding to the moving object D2; then the moving object D2 and the motor vehicle A1 are the same object, i.e. the object type of the moving object D2 is motor vehicle (it is the motor vehicle A1). The type matching of the remaining moving objects proceeds in the same way and is not repeated here.
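For illustration, the matching in step S35 may be, but is not limited to being, carried out as a nearest-neighbour lookup in the reference coordinate system, as sketched below; the distance tolerance is an illustrative assumption, since the text only requires the two sets of coordinates to coincide:

```python
import math

def match_object_types(moving_objects_ref, traffic_targets_ref, tol=1.0):
    """Assign each radar-detected moving object the type of the nearest traffic target.

    moving_objects_ref : list of (x, y) reference-coordinate-system coordinates of moving objects
    traffic_targets_ref: list of (class_name, (x, y)) pairs from the image-recognition branch
    Returns one class name (or None if nothing is close enough) per moving object.
    """
    types = []
    for mx, my in moving_objects_ref:
        best_name, best_d = None, float("inf")
        for class_name, (tx, ty) in traffic_targets_ref:
            d = math.hypot(mx - tx, my - ty)
            if d < best_d:
                best_name, best_d = class_name, d
        types.append(best_name if best_d <= tol else None)
    return types
```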
Meanwhile, the type matching and coordinate calibration process of each moving object in the traffic data corresponding to the second moment can be referred to in the foregoing steps S31 to S35, and will not be described herein.
After the types of the moving objects in the traffic data corresponding to the first moment and the moving objects in the traffic data corresponding to the second moment are matched, the motor vehicles, the non-motor vehicles and the pedestrians in the traffic data corresponding to the first moment and the second moment can be obtained, so that the position prediction of each motor vehicle, each non-motor vehicle and the pedestrians can be performed later, wherein the position prediction is shown in the following step S5.
S5, according to the object type, the first coordinates and the speed of each moving object in the traffic data corresponding to the first moment, and the object type, the second coordinates and the speed of each moving object in the traffic data corresponding to the second moment, obtaining the movement directions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area and the positions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area at the preset moment. In a specific application, the following steps S51 to S54 are used to obtain the movement directions of each non-motor vehicle, each motor vehicle and each pedestrian, and their positions at the preset moment.
S51, according to the object type and the first coordinates of each moving object in the traffic data corresponding to the first moment and the object type and the second coordinates of each moving object in the traffic data corresponding to the second moment, obtaining the movement directions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area. In a specific application, for any motor vehicle, the first coordinate corresponding to the first moment is taken as the starting point and the second coordinate corresponding to the second moment as the end point, a line is drawn between them in the global image, and the direction indicated by this line is the movement direction of that motor vehicle. Furthermore, since the first and second coordinates of the motor vehicle are known, specific parameters of its movement direction can be obtained, such as the angle with the x-axis and the angle with the y-axis in the global image. The movement directions of the remaining motor vehicles, non-motor vehicles and pedestrians are obtained on the same principle and are not repeated here.
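A minimal sketch of this direction computation, expressing the movement direction as a single angle with the x-axis of the reference coordinate system (representing the direction as one angle is an illustrative choice):

```python
import math

def movement_direction(first_xy, second_xy):
    """Heading angle, in radians from the x-axis of the reference coordinate system,
    of the line from the first coordinate to the second coordinate."""
    dx = second_xy[0] - first_xy[0]
    dy = second_xy[1] - first_xy[1]
    return math.atan2(dy, dx)
```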
S52, obtaining the acceleration of each non-motor vehicle, each motor vehicle and each pedestrian in the target area based on the object type and speed of each moving object in the traffic data corresponding to the first moment and the object type and speed of each moving object in the traffic data corresponding to the second moment. In a specific application, the acceleration is calculated as a = (V2 − V1)/(t2 − t1). Since the object types of the moving objects are known in the traffic data corresponding to the first and second moments, the same motor vehicle, non-motor vehicle or pedestrian can be identified at both moments; its speed V1 at the first moment and speed V2 at the second moment are known, and the time difference between the first moment t1 and the second moment t2 is also known, so the accelerations of each non-motor vehicle, each motor vehicle and each pedestrian can be calculated from the above formula.
After the accelerations of each non-motor vehicle, each motor vehicle, and each pedestrian are obtained, the displacements thereof at the preset time can be obtained by using the accelerations, thereby realizing the prediction of the positions, as shown in step S53 below.
S53, according to the acceleration of each non-motor vehicle, each motor vehicle and each pedestrian and their speeds at the second moment, the displacement of each non-motor vehicle, each motor vehicle and each pedestrian from the second moment to the preset moment is obtained; in a specific application, the preset moment is, for example, 2 s, 4 s or 5 s after the second moment and can be defined according to the actual early warning requirement; thus, for any motor vehicle, its speed at the second moment is taken as the initial speed and its displacement is then computed from its calculated acceleration using a displacement formula (for uniform acceleration, s = V2·Δt + a·Δt²/2, where Δt is the time from the second moment to the preset moment); the displacements of the remaining motor vehicles, non-motor vehicles and pedestrians are computed in the same way and are not repeated here.
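A minimal sketch combining the acceleration of step S52 with the displacement of step S53, under the assumption of uniform acceleration over the prediction horizon (all names are illustrative):

```python
def predict_displacement(v1, v2, t1, t2, dt_preset):
    """Acceleration from the two timestamped speeds (step S52) and the
    uniformly accelerated displacement from the second moment to the
    preset moment (step S53)."""
    a = (v2 - v1) / (t2 - t1)                      # a = (V2 - V1) / (t2 - t1)
    s = v2 * dt_preset + 0.5 * a * dt_preset ** 2  # initial speed = speed at the second moment
    return a, s
```

For example, with V1 = 5 m/s at t1 = 0 s, V2 = 6 m/s at t2 = 1 s and a 2 s horizon, predict_displacement(5.0, 6.0, 0.0, 1.0, 2.0) returns an acceleration of 1 m/s² and a displacement of 14 m.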
After the displacement of each non-motor vehicle, each motor vehicle and each pedestrian is obtained, its corresponding position can be determined in the global image from its movement direction, as shown in step S54 below.
S54, based on the displacement, second coordinates and movement direction of each non-motor vehicle, each motor vehicle and each pedestrian, the third coordinates of each non-motor vehicle, each motor vehicle and each pedestrian in the reference coordinate system at the preset moment (the third moment) are obtained, so that their positions are obtained from these third coordinates; in a specific application, a proportional relation between actual displacement and a single pixel in the global image (for example, 1 m corresponds to 1 pixel) can be set in the edge computing terminal; thus, for any motor vehicle, starting from its second coordinate and moving along its movement direction by a pixel distance L (where L is obtained from that vehicle's displacement and the displacement-to-pixel ratio), the resulting coordinate in the global image can be determined, i.e. the third coordinate of that motor vehicle, thereby realizing the prediction of its position; the third coordinates of the remaining non-motor vehicles, motor vehicles and pedestrians are obtained on the same principle and are not repeated here.
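A minimal sketch of this position prediction, assuming the heading angle from step S51 and a configurable metres-per-pixel scale (both names are illustrative assumptions):

```python
import math

def predict_third_coordinate(second_xy, displacement_m, heading_deg, meters_per_pixel=1.0):
    """Step S54 sketch: convert the physical displacement into a pixel distance L
    via the preset scale, then step from the second coordinate along the
    movement direction in the global image."""
    L = displacement_m / meters_per_pixel
    theta = math.radians(heading_deg)   # included angle with the x-axis
    x3 = second_xy[0] + L * math.cos(theta)
    y3 = second_xy[1] + L * math.sin(theta)
    return (x3, y3)                     # predicted third coordinate in the reference coordinate system
```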
After the positions and movement directions of the non-motor vehicles, motor vehicles and pedestrians are obtained, safety early warning of the non-motor vehicles can be performed, as shown in the following step S6.
S6, for any non-motor vehicle in the target area, safety early warning information of that non-motor vehicle is generated based on its position and movement direction, the position and movement direction of each motor vehicle, and the position of each pedestrian; in a specific application, for any non-motor vehicle, the safety early warning process is shown in the following steps S61 to S65.
S61, taking the movement direction of each motor vehicle as the calibration direction, judging whether the movement direction of the non-motor vehicle is the same as the calibration direction.
S62, if not, judging that the non-motor vehicle is in a reverse running state, and generating reverse running early warning information; and
S63, judging whether the third coordinate of the non-motor vehicle is the same as the third coordinate of any motor vehicle and/or the third coordinate of any pedestrian.
S64, if they are the same, generating collision early warning information.
S65, using the reverse running early warning information and/or the collision early warning information to form the safety early warning information.
In a specific application, motor vehicles generally do not drive against the direction of traffic, so this embodiment takes the movement direction of each motor vehicle as the calibration direction in order to judge whether a non-motor vehicle is travelling the wrong way; meanwhile, if the third coordinate of a non-motor vehicle coincides with that of a motor vehicle and/or a pedestrian, a collision is indicated, and collision early warning information can therefore be generated.
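A minimal sketch of steps S61 to S65 for one non-motor vehicle; the calibration direction is taken here as the circular mean of the motor-vehicle headings in the lane, and the angle and distance tolerances are illustrative assumptions rather than values from this embodiment:

```python
import math

def angle_diff(a, b):
    """Smallest absolute difference between two headings, in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def safety_warnings(nmv, motor_vehicles, pedestrians, angle_tol_deg=30.0, dist_tol_px=1.0):
    """Each object is assumed to be a dict with 'heading' (degrees, step S51)
    and 'third_xy' (predicted coordinate, step S54)."""
    warnings = []

    # S61/S62: wrong-way check against the calibration direction
    if motor_vehicles:
        calib = math.degrees(math.atan2(
            sum(math.sin(math.radians(mv["heading"])) for mv in motor_vehicles),
            sum(math.cos(math.radians(mv["heading"])) for mv in motor_vehicles)))
        if angle_diff(nmv["heading"], calib) > angle_tol_deg:
            warnings.append("reverse running warning")

    # S63/S64: coinciding predicted third coordinates indicate a possible collision
    for other in motor_vehicles + pedestrians:
        dx = nmv["third_xy"][0] - other["third_xy"][0]
        dy = nmv["third_xy"][1] - other["third_xy"][1]
        if math.hypot(dx, dy) <= dist_tol_px:
            warnings.append("collision warning")
            break

    # S65: the collected items form the safety early warning information
    return warnings
```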
Meanwhile, this embodiment can also realize red light running early warning for any non-motor vehicle; the early warning process is shown in the following steps S66 to S69.
S66, acquiring a calibration image and the phase information of the signal lamp corresponding to the lane, wherein the calibration image covers the road intersection and the lane, and the coordinate system corresponding to the calibration image is also the reference coordinate system; in a specific application, the calibration image is a top-view image of the whole road intersection and the lane, and the phase information of the signal lamp corresponding to the lane can be acquired from a signal controller connected to the signal lamp.
S67, judging whether the third coordinate of the non-motor vehicle falls within a calibration area, wherein the calibration area is the region of the calibration image that represents only the road intersection; in a specific application, since the coordinate system corresponding to the calibration image is also the reference coordinate system, the calibration image is equivalent to an extension of the global image that covers it, so the third coordinate of the non-motor vehicle can be applied directly to the calibration image; when the third coordinate falls within the calibration area, the non-motor vehicle will be at the road intersection at the preset moment; at this point, only the phase information of the signal lamp needs to be judged in order to determine whether the non-motor vehicle runs a red light, as shown in the following step S68.
S68, if so, judging whether the phase information is red light information; in this embodiment, the phase information is, for example, the phase information at the preset moment; since the signal lamp changes phase on a fixed schedule, the phase information at the preset moment can be read directly from the signal controller.
After the phase information is obtained, if it is the red light phase, the non-motor vehicle will run the red light; otherwise it will not. Where red light running behavior exists, red light running early warning information needs to be generated, as shown in the following step S69.
S69, if yes, red light running early warning information is generated, and the red light running early warning information is used as the safety early warning information.
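A minimal sketch of steps S66 to S69; the point-in-polygon test over the calibration area and the "red" phase label are illustrative assumptions:

```python
def red_light_warning(third_xy, calibration_area, phase_at_preset_moment):
    """Generate a red light running warning if the predicted third coordinate
    lies inside the calibration area (the intersection region of the
    calibration image) while the signal phase at the preset moment is red."""
    def inside(pt, poly):
        # ray-casting point-in-polygon test over the calibration-area outline
        x, y = pt
        hit = False
        for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
            if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                hit = not hit
        return hit

    if inside(third_xy, calibration_area) and phase_at_preset_moment == "red":
        return "red light running warning"
    return None
```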
In this embodiment, the safety early warning process of every other non-motor vehicle is the same as that of the foregoing non-motor vehicle and is not repeated here.
In addition, this embodiment can judge whether each non-motor vehicle is speeding based on its speed at the second moment, so as to realize overspeed early warning, and can identify the number of riders on each non-motor vehicle from the corresponding two-dimensional image at the second moment, so as to judge whether it is overloaded and realize overload early warning.
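As a sketch of these additional checks, assuming an illustrative speed limit and rider capacity (neither value is specified in this embodiment):

```python
def extra_warnings(speed_at_second_moment, rider_count, speed_limit=5.56, max_riders=1):
    """Overspeed check on the second-moment speed (m/s) and overload check on
    the rider count recognized from the two-dimensional image."""
    warnings = []
    if speed_at_second_moment > speed_limit:   # e.g. 5.56 m/s ≈ 20 km/h, an assumed limit
        warnings.append("overspeed warning")
    if rider_count > max_riders:               # assumed single-rider capacity
        warnings.append("overload warning")
    return warnings
```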
After the safety early warning information of each non-motor vehicle is obtained, it can be sent to the safety helmet matched with that non-motor vehicle so as to give a warning prompt, as shown in the following step S7.
S7, the safety early warning information is sent to the safety helmet corresponding to the non-motor vehicle, so that the early warning device in the safety helmet issues a warning prompt based on the safety early warning information; in a specific application, the early warning device of the example safety helmet is preset with voice prompts corresponding to the different kinds of safety early warning information, so that after the corresponding safety early warning information is received, the matching voice prompt can be played to the rider; for example, the voice prompt corresponding to the reverse running early warning information may be, but is not limited to, "You are going the wrong way, please pay attention to traffic safety"; the voice prompt corresponding to the collision early warning information may be, but is not limited to, "You may collide with a motor vehicle, please slow down or pull over"; the voice prompt corresponding to the red light running early warning information, like those of the remaining safety early warning information, is likewise a preset voice message and is not detailed here.
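As an illustration only, the preset mapping from early warning type to voice prompt might be stored in the helmet's early warning device as a simple lookup table; the keys and wording below are taken from the examples above and are otherwise assumptions:

```python
# Illustrative lookup table of preset voice prompts in the helmet's warning device.
VOICE_PROMPTS = {
    "reverse running warning": "You are going the wrong way, please pay attention to traffic safety.",
    "collision warning": "You may collide with a motor vehicle, please slow down or pull over.",
}

def prompt_for(warning_type):
    """Return the preset voice prompt for a received warning type, if any."""
    return VOICE_PROMPTS.get(warning_type)
```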
In this embodiment, the communication frequency of the internal early warning device may be printed as text on the surface of the example safety helmet, so that when the two-dimensional image is recognized, the communication frequency of each safety helmet is also identified; this allows subsequent one-to-one communication with each safety helmet and hence the targeted transmission of the safety early warning information of each non-motor vehicle, ensuring the accuracy of the warnings.
The safety early warning method for a non-motor vehicle described above can detect the traffic data of traffic participants and perform fusion analysis and calculation on that data, so as to identify the object information of the traffic participants and predict their movement directions and positions; safety early warning of a non-motor vehicle can therefore be performed based on the movement directions and predicted positions of the traffic participants, and the generated safety early warning information is sent to the safety helmet matched with that non-motor vehicle; after receiving the safety early warning information, the safety helmet gives the rider a traffic safety warning prompt, thereby achieving the aim of reducing the probability of safety accidents.
As shown in fig. 2, a second aspect of the present embodiment provides a hardware device for implementing the safety precaution method of the non-motor vehicle according to the first aspect of the present embodiment, including:
The acquisition unit is used for acquiring traffic data of a target area in any lane at a road intersection at a first moment and a second moment, wherein the traffic data comprise the speed and position coordinates of each moving object in the target area and a two-dimensional image of the target area, the two-dimensional image is shot by a camera at the road intersection, the position coordinates of any moving object are radar coordinates, and any moving object is a motor vehicle, a non-motor vehicle or a pedestrian.
The image recognition unit is used for carrying out image recognition on the two-dimensional image corresponding to the first moment and the two-dimensional image corresponding to the second moment to respectively obtain a first traffic target and a second traffic target, wherein the first traffic target is each traffic participant in the two-dimensional image corresponding to the first moment, the second traffic target is each traffic participant in the two-dimensional image corresponding to the second moment, and any traffic participant is a motor vehicle, a non-motor vehicle or a pedestrian.
The matching unit is used for carrying out coordinate calibration and type matching on each moving object in the traffic data corresponding to the first moment based on the first traffic target, the position coordinates of each moving object at the first moment and the two-dimensional image corresponding to the first moment, and respectively obtaining the first coordinates of each moving object in the reference coordinate system and the object types of each moving object in the traffic data corresponding to the first moment.
And the matching unit is also used for carrying out coordinate calibration and type matching on each moving object in the traffic data corresponding to the second moment based on the second traffic target, the position coordinates of each moving object at the second moment and the two-dimensional image corresponding to the second moment, and respectively obtaining the second coordinates of each moving object in the reference coordinate system and the object types of each moving object in the traffic data corresponding to the second moment.
The position prediction unit is used for obtaining, according to the object type, first coordinates and speed of each moving object in the traffic data corresponding to the first moment and the object type, second coordinates and speed of each moving object in the traffic data corresponding to the second moment, the movement direction of each non-motor vehicle, each motor vehicle and each pedestrian in the target area, as well as their positions at the preset moment.
The early warning unit is used for generating safety early warning information of any non-motor vehicle in the target area based on the position of the any non-motor vehicle, the movement direction of the any non-motor vehicle, the position of each motor vehicle, the movement direction of each motor vehicle and the position of each pedestrian.
The sending unit is used for sending the safety early warning information to the safety helmet corresponding to the non-motor vehicle, so that the early warning device in the safety helmet issues a warning prompt based on the safety early warning information.
The working process, working details and technical effects of the device provided in this embodiment may refer to the first aspect of the embodiment, and are not described herein again.
As shown in fig. 3, 4 and 5, a third aspect of the present embodiment provides a hardware system for implementing the safety warning method of the non-motor vehicle according to the first aspect of the present embodiment, where the hardware system includes a front end sensing terminal, an edge computing terminal and a safety helmet, where the safety helmet is configured on the non-motor vehicle, and the safety helmet is built with a warning device.
The front end sensing terminal is used for collecting traffic data of a target area in any lane at a road intersection at a first moment and a second moment, and sending the traffic data at the first moment and the second moment to the edge computing terminal, wherein the traffic data comprises speeds and position coordinates of all moving objects in the target area and two-dimensional images of the target area, the two-dimensional images are shot by a camera at the road intersection, the position coordinates of any moving object are radar coordinates, and the any moving object is a motor vehicle, a non-motor vehicle or a pedestrian.
The edge computing terminal is used for receiving the traffic data at the first moment and the second moment, and for generating the safety early warning information of any non-motor vehicle in the target area by using the safety early warning method of the non-motor vehicle described in the first aspect of this embodiment.
The edge computing terminal is also used for sending the safety early warning information to a safety helmet corresponding to any non-motor vehicle;
the safety helmet is used for sending out early warning prompts based on the safety early warning information.
In this embodiment, the edge computing terminal is communicatively connected to the safety helmet through an RSU roadside unit, as shown in fig. 5, and a V2X communication protocol is adopted between the RSU roadside unit and the safety helmet; referring to fig. 4, the example safety helmet 10 includes a controller, a V2X communication module 20, a speaker 30, an antenna 40, a rechargeable battery 50 and a charging interface 60, wherein the rechargeable battery is disposed in an interlayer of the safety helmet to supply power to the V2X communication module and the speaker, and the charging interface is disposed on the outside of the safety helmet to charge the rechargeable battery.
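For illustration only, the payload that the edge computing terminal hands to the RSU roadside unit for a particular helmet might be sketched as follows; the field names and JSON encoding are assumptions and not part of the described system or the V2X protocol:

```python
import json
import time

def build_warning_message(helmet_id, warnings):
    """Assemble an illustrative per-helmet payload; `helmet_id` could be the
    communication frequency read from the text on the helmet surface."""
    return json.dumps({
        "helmet_id": helmet_id,
        "timestamp": time.time(),
        "warnings": warnings,   # e.g. ["reverse running warning", "collision warning"]
    })
```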
As shown in fig. 6, a fourth aspect of the present embodiment provides another safety early warning device for a non-motor vehicle, taking the device as an electronic device, which comprises a memory, a processor and a transceiver that are communicatively connected in sequence, wherein the memory is used for storing a computer program, the transceiver is used for receiving and transmitting messages, and the processor is used for reading the computer program and executing the safety early warning method of the non-motor vehicle according to the first aspect of this embodiment.
By way of specific example, the memory may include, but is not limited to, random access memory (RAM), read-only memory (ROM), flash memory, first-in-first-out memory (FIFO) and/or first-in-last-out memory (FILO); the processor may include one or more processing cores, such as a 4-core or 8-core processor, and may be implemented in at least one hardware form of DSP (digital signal processing), FPGA (field-programmable gate array) or PLA (programmable logic array); it may also include a main processor and a coprocessor, where the main processor, also called the CPU (central processing unit), processes data in the awake state, and the coprocessor is a low-power processor for processing data in the standby state.
In some embodiments, the processor may be integrated with a GPU (graphics processing unit) responsible for rendering the content to be shown on the display screen; for example, the processor may be, but is not limited to, a microprocessor of the STM32F105 family, a reduced instruction set computer (RISC) microprocessor, an X86 or other architecture processor, or a processor integrating an embedded neural-network processing unit (NPU); the transceiver may be, but is not limited to, a wireless fidelity (WiFi) transceiver, a Bluetooth transceiver, a general packet radio service (GPRS) transceiver, a ZigBee transceiver (a low-power local area network protocol based on the IEEE 802.15.4 standard), a 3G transceiver, a 4G transceiver and/or a 5G transceiver. In addition, the device may include, but is not limited to, a power module, a display screen and other necessary components.
The working process, working details and technical effects of the electronic device provided in this embodiment may refer to the first aspect of the embodiment, and are not described herein again.
A fifth aspect of the present embodiment provides a storage medium storing instructions for the safety precaution method of the non-motor vehicle according to the first aspect, i.e. the storage medium has instructions stored thereon which, when run on a computer, perform the safety precaution method of the non-motor vehicle according to the first aspect.
The storage medium refers to a carrier for storing data, and may include, but is not limited to, a floppy disk, an optical disk, a hard disk, a flash Memory, a flash disk, and/or a Memory Stick (Memory Stick), where the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
The working process, working details and technical effects of the storage medium provided in this embodiment may refer to the first aspect of the embodiment, and are not described herein again.
A sixth aspect of the present embodiment provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of safety precaution of a non-motor vehicle according to the first aspect of the embodiment, wherein the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
Finally, it should be noted that: the foregoing description is only of the preferred embodiments of the invention and is not intended to limit the scope of the invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. A safety precaution method for a non-motor vehicle, characterized in that it is applied to a non-motor vehicle equipped with a safety helmet, wherein the safety helmet has a precaution device built in, and the method comprises:
acquiring traffic data of a target area in any lane at a road intersection at a first moment and a second moment, wherein the traffic data comprises speeds and position coordinates of all moving objects in the target area and two-dimensional images of the target area, the two-dimensional images are shot by a camera at the road intersection, the position coordinates of any moving object are radar coordinates, and the any moving object is a motor vehicle, a non-motor vehicle or a pedestrian;
performing image recognition on the two-dimensional image corresponding to the first moment and the two-dimensional image corresponding to the second moment to respectively obtain a first traffic target and a second traffic target, wherein the first traffic target is each traffic participant in the two-dimensional image corresponding to the first moment, the second traffic target is each traffic participant in the two-dimensional image corresponding to the second moment, and any traffic participant is a motor vehicle, a non-motor vehicle or a pedestrian;
Based on the first traffic target, the position coordinates of each moving object at the first moment and the two-dimensional image corresponding to the first moment, carrying out coordinate calibration and type matching on each moving object in traffic data corresponding to the first moment to respectively obtain first coordinates of each moving object in a reference coordinate system and object types of each moving object in the traffic data corresponding to the first moment;
based on the second traffic target, the position coordinates of each moving object at the second moment and the two-dimensional image corresponding to the second moment, carrying out coordinate calibration and type matching on each moving object in traffic data corresponding to the second moment, and respectively obtaining second coordinates of each moving object in a reference coordinate system and object types of each moving object in the traffic data corresponding to the second moment;
according to the object type, the first coordinates and the speed of each moving object in the traffic data corresponding to the first moment and the object type, the second coordinates and the speed of each moving object in the traffic data corresponding to the second moment, the moving directions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area and the positions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area are obtained;
For any non-motor vehicle in the target area, generating safety pre-warning information of the any non-motor vehicle based on the position of the any non-motor vehicle, the movement direction of the any non-motor vehicle, the position of each motor vehicle, the movement direction of each motor vehicle and the position of each pedestrian;
the safety early warning information is sent to the safety helmet corresponding to the any non-motor vehicle, so that an early warning device in the safety helmet sends out an early warning prompt based on the safety early warning information;
according to the object type, the first coordinates and the speed of each moving object in the traffic data corresponding to the first moment and the object type, the second coordinates and the speed of each moving object in the traffic data corresponding to the second moment, the moving directions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area and the positions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area at the preset moment are obtained, and the method comprises the following steps:
according to the object type and the first coordinates of each moving object in the traffic data corresponding to the first moment and the object type and the second coordinates of each moving object in the traffic data corresponding to the second moment, the moving directions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area are obtained;
Based on the object type and the speed of each moving object in the traffic data corresponding to the first moment and the object type and the speed of each moving object in the traffic data corresponding to the second moment, the acceleration of each non-motor vehicle, each motor vehicle and each pedestrian in the target area is obtained;
according to the acceleration of each non-motor vehicle, each motor vehicle and each pedestrian and the speed of each non-motor vehicle, each motor vehicle and each pedestrian at a second moment, obtaining the displacement of each non-motor vehicle, each motor vehicle and each pedestrian from the second moment to the preset moment;
obtaining third coordinates of each non-motor vehicle, each motor vehicle and each pedestrian under the reference coordinate system at a third moment based on the displacement, the second coordinates and the movement direction of each non-motor vehicle, each motor vehicle and each pedestrian, so as to obtain the positions of each non-motor vehicle, each motor vehicle and each pedestrian based on the third coordinates of each non-motor vehicle, each motor vehicle and each pedestrian;
generating safety warning information of any non-motor vehicle based on the position of the any non-motor vehicle, the movement direction of the any non-motor vehicle, the position of each motor vehicle, the movement direction of each motor vehicle and the position of each pedestrian, comprising:
Taking the motion direction of each motor vehicle as a calibration direction, and judging whether the motion direction of any non-motor vehicle is the same as the calibration direction;
if not, judging that any non-motor vehicle is in a retrograde state, and generating retrograde early warning information; and
judging whether the third coordinate of any non-motor vehicle is the same as the third coordinate of any motor vehicle and/or the third coordinate of any pedestrian;
if they are the same, collision pre-warning information is generated;
taking the retrograde early warning information and/or the collision early warning information as the safety early warning information; or
Generating safety warning information of any non-motor vehicle based on the position of the any non-motor vehicle, the movement direction of the any non-motor vehicle, the position of each motor vehicle, the movement direction of each motor vehicle and the position of each pedestrian, comprising:
acquiring a calibration image and phase information of signal lamps corresponding to any lane, wherein the calibration image covers the road intersection and any lane, and a coordinate system corresponding to the calibration image is also the reference coordinate system;
judging whether the third coordinate of any non-motor vehicle is in a calibration area, wherein the calibration area is an area which only represents a road intersection in the calibration image;
If yes, judging whether the phase information is red light information or not;
if yes, red light running early warning information is generated, and the red light running early warning information is used as the safety early warning information.
2. The method according to claim 1, wherein performing coordinate calibration and type matching on each moving object in the traffic data corresponding to the first time based on the first traffic target, the position coordinates of each moving object corresponding to the first time and the two-dimensional image corresponding to the first time, to obtain the first coordinates of each moving object in the reference coordinate system in the traffic data corresponding to the first time, and the object types of each moving object, respectively, includes:
acquiring a global image of the target area at a first moment, wherein the global image is a top view image of the target area;
calculating a coordinate conversion matrix between each pixel point coordinate in a target image and each pixel point coordinate in the global image based on the global image, wherein a coordinate system corresponding to the global image is the reference coordinate system, and the target image is a two-dimensional image corresponding to a first moment;
acquiring image position coordinates corresponding to each first traffic target in a target image based on the first traffic targets, and converting the image position coordinates of each first traffic target into reference coordinate system coordinates based on the coordinate conversion matrix;
Acquiring a coordinate conversion relation between a radar coordinate system and a global image corresponding coordinate system, and converting the position coordinates of each moving object in the corresponding traffic data at the first moment into reference coordinate system coordinates based on the coordinate conversion relation;
for any moving object, the reference coordinate system coordinate of the any moving object is used as the first coordinate of the any moving object, and the first traffic target corresponding to the any moving object is obtained based on the reference coordinate system coordinate of the any moving object in a matching mode, so that the object type of the any moving object is obtained based on the first traffic target corresponding to the any moving object.
3. A safety warning device for a non-motor vehicle, comprising:
an acquisition unit, used for acquiring traffic data of a target area in any lane at a road intersection at a first moment and a second moment, wherein the traffic data comprises speeds and position coordinates of all moving objects in the target area and two-dimensional images of the target area, the two-dimensional images are shot by a camera at the road intersection, the position coordinates of any moving object are radar coordinates, and the any moving object is a motor vehicle, a non-motor vehicle or a pedestrian;
an image recognition unit, used for performing image recognition on the two-dimensional image corresponding to the first moment and the two-dimensional image corresponding to the second moment to respectively obtain a first traffic target and a second traffic target, wherein the first traffic target is each traffic participant in the two-dimensional image corresponding to the first moment, the second traffic target is each traffic participant in the two-dimensional image corresponding to the second moment, and any traffic participant is a motor vehicle, a non-motor vehicle or a pedestrian;
the matching unit is used for carrying out coordinate calibration and type matching on each moving object in the traffic data corresponding to the first moment based on the first traffic target, the position coordinates of each moving object at the first moment and the two-dimensional image corresponding to the first moment, and respectively obtaining the first coordinates of each moving object in the reference coordinate system and the object types of each moving object in the traffic data corresponding to the first moment;
the matching unit is further used for carrying out coordinate calibration and type matching on each moving object in the traffic data corresponding to the second moment based on the second traffic target, the position coordinates of each moving object at the second moment and the two-dimensional image corresponding to the second moment, so as to respectively obtain second coordinates of each moving object in the reference coordinate system and object types of each moving object in the traffic data corresponding to the second moment;
The position prediction unit is used for obtaining, according to the object type, first coordinates and speed of each moving object in the traffic data corresponding to the first moment and the object type, second coordinates and speed of each moving object in the traffic data corresponding to the second moment, the movement direction of each non-motor vehicle, each motor vehicle and each pedestrian in the target area, as well as their positions at the preset moment;
the early warning unit is used for generating safety early warning information of any non-motor vehicle in the target area based on the position of the any non-motor vehicle, the movement direction of the any non-motor vehicle, the position of each motor vehicle, the movement direction of each motor vehicle and the position of each pedestrian;
the sending unit is used for sending the safety early warning information to the safety helmet corresponding to the any non-motor vehicle, so that an early warning device in the safety helmet sends out an early warning prompt based on the safety early warning information;
according to the object type, the first coordinates and the speed of each moving object in the traffic data corresponding to the first moment and the object type, the second coordinates and the speed of each moving object in the traffic data corresponding to the second moment, the moving directions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area and the positions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area at the preset moment are obtained, and the method comprises the following steps:
According to the object type and the first coordinates of each moving object in the traffic data corresponding to the first moment and the object type and the second coordinates of each moving object in the traffic data corresponding to the second moment, the moving directions of each non-motor vehicle, each motor vehicle and each pedestrian in the target area are obtained;
based on the object type and the speed of each moving object in the traffic data corresponding to the first moment and the object type and the speed of each moving object in the traffic data corresponding to the second moment, the acceleration of each non-motor vehicle, each motor vehicle and each pedestrian in the target area is obtained;
according to the acceleration of each non-motor vehicle, each motor vehicle and each pedestrian and the speed of each non-motor vehicle, each motor vehicle and each pedestrian at a second moment, obtaining the displacement of each non-motor vehicle, each motor vehicle and each pedestrian from the second moment to the preset moment;
obtaining third coordinates of each non-motor vehicle, each motor vehicle and each pedestrian under the reference coordinate system at a third moment based on the displacement, the second coordinates and the movement direction of each non-motor vehicle, each motor vehicle and each pedestrian, so as to obtain the positions of each non-motor vehicle, each motor vehicle and each pedestrian based on the third coordinates of each non-motor vehicle, each motor vehicle and each pedestrian;
Generating safety warning information of any non-motor vehicle based on the position of the any non-motor vehicle, the movement direction of the any non-motor vehicle, the position of each motor vehicle, the movement direction of each motor vehicle and the position of each pedestrian, comprising:
taking the motion direction of each motor vehicle as a calibration direction, and judging whether the motion direction of any non-motor vehicle is the same as the calibration direction;
if not, judging that any non-motor vehicle is in a retrograde state, and generating retrograde early warning information; and
judging whether the third coordinate of any non-motor vehicle is the same as the third coordinate of any motor vehicle and/or the third coordinate of any pedestrian;
if they are the same, collision pre-warning information is generated;
taking the retrograde early warning information and/or the collision early warning information as the safety early warning information; or
Generating safety warning information of any non-motor vehicle based on the position of the any non-motor vehicle, the movement direction of the any non-motor vehicle, the position of each motor vehicle, the movement direction of each motor vehicle and the position of each pedestrian, comprising:
acquiring a calibration image and phase information of signal lamps corresponding to any lane, wherein the calibration image covers the road intersection and any lane, and a coordinate system corresponding to the calibration image is also the reference coordinate system;
Judging whether the third coordinate of any non-motor vehicle is in a calibration area, wherein the calibration area is an area which only represents a road intersection in the calibration image;
if yes, judging whether the phase information is red light information or not;
if yes, red light running early warning information is generated, and the red light running early warning information is used as the safety early warning information.
4. The safety early warning system of the non-motor vehicle is characterized by comprising a front end sensing terminal, an edge computing terminal and a safety helmet, wherein the safety helmet is arranged on the non-motor vehicle, and an early warning device is arranged in the safety helmet;
the front-end sensing terminal is used for collecting traffic data of a target area in any lane at a road intersection at a first moment and a second moment, and sending the traffic data of the target area at the first moment and the second moment to the edge computing terminal, wherein the traffic data comprises speeds and position coordinates of all moving objects in the target area and two-dimensional images of the target area, the two-dimensional images are shot by a camera at the road intersection, the position coordinates of any moving object are radar coordinates, and the any moving object is a motor vehicle, a non-motor vehicle or a pedestrian;
The edge computing terminal is used for receiving traffic data at a first moment and a second moment and generating safety early warning information of any non-motor vehicle in a target area by using the safety early warning method of the non-motor vehicle according to any one of claims 1-2;
the edge computing terminal is also used for sending the safety early warning information to a safety helmet corresponding to any non-motor vehicle;
the safety helmet is used for sending out early warning prompts based on the safety early warning information.
5. The system of claim 4, wherein the edge computing terminal is communicatively coupled to the safety helmet using an RSU roadside unit, and wherein a V2X communication protocol is employed between the RSU roadside unit and the safety helmet.
6. An electronic device, comprising: the safety precaution system comprises a memory, a processor and a transceiver which are connected in sequence in communication, wherein the memory is used for storing a computer program, the transceiver is used for receiving and transmitting messages, and the processor is used for reading the computer program and executing the safety precaution method of the non-motor vehicle according to any one of claims 1-2.
7. A storage medium having instructions stored thereon which, when executed on a computer, perform the non-motor vehicle safety warning method of any one of claims 1 to 2.
CN202211249750.3A 2022-10-12 2022-10-12 Safety early warning method, device, system, equipment and medium for non-motor vehicle Active CN115571152B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211249750.3A CN115571152B (en) 2022-10-12 2022-10-12 Safety early warning method, device, system, equipment and medium for non-motor vehicle


Publications (2)

Publication Number Publication Date
CN115571152A CN115571152A (en) 2023-01-06
CN115571152B true CN115571152B (en) 2023-06-06

Family

ID=84585778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211249750.3A Active CN115571152B (en) 2022-10-12 2022-10-12 Safety early warning method, device, system, equipment and medium for non-motor vehicle

Country Status (1)

Country Link
CN (1) CN115571152B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584571A (en) * 2019-01-16 2019-04-05 苏州齐思智行汽车***有限公司 Intersection pre-warning and control method and system and sensing device used
CN111210662A (en) * 2020-03-04 2020-05-29 五邑大学 Intersection safety early warning system and method based on machine vision and DSRC
CN112712733A (en) * 2020-12-23 2021-04-27 交通运输部公路科学研究所 Vehicle-road cooperation-based collision early warning method and system and road side unit
CN113112805A (en) * 2021-04-16 2021-07-13 吉林大学 Intersection monitoring and early warning method based on base station communication and intersection camera positioning
CN113345267A (en) * 2021-06-03 2021-09-03 招商局检测车辆技术研究院有限公司 Crossing approaching signal area early warning method and system based on generalized V2X
CN114782548A (en) * 2022-04-20 2022-07-22 深圳市旗扬特种装备技术工程有限公司 Global image-based radar vision data calibration method, device, equipment and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140176714A1 (en) * 2012-12-26 2014-06-26 Automotive Research & Test Center Collision prevention warning method and device capable of tracking moving object


Also Published As

Publication number Publication date
CN115571152A (en) 2023-01-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231127

Address after: 518000 Guangdong Wutong street, Baoan District, Shenzhen, China. The 5 floor of 13B building, Taihua Indus Industrial Park, Sanwei community

Patentee after: Shenzhen Qiyang special equipment technology Engineering Co.,Ltd.

Patentee after: Shenzhen comprehensive transportation and municipal engineering design and Research Institute Co.,Ltd.

Address before: 518000 Guangdong Wutong street, Baoan District, Shenzhen, China. The 5 floor of 13B building, Taihua Indus Industrial Park, Sanwei community

Patentee before: Shenzhen Qiyang special equipment technology Engineering Co.,Ltd.