CN113326758A - Head-up display technology for remotely controlling driving monitoring video - Google Patents


Info

Publication number
CN113326758A
Authority
CN
China
Prior art keywords
video
path
target
vehicle
augmented reality
Prior art date
Legal status: Pending (assumed; not a legal conclusion)
Application number
CN202110572186.8A
Other languages
Chinese (zh)
Inventor
袁胜
祖超越
高丰
符茂磊
边伟
苏鹏亮
Current Assignee: Qingdao Vehicle Intelligence Pioneers Inc
Original Assignee: Qingdao Vehicle Intelligence Pioneers Inc
Application filed by Qingdao Vehicle Intelligence Pioneers Inc
Priority application: CN202110572186.8A
Publication: CN113326758A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/451 - Execution arrangements for user interfaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection


Abstract

The invention provides an augmented reality head-up display method for remote-control driving monitoring video, which comprises the following steps: marking the positions of obstacle targets in the monitoring video of the controlled vehicle; judging the current operating condition of the controlled vehicle; and superimposing the work path onto the video in augmented reality form according to the current operating condition of the controlled vehicle. The method provides collision prompts and path prompts, improving driving safety and efficiency.

Description

Head-up display technology for remotely controlling driving monitoring video
Technical Field
The invention belongs to the field of image recognition and distance surveying and mapping, and particularly relates to a head-up display technology for remote-control driving monitoring video.
Background
Among the monitoring information used in remote-control driving of a controlled vehicle, the video of the vehicle's surroundings is the most important part: it lets the remote driver intuitively grasp the scene around the vehicle and the vehicle's pose. However, the images collected by a camera are constrained by the camera's imaging model and the overall display layout, and differ in many ways from what human eyes perceive directly, which makes information judgment difficult, especially judgments of distance and orientation. Furthermore, while remotely driving the controlled vehicle, the driver's primary focus should remain on the camera video. In the currently mainstream approach of embedding a video window into a graphical user interface, most non-video information is placed outside the video area as GUI controls, forcing the driver to shift visual focus away from the video to read it, which affects driving safety. Displaying information about an object at that object's position in the monitoring video, in augmented reality form, therefore markedly reduces the driver's visual-focus shifts and improves driving safety.
Remote-control driving therefore faces the following two problems.
(1) Because video imaging differs from human vision, the remote driver's judgment of collision risks in the vehicle's surroundings, especially regarding other traffic participants, is prone to error.
(2) Other parameterized driving information is displayed outside the video window, so the remote driver's gaze must leave the video area to read it, creating a safety hazard.
In view of this, an augmented reality head-up display method for remote-control driving monitoring is needed to reduce judgment errors and safety hazards during remote-control driving.
Disclosure of Invention
Accordingly, the invention provides an augmented reality head-up display method for remote-control driving monitoring that reduces driving judgment errors and safety hazards.
The disclosed method comprises the following steps:
S1, marking the positions of obstacle targets in the monitoring video of the controlled vehicle;
S2, judging the current operating condition of the controlled vehicle;
S3, superimposing the work path onto the video in augmented reality form according to the current operating condition of the controlled vehicle.
Further, step S1 specifically comprises:
S101, acquiring target bounding-box information;
S102, acquiring target depth information;
S103, superimposing the target's bounding-box and depth information onto the monitoring video.
Further, step S101 comprises:
S101a, capturing each frame of the monitoring video;
S101b, detecting obstacle targets in each frame and creating axis-aligned bounding boxes;
S101c, calculating the geometric information of each axis-aligned bounding box and associating the box with its obstacle target.
Further, step S102 comprises:
S102a, reading the vehicle-end depth image formed from point cloud data fused with the monitoring video, and down-sampling it;
S102b, sending the down-sampled vehicle-end depth information to the remote-control driving end;
S102c, matching the obstacle target by time and position, and merging the depth information onto its axis-aligned bounding box.
Further, the current operating condition in step S2 is either on the work path or away from the work path.
Further, when the vehicle is away from the work path, step S3 comprises the steps of:
S301a, calculating the distances between the current vehicle position and the end points of all work paths, and selecting the work path closest to the current vehicle;
S301b, down-sampling the path points of the nearest work path;
S301c, selecting the nearest point of the nearest work path as the target point, then calculating the angular difference between the current vehicle and the target point;
S301d, superimposing a pointing image onto the video according to the angular difference calculated in step S301c.
Further, when the vehicle is on the work path, step S3 comprises the steps of:
S302a, selecting several path points ahead of the current vehicle and calculating, in order, their positions relative to the center point of the vehicle front;
S302b, generating a path-line image in the top-view direction;
S302c, superimposing the path-line image onto the video.
Further, the superimposition in steps S301d and S302c is computed with a perspective transformation matrix, which projects the pointing image or path-line image onto the ground plane in the video.
Compared with the prior art, the technical solution of the invention has the following advantages:
(1) The method prompts collision information about traffic participants inside the video in augmented reality form and gives the collision distance, avoiding the judgment errors that conventional methods incur due to the difference between video imaging and human vision.
(2) The method displays the path inside the video in augmented reality form and integrates other important information into the video as a head-up display, so the remote driver's gaze never needs to leave the video area during driving, improving the safety of remote driving.
Drawings
Fig. 1 is a schematic flowchart of step S1 provided by an embodiment of the invention;
Fig. 2 is a schematic flowchart of step S3 provided by an embodiment of the invention;
Fig. 3 is a schematic diagram of the perspective-transformation rectangle provided by an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The augmented reality head-up display method for remote-control driving monitoring video according to this embodiment, as shown in Figs. 1-2, comprises the following steps:
S1, marking the positions of obstacle targets in the monitoring video of the controlled vehicle;
S2, judging the current operating condition of the controlled vehicle;
S3, superimposing the work path onto the video in augmented reality form according to the current operating condition of the controlled vehicle.
Step S1 specifically comprises:
S101, acquiring target bounding-box information;
S102, acquiring target depth information.
In this embodiment, target depth information is acquired by matching the point cloud data to the image using the camera-lidar extrinsic parameters and the camera intrinsic parameters, thereby determining the point cloud corresponding to each target bounding box and obtaining the depth of the detected target. The minimum value among the matched results, that is, the nearest distance, is then taken as the distance of the detected target and marked beside the target bounding box.
S103, superimposing the target's bounding-box and depth information onto the monitoring video.
Step S101 comprises the steps of:
S101a, capturing each frame of the monitoring video;
S101b, detecting obstacle targets in each frame and creating axis-aligned bounding boxes;
S101c, calculating the geometric information of each axis-aligned bounding box and associating the box with its obstacle target.
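The sub-steps above can be sketched as follows. This is an illustrative sketch only: the obstacle detector itself (for example a neural-network detector) is assumed and not shown, and `points` stands for the detected pixel coordinates of one obstacle.

```python
import numpy as np

def axis_aligned_bbox(points):
    """Step S101b: smallest axis-aligned box (x_min, y_min, x_max, y_max)
    enclosing the 2-D pixel coordinates of one detected obstacle."""
    pts = np.asarray(points, dtype=float)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return (x_min, y_min, x_max, y_max)

def bbox_geometry(box):
    """Step S101c: geometric information of the box (center and size)."""
    x_min, y_min, x_max, y_max = box
    center = ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
    size = (x_max - x_min, y_max - y_min)
    return center, size
```

In a full pipeline, the box would then be drawn into the frame (e.g. with OpenCV's rectangle-drawing call) and associated with its target across frames.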
Step S102 comprises the steps of:
S102a, reading the vehicle-end depth image formed from point cloud data fused with the monitoring video, and down-sampling it;
S102b, sending the down-sampled vehicle-end depth information to the remote-control driving end;
S102c, matching the obstacle target by time and position, and merging the depth information onto its axis-aligned bounding box.
Through the sub-steps of step S1, the position and size of each obstacle and its depth information are displayed in the vehicle monitoring video, providing a driving reference for the driver. Note that if the vehicle has no depth camera, this step is not performed and no depth information is marked.
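The depth matching described above can be sketched as follows, a minimal illustration under assumed calibration: `K` is the camera intrinsic matrix and `R`, `t` the lidar-to-camera extrinsics; all names are hypothetical.

```python
import numpy as np

def min_depth_in_bbox(points_lidar, K, R, t, box):
    """Project lidar points into the image via extrinsics (R, t) and
    intrinsics K, keep those falling inside the target's axis-aligned
    bounding box, and return the nearest distance; None if no point
    matches (e.g. no depth data available)."""
    pts = np.asarray(points_lidar, dtype=float)
    pts_cam = (R @ pts.T).T + t              # lidar frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]     # keep points in front of the camera
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]            # perspective division -> pixel coords
    x0, y0, x1, y1 = box
    inside = (uv[:, 0] >= x0) & (uv[:, 0] <= x1) & \
             (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
    if not inside.any():
        return None
    # Nearest matched distance is what gets marked beside the bounding box.
    return float(np.linalg.norm(pts_cam[inside], axis=1).min())
```

When no lidar point projects into the box (or there is no depth data at all), the function returns None, matching the note that the step is skipped when depth is unavailable.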
The current operating condition in step S2 is either on the work path or away from it. When the controlled vehicle is on the work path, the remote driver drives with reference to that path; when the vehicle is away from the work path, the remote driver needs a prompt indicating the direction of the work area. To realize this, the embodiment handles the two cases with different processing methods.
When the vehicle is away from the work path, step S3 comprises the steps of:
S301a, calculating the distances between the current vehicle position and the end points of all work paths, and selecting the work path closest to the current vehicle.
S301b, down-sampling the path points of the nearest work path.
S301c, selecting the nearest point of the nearest work path as the target point, then calculating the angular difference between the current vehicle and the target point.
S301d, superimposing a pointing image onto the video according to the angular difference calculated in step S301c. Specifically, a disk image prepared in advance is rotated by the angular difference, producing the image data to be projected and displayed.
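The angular difference of step S301c can be sketched as follows. Poses are assumed to be 2-D map coordinates plus a heading in radians (an assumption of this sketch); the rotated disk image itself would then be produced with an image-rotation call.

```python
import math

def heading_to_target(vehicle_pos, vehicle_heading, target_pos):
    """Step S301c: signed angle (radians, normalised to (-pi, pi]) from the
    vehicle's current heading to the nearest work-path point."""
    dx = target_pos[0] - vehicle_pos[0]
    dy = target_pos[1] - vehicle_pos[1]
    diff = math.atan2(dy, dx) - vehicle_heading
    while diff <= -math.pi:   # wrap into (-pi, pi]
        diff += 2.0 * math.pi
    while diff > math.pi:
        diff -= 2.0 * math.pi
    return diff
```

The returned angle is what the prepared disk image is rotated by (for example with OpenCV's 2-D rotation-matrix helper) before the perspective projection of step S301d.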
When the vehicle is on the work path, step S3 comprises the following steps.
S302a, selecting several path points ahead of the current vehicle and calculating, in order, their positions relative to the center point of the vehicle front.
S302b, generating a path-line image in the top-view direction.
S302c, superimposing the path-line image onto the video.
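Step S302a, expressing the path points in the vehicle's frame, can be sketched as follows. The frame convention (x forward, y to the left) is an assumption of this sketch; waypoints and the vehicle-front center are 2-D map coordinates.

```python
import numpy as np

def path_points_relative(waypoints, front_center, heading):
    """Step S302a: positions of work-path waypoints relative to the center
    point of the vehicle front, in the vehicle frame (x forward, y left)."""
    c, s = np.cos(heading), np.sin(heading)
    rot = np.array([[c, s], [-s, c]])        # world -> vehicle rotation
    offsets = np.asarray(waypoints, dtype=float) - np.asarray(front_center, dtype=float)
    return offsets @ rot.T
```

The relative coordinates are then scaled to pixels and connected as a polyline (e.g. with OpenCV's polyline-drawing call) to form the top-view path-line image of step S302b.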
The superimposition in steps S301d and S302c is computed with a perspective transformation matrix, which projects the pointing image or path-line image onto the ground plane in the video.
The perspective transformation matrix is computed through an OpenCV interface. The interface takes as input the positions of 4 corresponding points on each of the two planes of the projective transformation and returns a 3x3 homography matrix, which encodes translation, rotation, scaling, and shearing together with the projective-transformation parameters. Once the 8 reference points are supplied, the resulting perspective transformation matrix can be passed directly to OpenCV's projective-image-generation interface to obtain the transformed result.
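The 4-point-pair computation performed by the OpenCV interface (cv2.getPerspectiveTransform, followed by cv2.warpPerspective for the projected image) can be illustrated with a numpy-only equivalent. This is a sketch of the underlying math, not the production path:

```python
import numpy as np

def perspective_matrix(src, dst):
    """3x3 homography mapping the 4 src points to the 4 dst points, the
    same quantity OpenCV's perspective-transform interface returns."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, dtype=float), np.asarray(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)   # bottom-right entry fixed to 1

def apply_homography(H, pt):
    """Map one point through the homography (with perspective division)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

With the 8 reference points of the text (4 per plane), `perspective_matrix` yields the matrix that warps the flat overlay image onto the ground-plane trapezoid in the video.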
The reference points for generating the perspective transformation matrix are obtained as shown in Fig. 3. Specifically, lines are extended straight ahead from the edges of the visible video range on both sides of the vehicle, and marks are placed on the ground every 1 m, producing a rectangle in front of the vehicle. Any 1-meter strip within this rectangle is taken as the reference object, and the positions of its 4 corners in the video are marked as the reference points on one plane of the projective transformation. For the other plane, the two endpoints of the strip's edge nearest the vehicle serve as the near reference points; that edge is then translated upward on the image plane by a distance proportional to the vehicle width (for a vehicle width of 3 meters, the translation is one third of the edge length), and its 2 endpoints are taken as the far reference points. This yields all 8 required reference points, which are input to OpenCV's perspective-matrix interface to obtain the perspective matrix.
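The far-edge construction described above can be sketched as follows, assuming "upward on the image plane" means decreasing pixel y (an assumption of this sketch; point names are illustrative):

```python
import math

def target_plane_points(near_left, near_right, vehicle_width_m):
    """Build the 4 reference points of one plane from the near edge of the
    1 m calibration strip: keep the near edge, and translate a copy upward
    by (edge length) / (vehicle width in metres), e.g. one third of the
    edge length for a 3 m wide vehicle."""
    edge_len = math.hypot(near_right[0] - near_left[0],
                          near_right[1] - near_left[1])
    shift = edge_len / vehicle_width_m
    far_left = (near_left[0], near_left[1] - shift)
    far_right = (near_right[0], near_right[1] - shift)
    return [near_left, near_right, far_right, far_left]
```

These 4 points, paired with the 4 marked strip corners, are the 8 reference points fed to the perspective-matrix interface.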
After the perspective matrix is obtained through this calibration, it can be fine-tuned according to the actual effect. The procedure is simple, can be completed in a short time, and suits engineering vehicles whose camera pose may change at any time due to vibration or collision.
Through the flow of step S3, the work path is displayed in the vehicle monitoring video in augmented reality form to assist the driver.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments, and obvious variations or modifications derived from them remain within the scope of the invention.

Claims (8)

1. An augmented reality head-up display method for remote-control driving monitoring video, characterized by comprising the following steps:
S1, marking the positions of obstacle targets in the monitoring video of the controlled vehicle;
S2, judging the current operating condition of the controlled vehicle;
S3, superimposing the work path onto the video in augmented reality form according to the current operating condition of the controlled vehicle.
2. The augmented reality head-up display method for remote-control driving monitoring video according to claim 1, wherein step S1 specifically comprises:
S101, acquiring target bounding-box information;
S102, acquiring target depth information;
S103, superimposing the target's bounding-box and depth information onto the monitoring video.
3. The augmented reality head-up display method for remote-control driving monitoring video according to claim 2, wherein step S101 comprises the steps of:
S101a, capturing each frame of the monitoring video;
S101b, detecting obstacle targets in each frame and creating axis-aligned bounding boxes;
S101c, calculating the geometric information of each axis-aligned bounding box and associating the box with its obstacle target.
4. The augmented reality head-up display method for remote-control driving monitoring video according to claim 3, wherein step S102 comprises:
S102a, reading the vehicle-end depth image formed from point cloud data fused with the monitoring video, and down-sampling it;
S102b, sending the down-sampled vehicle-end depth information to the remote-control driving end;
S102c, matching the obstacle target by time and position, and merging the depth information onto its axis-aligned bounding box.
5. The augmented reality head-up display method for remote-control driving monitoring video according to claim 4, wherein the current operating condition in step S2 is either on the work path or away from the work path.
6. The augmented reality head-up display method for remote-control driving monitoring video according to claim 5, wherein when the vehicle is away from the work path, step S3 comprises the steps of:
S301a, calculating the distances between the current vehicle position and the end points of all work paths, and selecting the work path closest to the current vehicle;
S301b, evenly down-sampling the path points of the nearest work path;
S301c, selecting the nearest point of the nearest work path as the target point, then calculating the angular difference between the current vehicle and the target point;
S301d, superimposing a pointing image onto the video according to the angular difference calculated in step S301c.
7. The augmented reality head-up display method for remote-control driving monitoring video according to claim 6, wherein when the vehicle is on the work path, step S3 comprises the steps of:
S302a, selecting several path points ahead of the current vehicle and calculating, in order, their positions relative to the center point of the vehicle front;
S302b, generating a path-line image in the top-view direction;
S302c, superimposing the path-line image onto the video.
8. The method according to claim 7, wherein the superimposition in steps S301d and S302c is computed with a perspective transformation matrix, which projects the pointing image or path-line image onto the ground plane in the video.
CN202110572186.8A (filed 2021-05-25, priority date 2021-05-25): Head-up display technology for remotely controlling driving monitoring video. Status: Pending. Published as CN113326758A.

Priority Applications (1)

Application Number: CN202110572186.8A; Priority Date: 2021-05-25; Filing Date: 2021-05-25; Title: Head-up display technology for remotely controlling driving monitoring video


Publications (1)

Publication Number: CN113326758A; Publication Date: 2021-08-31

Family ID: 77416613

Family Applications (1)

Application Number: CN202110572186.8A; Title: Head-up display technology for remotely controlling driving monitoring video; Priority/Filing Date: 2021-05-25; Status: Pending

Country Status (1)

CN: CN113326758A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105333883A (en) * 2014-08-07 2016-02-17 深圳点石创新科技有限公司 Navigation path and trajectory displaying method and apparatus for head-up display (HUD)
CN108515909A (en) * 2018-04-04 2018-09-11 京东方科技集团股份有限公司 A kind of automobile head-up-display system and its barrier prompt method
CN109050405A (en) * 2018-08-23 2018-12-21 毕则乾 A kind of multi-functional augmented reality head-up-display system and method
CN109708653A (en) * 2018-11-21 2019-05-03 斑马网络技术有限公司 Crossing display methods, device, vehicle, storage medium and electronic equipment
CN109795415A (en) * 2019-02-19 2019-05-24 上海理工大学 A kind of Intelligent night vision head-up display device
CN111121815A (en) * 2019-12-27 2020-05-08 重庆利龙科技产业(集团)有限公司 Path display method and system based on AR-HUD navigation and computer storage medium
CN111707283A (en) * 2020-05-11 2020-09-25 宁波吉利汽车研究开发有限公司 Navigation method, device, system and equipment based on augmented reality technology
CN111717202A (en) * 2020-07-01 2020-09-29 新石器慧通(北京)科技有限公司 Driving prompting method and device for unmanned vehicle



Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2021-08-31)