CN118270035A - Early warning system and early warning method for vehicle and vehicle - Google Patents

Early warning system and early warning method for vehicle and vehicle

Info

Publication number: CN118270035A
Application number: CN202311788623.5A
Authority: CN (China)
Prior art keywords: image data, dimensional image, camera, vehicle
Other languages: Chinese (zh)
Inventors: 崔金林, 杨冬生, 高文, 张国才
Current Assignee: BYD Co Ltd
Original Assignee: BYD Co Ltd
Legal status: Pending

Landscapes

  • Closed-Circuit Television Systems (AREA)
Abstract

An early warning system for a vehicle, an early warning method and a vehicle are provided. The early warning system comprises: a first camera module and a second camera module for collecting two-dimensional image data of a first side of the vehicle and two-dimensional image data of a second side of the vehicle; a third camera module for collecting two-dimensional image data of the rear of the vehicle; a controller, communicatively connected to the first, second and third camera modules and configured to perform fusion processing on first image data, second image data, third image data and fourth image data having three-dimensional image data to obtain image data; and a presentation device for presenting the image data. According to this scheme, image data having three-dimensional image data can be obtained based on the two-dimensional image data collected by the first, second and third camera modules, so that image data comprising holographic image data is obtained; distance information between objects around the vehicle and the vehicle can be provided based on the image data, improving driving safety and driving experience.

Description

Early warning system and early warning method for vehicle and vehicle
Technical Field
The present application relates to the technical field of vehicles, and in particular to an early warning system for a vehicle, an early warning method for a vehicle, and a vehicle.
Background
In order to solve the problem that a traditional mechanical rearview mirror is unfavorable for observation in severe environments (for example, a dark environment with poor visibility, or a rearview mirror blurred by rain and snow), a measure often adopted is to add an electronic streaming-media exterior rearview mirror to the automobile; that is, a camera is arranged at the position of the mechanical rearview mirror, and a rear-view image is acquired by the camera and displayed on a display near a door inside the automobile.
However, in the related art, most solutions simply install a camera at the position of the mechanical rearview mirror, shoot the scene behind the vehicle, and display a two-dimensional image on a display; such a two-dimensional image cannot provide distance information between the surroundings of the vehicle and the vehicle, so driving experience deteriorates and driving safety is lowered. A few schemes shoot a rear image through a rearview camera whose view includes known marks made on the vehicle body, marked actual lane lines, vehicles, other traffic indication marks and the like, and obtain rough distance information using the triangulation principle; but such distance information is an estimate with large error, is easily affected by the external environment, and the scheme is poorly stable. Other schemes fuse the image shot by the camera with data from other sensors (such as a laser radar or a millimeter-wave radar) to output a three-dimensional image and obtain distance information between surrounding objects and the vehicle; however, such schemes require additional sensors, which increases cost.
There is therefore a need for an improvement to at least partially solve the above-mentioned problems.
Disclosure of Invention
This summary introduces a selection of concepts in simplified form that are described in further detail in the detailed description. It is not intended to identify key or essential features of the claimed subject matter, nor to be used as an aid in determining the scope of the claimed subject matter.
In view of the problems existing at present, an aspect of the present application provides an early warning system for a vehicle, comprising:
a first camera module and a second camera module, respectively arranged on a first side and a second side of the vehicle, for collecting two-dimensional image data of the first side of the vehicle and two-dimensional image data of the second side of the vehicle;
a third camera module, arranged between the first camera module and the second camera module, for collecting two-dimensional image data of the rear of the vehicle;
a controller, communicatively connected to the first camera module, the second camera module and the third camera module, the controller being configured to:
perform fusion processing on first image data, second image data, third image data and fourth image data having three-dimensional image data to obtain image data, wherein the first image data, the second image data, the third image data and the fourth image data are generated based on the two-dimensional image data collected by the first camera module, the second camera module and the third camera module, and the image data includes holographic image data;
and a presentation device for presenting the image data.
Illustratively, the first camera module includes a first camera, a second camera and a third camera, and the second camera module includes a fourth camera, a fifth camera and a sixth camera. The focal lengths of the first camera and the second camera are smaller than the focal length of the third camera, and the focal lengths of the fourth camera and the fifth camera are smaller than the focal length of the sixth camera. The first through sixth cameras are configured to collect first through sixth two-dimensional image data, respectively; the first two-dimensional image data and the second two-dimensional image data at least partially overlap, and the fourth two-dimensional image data and the fifth two-dimensional image data at least partially overlap.
Illustratively, the third camera module includes a seventh camera whose focal length is greater than the focal lengths of the first camera, the second camera, the fourth camera and the fifth camera. The seventh camera is configured to collect seventh two-dimensional image data, which at least partially overlaps the third two-dimensional image data and at least partially overlaps the sixth two-dimensional image data.
Illustratively, the first image data, the second image data, the third image data, and the fourth image data are generated based on two-dimensional image data acquired by the first camera module, the second camera module, and the third camera module, and include:
The first image data is generated based on the first two-dimensional image data and the second two-dimensional image data, the second image data is generated based on the fourth two-dimensional image data and the fifth two-dimensional image data, the third image data is generated based on the third two-dimensional image data and the seventh two-dimensional image data, and the fourth image data is generated based on the sixth two-dimensional image data and the seventh two-dimensional image data.
Illustratively, the first image data is generated based on the first two-dimensional image data and the second two-dimensional image data, comprising:
acquiring depth information of the image data of the overlapping portion of the first two-dimensional image data and the second two-dimensional image data;
performing fusion processing on the first two-dimensional image data and the second two-dimensional image data to obtain first fused image data, wherein a first part of the first fused image data is the image data of the overlapping portion of the first two-dimensional image data and the second two-dimensional image data;
and converting the first part of the first fused image data into three-dimensional image data based on the depth information of the image data of the overlapping portion, to obtain the first image data.
Illustratively, the second image data is generated based on the fourth two-dimensional image data and the fifth two-dimensional image data, comprising:
acquiring depth information of the image data of the overlapping portion of the fourth two-dimensional image data and the fifth two-dimensional image data;
performing fusion processing on the fourth two-dimensional image data and the fifth two-dimensional image data to obtain second fused image data, wherein a first part of the second fused image data is the image data of the overlapping portion of the fourth two-dimensional image data and the fifth two-dimensional image data;
and converting the first part of the second fused image data into three-dimensional image data based on the depth information of the image data of the overlapping portion, to obtain the second image data.
Illustratively, the third image data is generated based on the third two-dimensional image data and the seventh two-dimensional image data, comprising:
acquiring depth information of the image data of the overlapping portion of the third two-dimensional image data and the seventh two-dimensional image data;
performing fusion processing on the third two-dimensional image data and the seventh two-dimensional image data to obtain third fused image data, wherein a first part of the third fused image data is the image data of the overlapping portion of the third two-dimensional image data and the seventh two-dimensional image data;
and converting the first part of the third fused image data into three-dimensional image data based on the depth information of the image data of the overlapping portion, to obtain the third image data.
Illustratively, the fourth image data is generated based on the sixth two-dimensional image data and the seventh two-dimensional image data, comprising:
acquiring depth information of the image data of the overlapping portion of the sixth two-dimensional image data and the seventh two-dimensional image data;
performing fusion processing on the sixth two-dimensional image data and the seventh two-dimensional image data to obtain fourth fused image data, wherein a first part of the fourth fused image data is the image data of the overlapping portion of the sixth two-dimensional image data and the seventh two-dimensional image data;
and converting the first part of the fourth fused image data into three-dimensional image data based on the depth information of the image data of the overlapping portion, to obtain the fourth image data.
Illustratively, the controller performing fusion processing on the first image data, the second image data, the third image data and the fourth image data having three-dimensional image data to obtain the image data comprises:
performing fusion processing on the first image data and the third image data to obtain first composite image data;
and performing fusion processing on the second image data and the fourth image data to obtain second composite image data.
Illustratively, the presentation device may comprise a first presentation device for presenting the first composite image data and a second presentation device for presenting the second composite image data.
Illustratively, the controller is further configured to output early warning information based on the image data;
the presentation device is further configured to present the early warning information.
Illustratively, the early warning information includes at least one of:
a reverse collision distance prompt, a collision risk prompt, a door opening risk prompt and a rear vehicle overtaking prompt.
Another aspect of the present application provides an early warning method for a vehicle, comprising: respectively acquiring two-dimensional image data of a first side of the vehicle, two-dimensional image data of a second side of the vehicle and two-dimensional image data of the rear of the vehicle;
performing fusion processing on first image data, second image data, third image data and fourth image data having three-dimensional image data to obtain image data, wherein the first image data, the second image data, the third image data and the fourth image data are generated based on the acquired two-dimensional image data of the first side of the vehicle, the two-dimensional image data of the second side of the vehicle and the two-dimensional image data of the rear of the vehicle, and the image data includes holographic image data;
and presenting the image data.
Illustratively, the two-dimensional image data of the first side of the vehicle comprises first, second and third two-dimensional image data, and the two-dimensional image data of the second side of the vehicle comprises fourth, fifth and sixth two-dimensional image data, wherein the first and second two-dimensional image data at least partially overlap, and the fourth and fifth two-dimensional image data at least partially overlap.
Illustratively, the two-dimensional image data of the rear of the vehicle includes seventh two-dimensional image data, the third and seventh two-dimensional image data at least partially overlapping, the sixth and seventh two-dimensional image data at least partially overlapping.
Illustratively, the first, second, third, and fourth image data are generated based on acquired two-dimensional image data of a first side of the vehicle, two-dimensional image data of a second side of the vehicle, and two-dimensional image data of a rear of the vehicle, comprising:
The first image data is generated based on the first two-dimensional image data and the second two-dimensional image data, the second image data is generated based on the fourth two-dimensional image data and the fifth two-dimensional image data, the third image data is generated based on the third two-dimensional image data and the seventh two-dimensional image data, and the fourth image data is generated based on the sixth two-dimensional image data and the seventh two-dimensional image data.
Illustratively, the first image data is generated based on the first two-dimensional image data and the second two-dimensional image data, comprising:
acquiring depth information of the image data of the overlapping portion of the first two-dimensional image data and the second two-dimensional image data;
performing fusion processing on the first two-dimensional image data and the second two-dimensional image data to obtain first fused image data, wherein a first part of the first fused image data is the image data of the overlapping portion of the first two-dimensional image data and the second two-dimensional image data;
and converting the first part of the first fused image data into three-dimensional image data based on the depth information of the image data of the overlapping portion, to obtain the first image data.
Illustratively, the second image data is generated based on the fourth two-dimensional image data and the fifth two-dimensional image data, comprising:
acquiring depth information of the image data of the overlapping portion of the fourth two-dimensional image data and the fifth two-dimensional image data;
performing fusion processing on the fourth two-dimensional image data and the fifth two-dimensional image data to obtain second fused image data, wherein a first part of the second fused image data is the image data of the overlapping portion of the fourth two-dimensional image data and the fifth two-dimensional image data;
and converting the first part of the second fused image data into three-dimensional image data based on the depth information of the image data of the overlapping portion, to obtain the second image data.
Illustratively, the third image data is generated based on the third two-dimensional image data and the seventh two-dimensional image data, comprising:
acquiring depth information of the image data of the overlapping portion of the third two-dimensional image data and the seventh two-dimensional image data;
performing fusion processing on the third two-dimensional image data and the seventh two-dimensional image data to obtain third fused image data, wherein a first part of the third fused image data is the image data of the overlapping portion of the third two-dimensional image data and the seventh two-dimensional image data;
and converting the first part of the third fused image data into three-dimensional image data based on the depth information of the image data of the overlapping portion, to obtain the third image data.
Illustratively, the fourth image data is generated based on the sixth two-dimensional image data and the seventh two-dimensional image data, comprising:
acquiring depth information of the image data of the overlapping portion of the sixth two-dimensional image data and the seventh two-dimensional image data;
performing fusion processing on the sixth two-dimensional image data and the seventh two-dimensional image data to obtain fourth fused image data, wherein a first part of the fourth fused image data is the image data of the overlapping portion of the sixth two-dimensional image data and the seventh two-dimensional image data;
and converting the first part of the fourth fused image data into three-dimensional image data based on the depth information of the image data of the overlapping portion, to obtain the fourth image data.
Illustratively, performing fusion processing on the first image data, the second image data, the third image data and the fourth image data having three-dimensional image data to obtain the image data comprises:
performing fusion processing on the first image data and the third image data to obtain first composite image data;
and performing fusion processing on the second image data and the fourth image data to obtain second composite image data.
Illustratively, the method further comprises:
outputting early warning information based on the image data;
and presenting the early warning information.
Illustratively, the early warning information includes at least one of:
a reverse collision distance prompt, a collision risk prompt, a door opening risk prompt and a rear vehicle overtaking prompt.
A further aspect of the present application provides a vehicle comprising the early warning system for a vehicle described above.
According to the early warning system for a vehicle, the early warning method for a vehicle and the vehicle provided by the present application, first image data, second image data, third image data and fourth image data having three-dimensional image data can be obtained based on the two-dimensional image data collected by the first camera module, the second camera module and the third camera module, and fusion processing is performed on the first image data, the second image data, the third image data and the fourth image data to obtain image data that includes holographic image data. Distance information between the surroundings of the vehicle and the vehicle can be provided based on this image data, thereby improving driving safety and driving experience.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following detailed description of embodiments of the present invention with reference to the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the invention, are incorporated in and constitute a part of this specification, and serve to explain the invention together with its embodiments; they do not constitute a limitation of the invention. In the drawings, like reference numerals generally refer to like parts or steps.
In the accompanying drawings:
FIG. 1 shows a schematic block diagram of an early warning system for a vehicle in accordance with an embodiment of the present application.
Fig. 2 shows a schematic block diagram of an early warning system for a vehicle according to another embodiment of the present application.
Fig. 3 shows a schematic flow chart of an early warning method for a vehicle according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some, and not all, of the embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described herein. All other embodiments obtained by a person skilled in the art, based on the embodiments of the invention described in the present application and without inventive effort, shall fall within the scope of protection of the invention.
A vehicle warning system according to an embodiment of the present application will be described with reference to fig. 1 and 2, in which fig. 1 shows a schematic block diagram of a vehicle warning system according to an embodiment of the present application, and fig. 2 shows a schematic block diagram of a vehicle warning system according to another embodiment of the present application.
As shown in fig. 1, the vehicle early warning system 100 of the present application includes a first camera module 110, a second camera module 120, a third camera module 130, a controller 140 and a presentation device 150. The first camera module 110 and the second camera module 120 are respectively arranged on a first side and a second side of the vehicle; the first camera module 110 collects two-dimensional image data of the first side of the vehicle, and the second camera module 120 collects two-dimensional image data of the second side of the vehicle. The third camera module 130 is arranged between the first camera module and the second camera module and collects two-dimensional image data of the rear of the vehicle. The controller 140 is communicatively connected to the first camera module 110, the second camera module 120 and the third camera module 130 and is configured to perform fusion processing on first image data, second image data, third image data and fourth image data having three-dimensional image data to obtain image data, where the first, second, third and fourth image data are generated based on the two-dimensional image data collected by the first camera module 110, the second camera module 120 and the third camera module 130, that is, based on the two-dimensional image data of the first side, the second side and the rear of the vehicle, and the resulting image data includes holographic image data. The presentation device 150 is used for presenting the image data. The controller 140 may be a central multimedia host, or may be an image processing element added in the vehicle for processing image data; this is not limited herein. For example, the third camera module 130 may be disposed at the top of the vehicle or at the rear of the vehicle.
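To make the data flow concrete, the following minimal Python sketch mirrors this structure. It is an illustration only: the class and method names are assumptions, not identifiers from the application, and the fusion step is deliberately left abstract (it is sketched step by step below).

```python
from dataclasses import dataclass
from typing import Callable, List, Protocol

import numpy as np

class CameraModule(Protocol):
    def capture(self) -> List[np.ndarray]:
        """Return the module's current two-dimensional frames."""
        ...

@dataclass
class EarlyWarningSystem:
    first_module: CameraModule     # first side of the vehicle
    second_module: CameraModule    # second side of the vehicle
    third_module: CameraModule     # rear of the vehicle
    presentation: Callable[[np.ndarray], None]

    def step(self) -> None:
        # Controller role: collect 2D frames from the three modules, fuse them
        # into image data bearing three-dimensional content, then present it.
        frames = (self.first_module.capture()
                  + self.second_module.capture()
                  + self.third_module.capture())
        image_data = self.fuse(frames)
        self.presentation(image_data)

    def fuse(self, frames: List[np.ndarray]) -> np.ndarray:
        raise NotImplementedError  # fusion is sketched in the examples below
```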
In one embodiment, as shown in fig. 2, the first camera module 110 includes a first camera 111, a second camera 112 and a third camera 113, and the second camera module 120 includes a fourth camera 121, a fifth camera 122 and a sixth camera 123. The focal lengths of the first camera 111 and the second camera 112 are smaller than the focal length of the third camera 113, and the focal lengths of the fourth camera 121 and the fifth camera 122 are smaller than the focal length of the sixth camera 123. The first camera 111 is configured to collect first two-dimensional image data, the second camera 112 second two-dimensional image data, the third camera 113 third two-dimensional image data, the fourth camera 121 fourth two-dimensional image data, the fifth camera 122 fifth two-dimensional image data, and the sixth camera 123 sixth two-dimensional image data. The first two-dimensional image data and the second two-dimensional image data at least partially overlap, the fourth two-dimensional image data and the fifth two-dimensional image data at least partially overlap, and the third two-dimensional image data and the sixth two-dimensional image data at least partially overlap.
In one example, as shown in fig. 2, the third camera module 130 includes a seventh camera 131, where the focal length of the seventh camera 131 is greater than the focal length of the first camera 111, the focal length of the second camera 112, the focal length of the fourth camera 121, and the focal length of the fifth camera 122, the seventh camera 131 is configured to acquire seventh two-dimensional image data, the seventh two-dimensional image data at least partially overlaps the third two-dimensional image data, and the seventh two-dimensional image data at least partially overlaps the sixth two-dimensional image data. Illustratively, the seventh camera 131, the third camera 113, and the sixth camera 123 have the same focal length. Illustratively, the focal length of the seventh camera 131 is greater than 100mm.
In one example, the first image data, the second image data, the third image data, and the fourth image data are generated based on two-dimensional image data acquired by the first camera module 110, the second camera module 120, and the third camera module 130, including: the first image data is generated based on the first two-dimensional image data and the second two-dimensional image data, the second image data is generated based on the fourth two-dimensional image data and the fifth two-dimensional image data, the third image data is generated based on the third two-dimensional image data and the seventh two-dimensional image data, and the fourth image data is generated based on the sixth two-dimensional image data and the seventh two-dimensional image data.
In one example, the first camera module 110 and the second camera module 120 may be disposed at the positions of the vehicle's original left and right mechanical rearview mirrors, respectively, so as to replace them; alternatively, they may be disposed at any other suitable positions on the left and right sides of the vehicle, as long as satisfactory left-rear and right-rear views of the vehicle can be obtained. Illustratively, because the focal lengths of the first camera 111 and the second camera 112 are smaller than the focal length of the third camera 113, their angles of view are larger than that of the third camera 113, they can capture a larger range of the scene, and at a given distance they cover a larger extent of the scene than the third camera 113 does; that is, the first camera 111 and the second camera 112 are better suited to scenes close to the vehicle, while the third camera 113 is better suited to scenes far from the vehicle. Likewise, because the focal lengths of the fourth camera 121 and the fifth camera 122 are smaller than the focal length of the sixth camera 123, their angles of view are larger than that of the sixth camera 123 and they cover a larger extent of the scene; that is, the fourth camera 121 and the fifth camera 122 are better suited to scenes close to the vehicle, while the sixth camera 123 is better suited to scenes far from the vehicle. Illustratively, the field of view of the first camera 111 at least partially overlaps the field of view of the second camera 112, the field of view of the fourth camera 121 at least partially overlaps the field of view of the fifth camera 122, and the field of view of the third camera 113 at least partially overlaps the field of view of the sixth camera 123. Illustratively, the focal lengths of the first camera 111, the second camera 112, the fourth camera 121 and the fifth camera 122 are less than 40 mm, and the focal lengths of the third camera 113 and the sixth camera 123 are greater than 100 mm. Illustratively, the focal length of the first camera 111 is equal to the focal length of the second camera 112, the focal length of the fourth camera 121 is equal to the focal length of the fifth camera 122, and the focal length of the third camera 113 is equal to the focal length of the sixth camera 123. Illustratively, the focal lengths of the first camera 111, the second camera 112, the fourth camera 121 and the fifth camera 122 are all equal.
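The focal-length/angle-of-view relation invoked above follows from the pinhole camera model, angle = 2·atan(w / 2f) for sensor width w. A quick Python check under an assumed 36 mm sensor width (the application specifies only the focal-length bounds, not the sensor size):

```python
import math

def horizontal_fov_deg(focal_length_mm: float, sensor_width_mm: float = 36.0) -> float:
    """Pinhole-model horizontal angle of view: 2 * atan(w / (2 * f))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Shorter focal length -> wider angle of view:
print(round(horizontal_fov_deg(35.0), 1))   # ~54.4 deg: wide, suits nearby scenes
print(round(horizontal_fov_deg(105.0), 1))  # ~19.5 deg: narrow, suits distant scenes
```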
In one example, the first image data is generated based on the first two-dimensional image data and the second two-dimensional image data, comprising: acquiring depth information of the image data of the overlapping portion of the first two-dimensional image data and the second two-dimensional image data; performing fusion processing on the first two-dimensional image data and the second two-dimensional image data to obtain first fused image data, wherein the first part of the first fused image data is the image data of the overlapping portion of the first two-dimensional image data and the second two-dimensional image data; and converting the first part of the first fused image data into three-dimensional image data based on the depth information of the image data of the overlapping portion, to obtain the first image data. Illustratively, the field of view of the first camera 111 at least partially overlaps the field of view of the second camera 112, so the collected first and second two-dimensional image data at least partially overlap. When the same object appears in both, its position in the first two-dimensional image data is offset relative to its position in the second two-dimensional image data because of the baseline distance between the first camera 111 and the second camera 112; this offset is called parallax, and it can be obtained by calculating the difference in position of the object's corresponding points in the first and second two-dimensional image data. Based on the baseline distance between the first camera 111 and the second camera 112, their focal lengths and the parallax, the depth information of the object, that is, the three-dimensional coordinates of the object, can be obtained through the triangulation principle, and a depth map may be generated to store the depth information. Illustratively, because the first and second two-dimensional image data at least partially overlap, part of the data in the fused first fused image data is contained in both of them; in this embodiment, the first part of the first fused image data is the image data of the overlapping portion of the first and second two-dimensional image data, and the remaining part is the image data of their non-overlapping portions.
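Beyond naming the triangulation principle, the application gives no algorithmic detail. As a minimal sketch of this step, assuming OpenCV, a rectified image pair from the first and second cameras, and hypothetical calibration values (focal length in pixels, baseline in meters), depth over the overlapping portion follows from Z = f·B/d:

```python
import cv2
import numpy as np

# Hypothetical calibration of the first/second camera pair.
FOCAL_LENGTH_PX = 1400.0   # focal length expressed in pixels
BASELINE_M = 0.12          # baseline distance between the two cameras, meters

left = cv2.imread("cam1.png", cv2.IMREAD_GRAYSCALE)   # first 2D image data
right = cv2.imread("cam2.png", cv2.IMREAD_GRAYSCALE)  # second 2D image data

# Disparity (parallax) over the overlapping portion of the two views.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM is x16

# Triangulation: Z = f * B / d, valid only where a disparity was found.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]  # the depth map
```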
Illustratively, since the depth information of the image data of the overlapping portion of the first and second two-dimensional image data stores three-dimensional coordinate information, the image data of the overlapping portion can be converted from two-dimensional image data into three-dimensional image data by an image conversion algorithm based on that depth information; that is, the first part of the first fused image data is converted from a two-dimensional image into a three-dimensional image based on the depth information. The first image data thus obtained includes the three-dimensional image data of the overlapping portion and the two-dimensional image data of the non-overlapping portions of the first and second two-dimensional image data. Illustratively, the first image data may be generated in the first camera module 110 and output to the controller 140; or the first camera module 110 may acquire the depth information of the image data of the overlapping portion and send it, together with the first and second two-dimensional image data, to the controller 140, which generates the first fused image data and then generates the first image data based on the depth information; or the first camera module 110 may send the first and second two-dimensional image data to the controller 140, which generates the first fused image data, acquires the depth information of the image data of the overlapping portion, and finally generates the first image data, so that the first image data is generated in the controller 140. Illustratively, fusing the first two-dimensional image data and the second two-dimensional image data into the first fused image data can improve resolution and enlarge the field of view.
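As a sketch of the two-dimensional-to-three-dimensional conversion just described, the overlapping portion can be lifted to per-pixel three-dimensional coordinates by plain pinhole back-projection of the depth map from the previous sketch; the intrinsics below are hypothetical, and the application does not name a specific image conversion algorithm:

```python
import numpy as np

# Hypothetical pinhole intrinsics of the fused view.
FX = FY = 1400.0        # focal lengths in pixels
CX, CY = 960.0, 540.0   # principal point

def depth_to_points(depth_m: np.ndarray) -> np.ndarray:
    """Back-project a depth map into an H x W x 3 array of 3D coordinates."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.dstack([x, y, z])  # three-dimensional coordinates per pixel

# depth_m comes from the disparity sketch above:
# points_3d = depth_to_points(depth_m)
```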
In one example, the second image data is generated based on the fourth two-dimensional image data and the fifth two-dimensional image data, comprising: acquiring depth information of the image data of the overlapping portion of the fourth and fifth two-dimensional image data; performing fusion processing on the fourth and fifth two-dimensional image data to obtain second fused image data, wherein the first part of the second fused image data is the image data of the overlapping portion of the fourth and fifth two-dimensional image data; and converting the first part of the second fused image data into three-dimensional image data based on the depth information, to obtain the second image data. Illustratively, the field of view of the fourth camera 121 at least partially overlaps the field of view of the fifth camera 122, so the collected fourth and fifth two-dimensional image data at least partially overlap. When the same object appears in both, its position in the fourth and fifth two-dimensional image data is offset because of the baseline distance between the fourth camera 121 and the fifth camera 122; this offset, the parallax, can be obtained by calculating the difference in position of the object's corresponding points in the two images, and based on the baseline distance, the focal lengths of the fourth camera 121 and the fifth camera 122 and the parallax, the depth information of the object, that is, its three-dimensional coordinates, can be obtained through the triangulation principle, and a depth map may be generated to store the depth information. Illustratively, because the fourth and fifth two-dimensional image data at least partially overlap, part of the data in the fused second fused image data is contained in both; in this embodiment, the first part of the second fused image data is the image data of the overlapping portion, and the remaining part is the image data of the non-overlapping portions.
Illustratively, since the depth information of the overlapping portion stores three-dimensional coordinate information, the image data of the overlapping portion of the fourth and fifth two-dimensional image data can be converted from two-dimensional image data into three-dimensional image data by an image conversion algorithm based on that depth information; that is, the first part of the second fused image data is converted from a two-dimensional image into a three-dimensional image, yielding second image data that includes the three-dimensional image data of the overlapping portion and the two-dimensional image data of the non-overlapping portions of the fourth and fifth two-dimensional image data. Illustratively, the second image data may be generated in the second camera module 120 and output to the controller 140; or the second camera module 120 may acquire the depth information of the overlapping portion and send it, together with the fourth and fifth two-dimensional image data, to the controller 140, which generates the second fused image data and then the second image data; or the second camera module 120 may send the fourth and fifth two-dimensional image data to the controller 140, which generates the second fused image data, acquires the depth information of the overlapping portion, and finally generates the second image data. Illustratively, fusing the fourth and fifth two-dimensional image data into the second fused image data first can improve resolution and enlarge the field of view.
In one example, the third image data is generated based on the third two-dimensional image data and the seventh two-dimensional image data, comprising: acquiring depth information of the image data of the overlapping portion of the third and seventh two-dimensional image data; performing fusion processing on the third and seventh two-dimensional image data to obtain third fused image data, wherein the first part of the third fused image data is the image data of the overlapping portion of the third and seventh two-dimensional image data; and converting the first part of the third fused image data into three-dimensional image data based on the depth information, to obtain the third image data. Illustratively, the field of view of the third camera 113 at least partially overlaps the field of view of the seventh camera 131, so the collected third and seventh two-dimensional image data at least partially overlap. When the same object appears in both, its position in the third and seventh two-dimensional image data is offset because of the baseline distance between the third camera 113 and the seventh camera 131; this offset, the parallax, can be obtained by calculating the difference in position of the object's corresponding points in the two images, and based on the baseline distance, the focal lengths of the third camera 113 and the seventh camera 131 and the parallax, the depth information of the object, that is, its three-dimensional coordinates, can be obtained through the triangulation principle, and a depth map may be generated to store the depth information. Illustratively, because the third and seventh two-dimensional image data at least partially overlap, part of the data in the fused third fused image data is contained in both; in this embodiment, the first part of the third fused image data is the image data of the overlapping portion, and the remaining part is the image data of the non-overlapping portions.
Illustratively, since the depth information of the overlapping portion stores three-dimensional coordinate information, the image data of the overlapping portion of the third and seventh two-dimensional image data can be converted from two-dimensional image data into three-dimensional image data by an image conversion algorithm based on that depth information; that is, the first part of the third fused image data is converted from a two-dimensional image into a three-dimensional image, yielding third image data that includes the three-dimensional image data of the overlapping portion and the two-dimensional image data of the non-overlapping portions of the third and seventh two-dimensional image data. Illustratively, the controller 140 receives the third and seventh two-dimensional image data and generates the third image data. Illustratively, fusing the third and seventh two-dimensional image data into the third fused image data first can improve resolution and enlarge the field of view.
In one example, the fourth image data is generated based on the sixth two-dimensional image data and the seventh two-dimensional image data, comprising: acquiring depth information of the image data of the overlapping portion of the sixth and seventh two-dimensional image data; performing fusion processing on the sixth and seventh two-dimensional image data to obtain fourth fused image data, wherein the first part of the fourth fused image data is the image data of the overlapping portion of the sixth and seventh two-dimensional image data; and converting the first part of the fourth fused image data into three-dimensional image data based on the depth information, to obtain the fourth image data. Illustratively, the field of view of the sixth camera 123 at least partially overlaps the field of view of the seventh camera 131, so the collected sixth and seventh two-dimensional image data at least partially overlap. When the same object appears in both, its position in the sixth and seventh two-dimensional image data is offset because of the baseline distance between the sixth camera 123 and the seventh camera 131; this offset, the parallax, can be obtained by calculating the difference in position of the object's corresponding points in the two images, and based on the baseline distance, the focal lengths of the sixth camera 123 and the seventh camera 131 and the parallax, the depth information of the object, that is, its three-dimensional coordinates, can be obtained through the triangulation principle, and a depth map may be generated to store the depth information. Illustratively, because the sixth and seventh two-dimensional image data at least partially overlap, part of the data in the fused fourth fused image data is contained in both; in this embodiment, the first part of the fourth fused image data is the image data of the overlapping portion, and the remaining part is the image data of the non-overlapping portions.
Illustratively, since the depth information of the overlapping portion stores three-dimensional coordinate information, the image data of the overlapping portion of the sixth and seventh two-dimensional image data can be converted from two-dimensional image data into three-dimensional image data by an image conversion algorithm based on that depth information; that is, the first part of the fourth fused image data is converted from a two-dimensional image into a three-dimensional image, yielding fourth image data that includes the three-dimensional image data of the overlapping portion and the two-dimensional image data of the non-overlapping portions of the sixth and seventh two-dimensional image data. Illustratively, the controller 140 receives the sixth and seventh two-dimensional image data and generates the fourth image data. Illustratively, fusing the sixth and seventh two-dimensional image data into the fourth fused image data first can improve resolution and enlarge the field of view.
In one example, the controller 140 performing fusion processing on the first image data, the second image data, the third image data and the fourth image data having three-dimensional image data to obtain the image data includes: performing fusion processing on the first image data and the third image data to obtain first composite image data; and performing fusion processing on the second image data and the fourth image data to obtain second composite image data. For example, the three-dimensional image data in the first composite image data can provide distance information between the vehicle and objects on the first side of the vehicle and to the rear of that side, and the three-dimensional image data in the second composite image data can provide distance information between the vehicle and objects on the second side of the vehicle and to the rear of that side.
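The application does not spell out this fusion operation. One plausible reading is panorama-style stitching of the two overlapping views; the hedged OpenCV sketch below operates on two-dimensional frames only (file names are hypothetical, and the three-dimensional portions would carry their coordinate data alongside the stitched result):

```python
import cv2

# Hypothetical inputs sharing an overlapping portion: the near-field side view
# (first image data) and the far-field rear view (third image data).
side = cv2.imread("first_image.png")
rear = cv2.imread("third_image.png")

# OpenCV's high-level stitcher registers the overlap and blends the seam,
# producing one wider composite (standing in for first composite image data).
stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
status, composite = stitcher.stitch([side, rear])
if status == cv2.Stitcher_OK:
    cv2.imwrite("first_composite.png", composite)
```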
In one example, the presentation device 150 may be a medium-free 3D air-imaging module or a holographic projector that presents the first composite image data and the second composite image data as holographic projections, or the presentation device 150 may be a display that displays the first composite image data and the second composite image data; this is not limited herein.
In one example, as shown in fig. 2, the presentation device 150 includes a first presentation device 1501 for presenting the first composite image data and a second presentation device 1502 for presenting the second composite image data. The first composite image data is obtained by fusing the first image data with the third image data, where the first image data is generated based on the first and second two-dimensional image data and the third image data is generated based on the third and seventh two-dimensional image data. The focal lengths of the first camera 111, the second camera 112, the third camera 113 and the seventh camera 131 are not all the same; for example, the focal lengths of the first camera 111 and the second camera 112 are smaller than those of the third camera 113 and the seventh camera 131. In other words, the first composite image data includes three-dimensional image data and two-dimensional image data obtained by processing two-dimensional image data collected by cameras with different focal lengths, so it can be magnified smoothly through algorithms such as an image smoothing transition algorithm, keeping its quality unchanged during magnification and achieving an effect equivalent to optical zoom. Likewise, the second composite image data is obtained by fusing the second image data with the fourth image data, where the second image data is generated based on the fourth and fifth two-dimensional image data and the fourth image data is generated based on the sixth and seventh two-dimensional image data. The focal lengths of the fourth camera 121, the fifth camera 122, the sixth camera 123 and the seventh camera 131 are likewise not all the same; for example, the focal lengths of the fourth camera 121 and the fifth camera 122 are smaller than those of the sixth camera 123 and the seventh camera 131, so the second composite image data can also be magnified smoothly with unchanged quality, achieving an effect equivalent to optical zoom. For example, the first presentation device 1501 may be disposed on a first side of the vehicle interior and the second presentation device 1502 may be disposed on a second side of the vehicle interior.
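The image smoothing transition algorithm is not specified in the application. Purely as an assumed illustration of smooth magnification across a short-focal-length (wide) camera and a long-focal-length (telephoto) camera, the sketch below digitally zooms the wide view and cross-fades to the telephoto view near its native magnification; the 3x handover ratio, the blending rule and all names are assumptions, and the two inputs are taken to be registered and equally sized:

```python
import cv2
import numpy as np

def center_crop_zoom(img: np.ndarray, zoom: float) -> np.ndarray:
    """Digital zoom: center-crop by the zoom factor, then resize back up."""
    h, w = img.shape[:2]
    ch, cw = int(h / zoom), int(w / zoom)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = img[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)

def smooth_zoom(wide: np.ndarray, tele: np.ndarray, zoom: float,
                tele_ratio: float = 3.0, band: float = 0.5) -> np.ndarray:
    """Blend the digitally zoomed wide view into the tele view near handover."""
    zoomed_wide = center_crop_zoom(wide, zoom)
    if zoom <= tele_ratio - band:
        return zoomed_wide                      # wide camera alone suffices
    alpha = np.clip((zoom - (tele_ratio - band)) / band, 0.0, 1.0)
    zoomed_tele = center_crop_zoom(tele, max(zoom / tele_ratio, 1.0))
    return cv2.addWeighted(zoomed_wide, 1.0 - alpha, zoomed_tele, alpha, 0.0)
```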
In one example, the controller 140 is further configured to output early warning information based on the image data, and the presentation device 150 is further configured to present the early warning information. For example, the controller 140 may output the early warning information based on the three-dimensional image data in the image data, through an AI algorithm or the like.
In one example, the early warning information includes at least one of: a reverse collision distance prompt, a collision risk prompt, a door opening risk prompt and a rear vehicle overtaking prompt. For example, since the three-dimensional image data in the image data stores three-dimensional coordinate information, the controller 140 can calculate the distance between an object and the vehicle, and even the moving speed of the object through an AI algorithm or the like, so as to generate different early warning information, such as a reverse collision distance prompt, a collision risk prompt, a door opening risk prompt or a rear vehicle overtaking prompt.
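As a hedged sketch of how such prompts might be derived from the stored three-dimensional coordinates (the thresholds, names and the single closing-speed input are assumptions for illustration, not values from the application):

```python
import numpy as np

# Assumed thresholds; the application does not specify numeric values.
REVERSE_WARN_M = 1.5        # reverse collision distance threshold, meters
DOOR_ZONE_M = 1.2           # door opening risk zone, meters
TTC_WARN_S = 2.0            # time-to-collision threshold, seconds
OVERTAKE_SPEED_MPS = 2.0    # closing speed suggesting a rear vehicle overtaking

def early_warnings(points_3d: np.ndarray, closing_speed_mps: float) -> list:
    """Derive prompts from per-pixel 3D coordinates (vehicle frame, Z ahead)."""
    prompts = []
    z = points_3d[..., 2]
    nearest_m = float(z[z > 0].min()) if np.any(z > 0) else float("inf")
    if nearest_m < REVERSE_WARN_M:
        prompts.append("reverse collision distance: %.1f m" % nearest_m)
    if nearest_m < DOOR_ZONE_M:
        prompts.append("door opening risk")
    if closing_speed_mps > 0 and nearest_m / closing_speed_mps < TTC_WARN_S:
        prompts.append("collision risk")
    if closing_speed_mps > OVERTAKE_SPEED_MPS:
        prompts.append("rear vehicle overtaking")
    return prompts
```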
The foregoing exemplarily describes an early warning system for a vehicle according to an embodiment of the present application. Based on the above description, the early warning system can obtain first image data, second image data, third image data and fourth image data having three-dimensional image data based on the two-dimensional image data of the first side, the second side and the rear of the vehicle collected by the first camera module, the second camera module and the third camera module, and perform fusion processing on them to obtain image data that includes holographic image data; distance information between objects around the vehicle and the vehicle can be provided based on this image data, thereby improving driving safety and driving experience. Illustratively, the collected two-dimensional image data of the first side, the second side and the rear of the vehicle include first through seventh two-dimensional image data. First image data having three-dimensional image data can be generated based on the first and second two-dimensional image data, second image data having three-dimensional image data based on the fourth and fifth two-dimensional image data, third image data having three-dimensional image data based on the third and seventh two-dimensional image data, and fourth image data having three-dimensional image data based on the sixth and seventh two-dimensional image data; the first, second, third and fourth image data are then fused to obtain the image data, which includes holographic image data. The holographic image data can provide the distance between objects around the vehicle and the vehicle, so a user can obtain distance information directly from the image data presented by the presentation device, improving driving safety and driving experience. Moreover, because the image data is obtained by processing two-dimensional image data collected by cameras with different focal lengths, it can be magnified smoothly without loss of quality, realizing a function equivalent to optical zoom; and since no other sensor data is needed, the cost of the scheme is lower.
Next, an early warning method for a vehicle according to an embodiment of the present application will be described with reference to Fig. 3, which shows a schematic flowchart of the method. For example, the early warning system for a vehicle described above may be used to implement this early warning method.

As shown in Fig. 3, the early warning method 300 for a vehicle according to an embodiment of the present application includes the following steps:
First, step S310 is performed to acquire two-dimensional image data of the first side of the vehicle, two-dimensional image data of the second side of the vehicle, and two-dimensional image data of the rear of the vehicle, respectively.
In one example, the two-dimensional image data of the first side of the vehicle includes first two-dimensional image data, second two-dimensional image data, and third two-dimensional image data, and the two-dimensional image data of the second side of the vehicle includes fourth two-dimensional image data, fifth two-dimensional image data, and sixth two-dimensional image data, wherein the first two-dimensional image data and the second two-dimensional image data at least partially overlap, and the fourth two-dimensional image data and the fifth two-dimensional image data at least partially overlap. For example, the two-dimensional image data of the first side and the second side of the vehicle may be acquired by cameras having different focal lengths. Illustratively, the focal lengths of the cameras that collect the first and second two-dimensional image data are smaller than the focal length of the camera that collects the third two-dimensional image data, and the focal lengths of the cameras that collect the fourth and fifth two-dimensional image data are smaller than the focal length of the camera that collects the sixth two-dimensional image data.
In one example, the two-dimensional image data of the rear of the vehicle includes seventh two-dimensional image data; the third two-dimensional image data at least partially overlaps the seventh two-dimensional image data, and the sixth two-dimensional image data at least partially overlaps the seventh two-dimensional image data. For example, the two-dimensional image data of the rear of the vehicle may be acquired by a camera. Illustratively, the focal length of the camera that collects the seventh two-dimensional image data is greater than the focal lengths of the cameras that collect the first, second, fourth and fifth two-dimensional image data. Illustratively, the focal lengths of the cameras that collect the third, sixth and seventh two-dimensional image data are the same. Illustratively, the focal lengths of the cameras that collect the first, second, fourth and fifth two-dimensional image data are less than 40 mm, and the focal lengths of the cameras that collect the third, sixth and seventh two-dimensional image data are greater than 100 mm.
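Illustratively, this seven-camera layout can be summarized in code. The following Python sketch is a hypothetical configuration consistent with the stated focal-length constraints (the names, views and exact focal lengths are assumptions for illustration, not values from the patent):

    from dataclasses import dataclass

    @dataclass
    class Camera:
        name: str
        view: str            # "first side", "second side", or "rear"
        focal_length_mm: float

    # Hypothetical rig: wide-angle pairs (< 40 mm) provide the overlapping
    # images used for stereo depth; telephoto units (> 100 mm) cover the rear.
    rig = [
        Camera("cam1", "first side", 28.0),    # first two-dimensional image data
        Camera("cam2", "first side", 28.0),    # second
        Camera("cam3", "first side", 120.0),   # third
        Camera("cam4", "second side", 28.0),   # fourth
        Camera("cam5", "second side", 28.0),   # fifth
        Camera("cam6", "second side", 120.0),  # sixth
        Camera("cam7", "rear", 120.0),         # seventh
    ]

    wide = [c for c in rig if c.focal_length_mm < 40.0]
    tele = [c for c in rig if c.focal_length_mm > 100.0]
    assert len(wide) == 4 and len(tele) == 3   # sanity-check the layout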
Next, step S320 is performed: fusion processing is performed on the first image data, the second image data, the third image data and the fourth image data, each having three-dimensional image data, to obtain image data. The first, second, third and fourth image data are generated based on the acquired two-dimensional image data of the first side, the second side and the rear of the vehicle, and the resulting image data includes holographic image data.
In one example, generating the first, second, third and fourth image data based on the acquired two-dimensional image data of the first side, the second side and the rear of the vehicle comprises: generating the first image data based on the first and second two-dimensional image data, generating the second image data based on the fourth and fifth two-dimensional image data, generating the third image data based on the third and seventh two-dimensional image data, and generating the fourth image data based on the sixth and seventh two-dimensional image data.
In one example, generating the first image data based on the first and second two-dimensional image data comprises: acquiring depth information of the image data of the overlapping portion of the first and second two-dimensional image data; performing fusion processing on the first and second two-dimensional image data to obtain first fused image data, wherein the first part of the first fused image data is the image data of the overlapping portion of the first and second two-dimensional image data; and converting the first part of the first fused image data into three-dimensional image data based on that depth information, thereby obtaining the first image data.

Illustratively, the acquired first and second two-dimensional image data at least partially coincide. When the same object appears in both, its position in the first two-dimensional image data is offset from its position in the second; this offset is called parallax (disparity). The disparity can be obtained by calculating the distance difference between the corresponding points of the object in the two images, and the depth of the object can then be obtained from the disparity and the related camera parameters through the triangulation principle. A depth map may be generated to store this depth information.

Illustratively, because the first and second two-dimensional image data at least partially coincide, part of the first fused image data is contained in both. In this embodiment, the first part of the first fused image data is the image data of their overlapping portion, and the remaining part is the image data of the non-overlapping portions. Since the depth information of the overlapping portion stores three-dimensional coordinate information, the overlapping portion can be converted from two-dimensional image data to three-dimensional image data by an image conversion algorithm based on that depth information; that is, the first part of the first fused image data is converted from a two-dimensional image to a three-dimensional image. The first image data obtained in this way includes three-dimensional image data for the overlapping portion and two-dimensional image data for the non-overlapping portions.

Illustratively, by first fusing the first and second two-dimensional image data into the first fused image data, the resolution can be improved and the field of view enlarged.
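Illustratively, the triangulation step invoked above follows the classic rectified-stereo relation Z = f * B / d. The following Python sketch assumes a rectified image pair with the focal length expressed in pixels and a known baseline between the two cameras (parameter names and the example values are illustrative):

    import numpy as np

    def depth_from_disparity(disparity_px, focal_px, baseline_m):
        """Rectified-stereo triangulation: Z = f * B / d.
        disparity_px: per-pixel disparity map (pixels), 0 where unmatched.
        focal_px: focal length expressed in pixels.
        baseline_m: distance between the two camera centers (meters)."""
        disparity = np.asarray(disparity_px, dtype=np.float64)
        depth = np.full_like(disparity, np.inf)   # unmatched pixels: no depth
        valid = disparity > 0
        depth[valid] = focal_px * baseline_m / disparity[valid]
        return depth   # the depth map the text describes storing

    # Example: a 25-pixel disparity with f = 1000 px and a 0.12 m baseline
    # places the object about 4.8 m away.
    print(depth_from_disparity(np.array([[25.0]]), 1000.0, 0.12))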
In one example, generating the second image data based on the fourth and fifth two-dimensional image data comprises: acquiring depth information of the image data of the overlapping portion of the fourth and fifth two-dimensional image data; performing fusion processing on the fourth and fifth two-dimensional image data to obtain second fused image data, wherein the first part of the second fused image data is the image data of the overlapping portion; and converting the first part of the second fused image data into three-dimensional image data based on that depth information, thereby obtaining the second image data. Illustratively, the disparity, depth information and depth map are obtained for the fourth and fifth two-dimensional image data in the same manner as described above for the first and second two-dimensional image data, and the overlapping portion is likewise converted from two-dimensional to three-dimensional image data, so that the second image data includes three-dimensional image data for the overlapping portion and two-dimensional image data for the non-overlapping portions.

Illustratively, by first fusing the fourth and fifth two-dimensional image data into the second fused image data, the resolution can be improved and the field of view enlarged.
In one example, generating the third image data based on the third and seventh two-dimensional image data comprises: acquiring depth information of the image data of the overlapping portion of the third and seventh two-dimensional image data; performing fusion processing on the third and seventh two-dimensional image data to obtain third fused image data, wherein the first part of the third fused image data is the image data of the overlapping portion; and converting the first part of the third fused image data into three-dimensional image data based on that depth information, thereby obtaining the third image data. Illustratively, the disparity, depth information (that is, the three-dimensional coordinates of the object) and depth map are obtained for the third and seventh two-dimensional image data in the same manner as described above, and the overlapping portion is likewise converted from two-dimensional to three-dimensional image data, so that the third image data includes three-dimensional image data for the overlapping portion and two-dimensional image data for the non-overlapping portions.

Illustratively, by first fusing the third and seventh two-dimensional image data into the third fused image data, the resolution can be improved and the field of view enlarged.
In one example, generating the fourth image data based on the sixth and seventh two-dimensional image data comprises: acquiring depth information of the image data of the overlapping portion of the sixth and seventh two-dimensional image data; performing fusion processing on the sixth and seventh two-dimensional image data to obtain fourth fused image data, wherein the first part of the fourth fused image data is the image data of the overlapping portion; and converting the first part of the fourth fused image data into three-dimensional image data based on that depth information, thereby obtaining the fourth image data. Illustratively, the disparity, depth information and depth map are obtained for the sixth and seventh two-dimensional image data in the same manner as described above, and the overlapping portion is likewise converted from two-dimensional to three-dimensional image data, so that the fourth image data includes three-dimensional image data for the overlapping portion and two-dimensional image data for the non-overlapping portions.

Illustratively, by first fusing the sixth and seventh two-dimensional image data into the fourth fused image data, the resolution can be improved and the field of view enlarged.
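Illustratively, the text leaves the "image conversion algorithm" unspecified. A common realization, sketched below under the assumption of a pinhole camera model with known intrinsics (fx, fy, cx and cy are assumed parameters), back-projects each pixel of the overlapping portion to a three-dimensional point using the depth map:

    import numpy as np

    def backproject_to_3d(depth, fx, fy, cx, cy):
        """Lift a depth map (H x W, meters) to per-pixel 3D coordinates in
        the camera frame under a pinhole model. Returns H x W x 3 (X, Y, Z)."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=-1)

    # Example: a 2 x 2 depth patch at a constant 5 m yields four 3D points,
    # all with Z = 5.0 and X/Y spread determined by the intrinsics.
    pts = backproject_to_3d(np.full((2, 2), 5.0), fx=1000.0, fy=1000.0, cx=0.5, cy=0.5)
    print(pts[..., 2])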
In one example, performing fusion processing on the first, second, third and fourth image data having three-dimensional image data to obtain image data includes: performing fusion processing on the first image data and the third image data to obtain first composite image data; and performing fusion processing on the second image data and the fourth image data to obtain second composite image data. For example, the three-dimensional image data in the first composite image data can provide distance information between the vehicle and objects on the first side of the vehicle and to the rear of that side, and the three-dimensional image data in the second composite image data can provide distance information between the vehicle and objects on the second side of the vehicle and to the rear of that side. Because the first and second composite image data contain three-dimensional image data and two-dimensional image data obtained from two-dimensional image data collected at different focal lengths, they can be magnified smoothly, for example through an image smooth-transition algorithm, so that their quality is not degraded during magnification, achieving an effect equivalent to optical zoom.
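Illustratively, the image smooth-transition algorithm is not detailed in the patent. One plausible form, sketched below with assumed parameters, blends the wide-angle view with the aligned telephoto view as the requested magnification crosses a handover interval, so the displayed image never jumps in quality:

    import numpy as np

    def smooth_zoom(wide, tele, zoom, handover=(1.5, 3.0)):
        """Blend a wide-angle frame with an aligned telephoto frame as the
        zoom factor crosses the handover interval. Both inputs are H x W x 3
        float arrays already registered to the same view."""
        lo, hi = handover
        # Blend weight ramps from 0 (all wide) to 1 (all tele) across [lo, hi].
        w = np.clip((zoom - lo) / (hi - lo), 0.0, 1.0)
        return (1.0 - w) * wide + w * tele

    # Example: at 1.5x and below the wide image is shown; at 3x and above,
    # the telephoto; at 2.25x the output is halfway through the handover.
    wide, tele = np.zeros((4, 4, 3)), np.ones((4, 4, 3))
    print(smooth_zoom(wide, tele, 2.25).mean())   # 0.5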
Finally, step S330 is executed to present the image data. For example, the image data may be shown on a display or presented by holographic projection; embodiments are not limited in this regard. For example, when the image data includes the first composite image data and the second composite image data, the two may be presented separately.
In one example, the early warning method for a vehicle according to the embodiment of the present application further includes: outputting early warning information based on the image data; and presenting the early warning information. For example, the early warning information may be output based on the three-dimensional image data in the image data through an AI algorithm or the like, and may be shown on a display or presented by holographic projection; embodiments are not limited in this regard.
In one example, the early warning information includes at least one of: a reversing collision distance prompt, a collision risk prompt, a door-opening risk prompt, and a rear-vehicle overtaking prompt. For example, because the three-dimensional image data in the image data stores three-dimensional coordinate information, the distance between an object and the vehicle, and even the moving speed of the object, can be calculated through an AI algorithm or the like, so as to generate the corresponding early warning information.
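Illustratively, the mapping from measured distance and speed to the four prompt types could take the following form. This Python sketch is a guess at one possible rule set; every threshold is an assumption for illustration and does not appear in the patent:

    def classify_warnings(distance_m, closing_speed_mps, reversing, door_opening):
        """Map an object's distance and closing speed to the prompt types
        named in the text. All thresholds are illustrative assumptions."""
        warnings = []
        if reversing and distance_m < 1.5:
            warnings.append("reversing collision distance prompt")
        # Time-to-collision: how soon the object reaches the vehicle.
        ttc = distance_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")
        if ttc < 2.0:
            warnings.append("collision risk prompt")
        if door_opening and distance_m < 3.0 and closing_speed_mps > 0:
            warnings.append("door opening risk prompt")
        if closing_speed_mps > 3.0 and distance_m < 20.0:
            warnings.append("rear vehicle overtaking prompt")
        return warnings

    # An object 8 m back, closing at 5 m/s, while driving forward:
    print(classify_warnings(8.0, 5.0, reversing=False, door_opening=False))
    # ['collision risk prompt', 'rear vehicle overtaking prompt']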
The foregoing exemplarily describes an early warning method for a vehicle according to an embodiment of the present application. Based on the above description, the method can obtain the first, second, third and fourth image data, each having three-dimensional image data, from the collected two-dimensional image data of the first side, the second side and the rear of the vehicle, and can fuse these four sets of image data to obtain image data that includes holographic image data. Distance information between the surroundings of the vehicle and the vehicle can be provided based on this image data, improving driving safety and experience.

Illustratively, the collected two-dimensional image data of the first side, the second side and the rear of the vehicle comprises first through seventh two-dimensional image data. The first image data having three-dimensional image data is generated based on the first and second two-dimensional image data; the second image data based on the fourth and fifth two-dimensional image data; the third image data based on the third and seventh two-dimensional image data; and the fourth image data based on the sixth and seventh two-dimensional image data. The first, second, third and fourth image data are then fused to obtain image data including holographic image data, which can provide the distance between objects around the vehicle and the vehicle, so that a user can read distance information directly from the presented image data, further improving driving safety and experience. Moreover, because the image data is obtained by processing different collected two-dimensional image data, it can be magnified smoothly without loss of quality, achieving an effect equivalent to optical zoom, and no additional sensor data is required, so the cost is low.
The embodiment of the application also provides a vehicle, which comprises the early warning system for the vehicle. Illustratively, the vehicle may also include other component structures, as embodiments of the application are not limited in this regard.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present invention thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described device embodiments are merely illustrative, e.g., the division of elements is merely a logical function division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted, or not performed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in order to streamline the invention and aid in understanding one or more of the various inventive aspects, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in the description of exemplary embodiments of the invention. However, the method of the present invention should not be construed as reflecting the following intent: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any combination, except combinations where the features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the modules in a device according to embodiments of the present invention may be implemented in practice using a microprocessor or digital signal processor (DSP). The present invention can also be implemented as an apparatus program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
The above description is merely illustrative of the embodiments of the present invention and the protection scope of the present invention is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present invention, and the changes or substitutions are covered by the protection scope of the present invention. The protection scope of the invention is subject to the protection scope of the claims.

Claims (24)

1. An early warning system for a vehicle, comprising:
the first camera module and the second camera module are respectively arranged on the first side and the second side of the vehicle and are used for collecting two-dimensional image data of the first side of the vehicle and two-dimensional image data of the second side of the vehicle;
the third camera module is arranged between the first camera module and the second camera module and is used for collecting two-dimensional image data of the rear of the vehicle;
the controller, communicatively connected with the first camera module, the second camera module and the third camera module, the controller being configured to:
performing fusion processing on first image data, second image data, third image data and fourth image data having three-dimensional image data to obtain image data, wherein the first image data, the second image data, the third image data and the fourth image data are generated based on the two-dimensional image data acquired by the first camera module, the second camera module and the third camera module, and the image data comprises holographic image data;
and the presentation device is used for presenting the image data.
2. The early warning system for a vehicle according to claim 1, wherein the first camera module includes a first camera, a second camera, and a third camera, the second camera module includes a fourth camera, a fifth camera, and a sixth camera, a focal length of the first camera and a focal length of the second camera are smaller than a focal length of the third camera, and a focal length of the fourth camera and a focal length of the fifth camera are smaller than a focal length of the sixth camera; the first camera is configured to acquire first two-dimensional image data, the second camera is configured to acquire second two-dimensional image data, the third camera is configured to acquire third two-dimensional image data, the fourth camera is configured to acquire fourth two-dimensional image data, the fifth camera is configured to acquire fifth two-dimensional image data, and the sixth camera is configured to acquire sixth two-dimensional image data; the first two-dimensional image data and the second two-dimensional image data at least partially overlap, and the fourth two-dimensional image data and the fifth two-dimensional image data at least partially overlap.
3. The early warning system for a vehicle of claim 2, wherein the third camera module includes a seventh camera, a focal length of the seventh camera being greater than the focal length of the first camera, the focal length of the second camera, the focal length of the fourth camera, and the focal length of the fifth camera; the seventh camera is configured to acquire seventh two-dimensional image data, the seventh two-dimensional image data at least partially coinciding with the third two-dimensional image data and at least partially coinciding with the sixth two-dimensional image data.
4. The warning system for a vehicle of claim 3, wherein the first image data, the second image data, the third image data, and the fourth image data are generated based on two-dimensional image data acquired by the first camera module, the second camera module, and the third camera module, comprising:
The first image data is generated based on the first two-dimensional image data and the second two-dimensional image data, the second image data is generated based on the fourth two-dimensional image data and the fifth two-dimensional image data, the third image data is generated based on the third two-dimensional image data and the seventh two-dimensional image data, and the fourth image data is generated based on the sixth two-dimensional image data and the seventh two-dimensional image data.
5. The early warning system for a vehicle according to claim 4, wherein the first image data is generated based on the first two-dimensional image data and the second two-dimensional image data, comprising:
Acquiring depth information of image data of a superposition part of the first two-dimensional image data and the second two-dimensional image data;
performing fusion processing on the first two-dimensional image data and the second two-dimensional image data to obtain first fusion image data, wherein a first part of image data of the first fusion image data is image data of a superposition part of the first two-dimensional image data and the second two-dimensional image data;
And converting the first part of image data of the first fused image data into three-dimensional image data based on the depth information of the image data of the overlapping part of the first two-dimensional image data and the second two-dimensional image data to obtain the first image data.
6. The warning system for a vehicle according to claim 4 or 5, characterized in that the second image data is generated based on the fourth two-dimensional image data and the fifth two-dimensional image data, comprising:
acquiring depth information of image data of a superposition part of the fourth two-dimensional image data and the fifth two-dimensional image data;
performing fusion processing on the fourth two-dimensional image data and the fifth two-dimensional image data to obtain second fusion image data, wherein the first part of image data of the second fusion image data is the image data of the superposition part of the fourth two-dimensional image data and the fifth two-dimensional image data;
And converting the first part of image data of the second fused image data into three-dimensional image data based on the depth information of the image data of the overlapping part of the fourth two-dimensional image data and the fifth two-dimensional image data to obtain the second image data.
7. The warning system for a vehicle according to any one of claims 4 to 6, characterized in that the third image data is generated based on the third two-dimensional image data and the seventh two-dimensional image data, comprising:
acquiring depth information of image data of a superposition portion of the third two-dimensional image data and the seventh two-dimensional image data;
Performing fusion processing on the third two-dimensional image data and the seventh two-dimensional image data to obtain third fusion image data, wherein the first part of image data of the third fusion image data is the image data of the superposition part of the third two-dimensional image data and the seventh two-dimensional image data;
And converting the first part of image data of the third fused image data into three-dimensional image data based on the depth information of the image data of the overlapped part of the third two-dimensional image data and the seventh two-dimensional image data to obtain the third image data.
8. The warning system for a vehicle according to any one of claims 4 to 7, characterized in that the fourth image data is generated based on the sixth two-dimensional image data and the seventh two-dimensional image data, comprising:
acquiring depth information of image data of a superposition portion of the sixth two-dimensional image data and the seventh two-dimensional image data;
Performing fusion processing on the sixth two-dimensional image data and the seventh two-dimensional image data to obtain fourth fused image data, wherein the first part of image data of the fourth fused image data is the image data of the superposition part of the sixth two-dimensional image data and the seventh two-dimensional image data;
And converting the first part of image data of the fourth fused image data into three-dimensional image data based on the depth information of the image data of the overlapping part of the sixth two-dimensional image data and the seventh two-dimensional image data to obtain the fourth image data.
9. The warning system for a vehicle according to any one of claims 1 to 8, wherein the controller performs fusion processing of the first image data, the second image data, the third image data, and the fourth image data having three-dimensional image data to obtain image data, comprising:
performing fusion processing on the first image data and the third image data to obtain first composite image data; and
performing fusion processing on the second image data and the fourth image data to obtain second composite image data.
10. The warning system for a vehicle of claim 9, wherein the presentation device comprises a first presentation device for presenting the first composite image data and a second presentation device for presenting the second composite image data.
11. The warning system for a vehicle according to any one of claims 1 to 10, characterized in that,
The controller is also used for outputting early warning information based on the image data;
The presentation device is also used for presenting the early warning information.
12. The warning system for a vehicle of claim 11, wherein the warning information includes at least one of:
a reversing collision distance prompt, a collision risk prompt, a door-opening risk prompt, and a rear-vehicle overtaking prompt.
13. A warning method for a vehicle, comprising:
Respectively acquiring two-dimensional image data of a first side of the vehicle, two-dimensional image data of a second side of the vehicle and two-dimensional image data of the rear of the vehicle;
Performing fusion processing on first image data, second image data, third image data and fourth image data with three-dimensional image data to obtain image data, wherein the first image data, the second image data, the third image data and the fourth image data are generated based on acquired two-dimensional image data of a first side of the vehicle, two-dimensional image data of a second side of the vehicle and two-dimensional image data of a rear of the vehicle, and the image data comprises holographic image data;
And presenting the image data.
14. The warning method for a vehicle of claim 13, wherein the two-dimensional image data of the first side of the vehicle comprises first two-dimensional image data, second two-dimensional image data, and third two-dimensional image data, and the two-dimensional image data of the second side of the vehicle comprises fourth two-dimensional image data, fifth two-dimensional image data, and sixth two-dimensional image data, wherein the first two-dimensional image data and the second two-dimensional image data at least partially overlap, and the fourth two-dimensional image data and the fifth two-dimensional image data at least partially overlap.
15. The warning method for a vehicle according to claim 14, wherein the two-dimensional image data of the rear of the vehicle includes seventh two-dimensional image data, the third two-dimensional image data and the seventh two-dimensional image data at least partially overlap, and the sixth two-dimensional image data and the seventh two-dimensional image data at least partially overlap.
16. The warning method for a vehicle according to claim 15, wherein the first image data, the second image data, the third image data, and the fourth image data are generated based on the collected two-dimensional image data of the first side of the vehicle, the two-dimensional image data of the second side of the vehicle, and the two-dimensional image data of the rear of the vehicle, comprising:
The first image data is generated based on the first two-dimensional image data and the second two-dimensional image data, the second image data is generated based on the fourth two-dimensional image data and the fifth two-dimensional image data, the third image data is generated based on the third two-dimensional image data and the seventh two-dimensional image data, and the fourth image data is generated based on the sixth two-dimensional image data and the seventh two-dimensional image data.
17. The warning method for a vehicle according to claim 16, characterized in that the first image data is generated based on the first two-dimensional image data and the second two-dimensional image data, comprising:
Acquiring depth information of image data of a superposition part of the first two-dimensional image data and the second two-dimensional image data;
performing fusion processing on the first two-dimensional image data and the second two-dimensional image data to obtain first fusion image data, wherein a first part of image data of the first fusion image data is image data of a superposition part of the first two-dimensional image data and the second two-dimensional image data;
And converting the first part of image data of the first fused image data into three-dimensional image data based on the depth information of the image data of the overlapping part of the first two-dimensional image data and the second two-dimensional image data to obtain the first image data.
18. The warning method for a vehicle according to claim 16 or 17, characterized in that the second image data is generated based on the fourth two-dimensional image data and the fifth two-dimensional image data, comprising:
acquiring depth information of image data of a superposition part of the fourth two-dimensional image data and the fifth two-dimensional image data;
performing fusion processing on the fourth two-dimensional image data and the fifth two-dimensional image data to obtain second fusion image data, wherein the first part of image data of the second fusion image data is the image data of the superposition part of the fourth two-dimensional image data and the fifth two-dimensional image data;
And converting the first part of image data of the second fused image data into three-dimensional image data based on the depth information of the image data of the overlapping part of the fourth two-dimensional image data and the fifth two-dimensional image data to obtain the second image data.
19. The warning method for a vehicle according to any one of claims 16 to 18, characterized in that the third image data is generated based on the third two-dimensional image data and the seventh two-dimensional image data, comprising:
acquiring depth information of image data of a superposition portion of the third two-dimensional image data and the seventh two-dimensional image data;
Performing fusion processing on the third two-dimensional image data and the seventh two-dimensional image data to obtain third fusion image data, wherein the first part of image data of the third fusion image data is the image data of the superposition part of the third two-dimensional image data and the seventh two-dimensional image data;
And converting the first part of image data of the third fused image data into three-dimensional image data based on the depth information of the image data of the overlapped part of the third two-dimensional image data and the seventh two-dimensional image data to obtain the third image data.
20. The warning method for a vehicle according to any one of claims 16 to 19, characterized in that the fourth image data is generated based on the sixth two-dimensional image data and the seventh two-dimensional image data, comprising:
acquiring depth information of image data of a superposition portion of the sixth two-dimensional image data and the seventh two-dimensional image data;
Performing fusion processing on the sixth two-dimensional image data and the seventh two-dimensional image data to obtain fourth fused image data, wherein the first part of image data of the fourth fused image data is the image data of the superposition part of the sixth two-dimensional image data and the seventh two-dimensional image data;
And converting the first part of image data of the fourth fused image data into three-dimensional image data based on the depth information of the image data of the overlapping part of the sixth two-dimensional image data and the seventh two-dimensional image data to obtain the fourth image data.
21. The warning method for a vehicle according to any one of claims 13 to 20, characterized in that the fusion processing of the first image data, the second image data, the third image data, and the fourth image data having three-dimensional image data to obtain image data includes:
performing fusion processing on the first image data and the third image data to obtain first composite image data; and
performing fusion processing on the second image data and the fourth image data to obtain second composite image data.
22. The warning method for a vehicle according to any one of claims 13 to 21, characterized by further comprising:
outputting early warning information based on the image data;
and presenting the early warning information.
23. The warning method for a vehicle of claim 22, wherein the warning information includes at least one of:
a reversing collision distance prompt, a collision risk prompt, a door-opening risk prompt, and a rear-vehicle overtaking prompt.
24. A vehicle, characterized in that it comprises an early warning system for a vehicle according to any one of claims 1-12.
CN202311788623.5A 2023-12-22 2023-12-22 Early warning system and early warning method for vehicle and vehicle Pending CN118270035A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311788623.5A CN118270035A (en) 2023-12-22 2023-12-22 Early warning system and early warning method for vehicle and vehicle

Publications (1)

Publication Number Publication Date
CN118270035A 2024-07-02

Family

ID=91644259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311788623.5A Pending CN118270035A (en) 2023-12-22 2023-12-22 Early warning system and early warning method for vehicle and vehicle

Country Status (1)

Country Link
CN (1) CN118270035A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination