CN111114434B - Vision-assisted imaging method, vehicle-mounted vision-assisted system and storage device - Google Patents

Vision-assisted imaging method, vehicle-mounted vision-assisted system and storage device

Info

Publication number
CN111114434B
CN111114434B (application CN201911080211.XA)
Authority
CN
China
Prior art keywords
vehicle
image information
user
information
display device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201911080211.XA
Other languages
Chinese (zh)
Other versions
CN111114434A (en)
Inventor
李海宁
孙永刚
李江伟
张荃
黄继辉
唐新鲁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Desay Microelectronic Technology Co ltd
Original Assignee
Shenzhen Desay Microelectronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Desay Microelectronic Technology Co ltd filed Critical Shenzhen Desay Microelectronic Technology Co ltd
Priority to CN201911080211.XA priority Critical patent/CN111114434B/en
Publication of CN111114434A publication Critical patent/CN111114434A/en
Application granted granted Critical
Publication of CN111114434B publication Critical patent/CN111114434B/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
    • B60R2300/303Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing using joined images, e.g. multiple camera images
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
    • B60R2300/307Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing virtually distinguishing relevant parts of a scene from the background of the scene
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/8073Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for vehicle security, e.g. parked vehicle surveillance, burglar detection
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/8093Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for obstacle warning

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The application provides a vision-assisted imaging method, a vehicle-mounted vision-assisted system, and a storage device. The vision-assisted imaging method is applied to a vision assistance system comprising a display device arranged at a vehicle blind-spot position, a user viewing-angle acquisition device, an in-vehicle camera, and a vehicle-exterior camera. The method comprises the following steps: acquiring viewing-angle information of a user; when the viewing-angle information meets a preset assistance condition, acquiring the vehicle-exterior image information in the direction corresponding to the viewing-angle information and the vehicle-interior image information at the user's viewing angle; stitching the vehicle-exterior image information and the vehicle-interior image information to obtain stitched image information; cropping target image information from the stitched image information; and displaying the target image information through the display device. The image content shown by the display device is thus closer to the actual scene, which helps the driver understand the real situation beyond the visual blind spot and thereby reduces driving risk.

Description

Vision-assisted imaging method, vehicle-mounted vision-assisted system and storage device
Technical Field
The present disclosure relates to the field of vehicle-mounted electronics, and in particular, to a vision-assisted imaging method, a vehicle-mounted vision-assisted system, and a storage device.
Background
The support pillar at the front of the vehicle body, also called the A-pillar, connects the front windshield and the front door and is an important structural element for vehicle-body safety. Structurally, the A-pillar must satisfy rollover safety requirements as well as increasingly strict roof-crush standards. However, the A-pillar also obstructs the driver's view: in daily driving, the forward line of sight of many drivers is partially blocked by the A-pillar, which narrows the visual range and increases the danger to pedestrians and other vulnerable road users.
To address the blind spot caused by the A-pillar, there are two main solutions. The first is to change the physical structure of the A-pillar, for example by adding a fixed quarter window at the front door window to enlarge the forward-side field of view. The second is to use a vehicle-mounted vision assistance system: a camera that detects the driver's movements, a camera that captures the scene outside the vehicle, and a streaming-media screen are added to the vehicle, and the A-pillar blind-spot scene is presented to the driver on the screen.
However, even with a typical electronic vision system, the scene rendered by the system differs from the real scene, so the imaging effect is poor and the driver still finds it difficult to judge the actual situation beyond the A-pillar, which creates driving risk.
Disclosure of Invention
The application provides a vision auxiliary imaging method, a vehicle-mounted vision auxiliary system and a storage device, which can improve the imaging effect of the vision auxiliary system and reduce the driving risk.
The application provides a vision-assisted imaging method applied to a vision assistance system. The system includes a display device arranged at a vehicle blind-spot position, a user viewing-angle acquisition device, an in-vehicle camera, and a vehicle-exterior camera. The method includes:
acquiring viewing-angle information of a user;
when the viewing-angle information meets a preset assistance condition, acquiring the vehicle-exterior image information in the direction corresponding to the viewing-angle information and the vehicle-interior image information at the user's viewing angle;
stitching the vehicle-exterior image information and the vehicle-interior image information to obtain stitched image information;
cropping target image information from the stitched image information, wherein the target image information is associated with a display area of the display device in the stitched image information;
and displaying the target image information through the display device.
The present application further provides an on-vehicle vision assistance system, including:
the user viewing-angle acquisition device, facing the user's head and used for acquiring viewing-angle information of the user;
the in-vehicle camera, arranged near the driver's head and used for acquiring vehicle-interior image information at the user's viewing angle;
the vehicle-exterior camera, arranged outside the vehicle and used for acquiring vehicle-exterior image information;
the display device, arranged at the vehicle blind-spot position and used for displaying the target image information; and
the processor, electrically connected with the user viewing-angle acquisition device, the in-vehicle camera, the vehicle-exterior camera, and the memory.
the memory has stored therein a computer program, and the processor executes the visual auxiliary imaging method as described above by calling the computer program stored in the memory.
The present application also provides a storage medium having stored therein a computer program which, when run on a computer, causes the computer to perform a visually assisted imaging method as described above.
As can be seen from the above, the vision-assisted imaging method, vehicle-mounted vision assistance system, and storage device of the embodiments of the application acquire the vehicle-exterior and vehicle-interior image information in the direction corresponding to the user's viewing angle, stitch the two together, and crop the corresponding target image information, so that the content shown on the display device matches the external scene and the imaging effect of the vision assistance system is improved. Because the displayed content is closer to the actual scene, the driver can better understand the real situation beyond the visual blind spot, which reduces driving risk.
Drawings
Fig. 1 is a schematic view of an application scenario of a vision-assisted imaging system according to an embodiment of the present application.
Fig. 2 is a flowchart of an implementation of a visual auxiliary imaging method according to an embodiment of the present application.
Fig. 3 is a schematic view of an application scenario of an image stitching process according to an embodiment of the present application.
Fig. 4 is a schematic view of an application scenario of the visual auxiliary imaging method according to the embodiment of the present application.
Fig. 5 is a flowchart of an implementation of obtaining image information outside a vehicle and image information inside the vehicle according to the embodiment of the present application.
Fig. 6 is a schematic view of another application scenario of the visual auxiliary imaging method according to the embodiment of the present application.
Fig. 7 is a flowchart of an implementation of adjusting an image capturing position according to an embodiment of the present application.
Fig. 8 is a flowchart of another implementation of adjusting an image capturing position according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of an in-vehicle vision assistance system according to an embodiment of the present application.
Fig. 10 is another schematic structural diagram of an in-vehicle vision assistance system provided in the embodiment of the present application.
Detailed Description
The following detailed description of the preferred embodiments of the present application, taken in conjunction with the accompanying drawings, will make the advantages and features of the present application more readily appreciated by those skilled in the art, and thus will more clearly define the scope of the invention.
The application discloses a vision-aided imaging method, which is applied to a vision-aided system.
Referring to fig. 1, an application scenario diagram of a vision assistance system provided in an embodiment of the present application is shown, where the vehicle-mounted vision assistance system may include a user viewing angle obtaining device 111, an in-vehicle camera 112, an out-vehicle camera 113, and a display device 12.
The user viewing-angle acquisition device 111 may be disposed facing the user's head; it determines the user's viewing angle, that is, where the user is looking, by acquiring head-movement information such as head rotation or eyeball rotation.
Specifically, the user viewing-angle acquisition device 111 may be at least one of a color camera, an infrared camera, and a depth camera, which captures the user's head/eye features so that a recognition algorithm can determine the head-rotation angle parameter and/or the eye-gaze angle parameter. It is understood that the recognition algorithm may be any head/eye recognition algorithm commonly used in the art, and the present application is not limited in this respect.
In some embodiments, the user angle of view acquisition device 111 may be installed on the opposite side of the driver's seat, or may be designed to be worn on the head of the user, and a camera and a position sensor are used to detect the head/eyeball characteristics of the user.
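As a rough illustration of how a recognized head-rotation angle might be mapped to a blind-spot direction, the sketch below classifies which A-pillar (if any) the driver has turned toward. The sign convention (positive yaw = rightward), the 30-degree pillar angle, and the tolerance are all illustrative assumptions, not values given in this application:

```python
def pillar_in_view(head_yaw_deg, pillar_angle_deg=30.0, tolerance_deg=10.0):
    """Classify which A-pillar, if any, the driver's head is turned toward.

    head_yaw_deg: head rotation from the straight-ahead direction
                  (positive = right, negative = left) -- an assumed
                  convention, not one specified by this application.
    """
    if abs(head_yaw_deg - pillar_angle_deg) <= tolerance_deg:
        return "right"
    if abs(head_yaw_deg + pillar_angle_deg) <= tolerance_deg:
        return "left"
    return None  # looking ahead or elsewhere; no assistance needed
```

For instance, a detected yaw of 32 degrees falls within the right-pillar tolerance band, so the right-side display would be the candidate for activation.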
The in-vehicle camera 112 may be disposed near the head of the driver to obtain in-vehicle image information from the user's perspective. Wherein the in-vehicle camera 112 can view the display device 12 of the vehicle blind spot location, as well as the view outside the vehicle.
In some embodiments, the in-vehicle camera 112 is positioned near the head of the driver such that its viewing angle is similar to the user's viewing angle, and in-vehicle image information at the user's viewing angle may be obtained through some conversion algorithm. In addition, the in-vehicle camera 112 may also be provided on a head-mounted device. The specific placement of the in-vehicle camera 112 may be determined according to actual conditions.
The vehicle exterior camera 113 may be disposed outside the vehicle to acquire image information outside the vehicle. Specifically, the off-vehicle cameras 113 may be disposed on both sides of the outside of the vehicle, for example, at the rear-view mirror of the vehicle, to ensure that the off-vehicle image information includes the outside view of the vehicle blind spot position blocked by the a-pillar.
The display device 12 is disposed at the vehicle blind-spot position and is used for displaying the target image information, i.e., the exterior scene captured by the vehicle-exterior camera 113.
Referring to fig. 2, a flow chart of implementing the visual auxiliary imaging method according to the embodiment of the present application is shown.
As shown in fig. 2, the visual aid imaging method includes:
101. and acquiring the visual angle information of the user.
The user's perspective information may be acquired by a user perspective acquisition device in the vehicle, which may detect the movement of the user's head or eyes to determine the user's current perspective.
The angle of view information may include a rotation angle parameter of the head and/or a line of sight angle parameter of the eyeball.
102. And when the visual angle information meets the preset auxiliary conditions, acquiring the image information outside the vehicle of the vehicle in the direction corresponding to the visual angle information and the image information inside the vehicle at the user visual angle.
The assist condition may be considered as a condition when the user needs to observe the vehicle blind spot position, that is, the a-pillar position. The vehicle exterior image information and the vehicle interior image information may be image information corresponding to one frame of image, or may be a video stream formed by a plurality of frames of images, and are not limited herein.
In some embodiments, the auxiliary condition may be to determine whether the rotation angle parameter of the head is within a preset parameter interval; and/or judging whether the eyeball sight angle parameter is positioned in a preset parameter interval.
For example, if the angle between the display device and the driver's seat is 30 degrees, the preset parameter interval may be set to 20-40 degrees. If the angle of the user's head relative to the straight-ahead direction is detected to be 32 degrees, the head-rotation angle parameter falls within the preset parameter interval, and the user's viewing-angle information can be judged to satisfy the preset assistance condition.
And when the visual angle information meets the preset auxiliary conditions, acquiring the image information outside the vehicle at the position corresponding to the visual angle information and the image information inside the vehicle at the user visual angle.
It can be understood that the preset parameter interval is only used for example, the preset parameter interval may be replaced by one or more preset threshold values, and the specific value of the preset parameter interval may be determined according to the specific included angle value between the driver's seat and the display device and the actual situation.
The vehicle-exterior image information should correspond to the direction of the viewing-angle information. For example, if the user's head turns toward the right A-pillar, the acquired exterior image information should be the exterior scene beyond the right A-pillar; if the head turns to the left, it should be the exterior scene beyond the left A-pillar. The acquisition angle of the exterior image information can therefore be adjusted according to the viewing-angle information.
The in-vehicle image information may be associated with the obtained perspective information of the user, and for example, if the head of the user turns to the right a-pillar direction, the obtained in-vehicle image information may include an image of the right a-pillar direction of the vehicle.
It can be understood that, besides triggering image acquisition when the user's viewing-angle information satisfies the preset condition, the image information can also be acquired continuously in real time; the choice may be determined by the actual situation.
103. And splicing the image information outside the vehicle and the image information inside the vehicle to obtain spliced image information.
Stitching the vehicle-exterior image information with the vehicle-interior image information means combining the two into stitched image information that shares the same viewing angle and scene scale.
In some embodiments, scene feature points can be extracted from both the exterior and interior image information, and the corresponding feature points of the two can be matched so that each pair occupies the same image position in the stitched image information.
Referring to fig. 3, an application scenario of the image stitching process according to the embodiment of the present application is shown.
As shown in fig. 3, for example, if the image information 1A outside the vehicle includes image information corresponding to the trees 11 outside the vehicle, and the image information 1B inside the vehicle also includes image information corresponding to the trees 11 outside the vehicle, the two pieces of image information may be spliced and superimposed on each other by a conventional splicing means to form uniform spliced image information 1C.
Of course, other existing splicing manners may be adopted, which is not limited in this application.
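The matching-and-stitching idea described above can be sketched in plain Python as a deliberately simplified, translation-only alignment: the offset between the two views is estimated as the mean displacement of matched feature-point pairs, and the exterior image is pasted onto the interior image at that offset. A real system would estimate a full homography (e.g. with an OpenCV-style feature pipeline) rather than a pure translation; images here are nested lists of pixel values for illustration:

```python
def estimate_offset(matches):
    """Mean displacement of matched feature points,
    given as ((x_exterior, y_exterior), (x_interior, y_interior)) pairs."""
    dx = sum(xi - xe for (xe, ye), (xi, yi) in matches) / len(matches)
    dy = sum(yi - ye for (xe, ye), (xi, yi) in matches) / len(matches)
    return dx, dy

def stitch(interior, exterior, matches):
    """Paste the exterior image onto a copy of the interior image at the
    estimated offset; exterior pixels win where the two overlap."""
    dx, dy = estimate_offset(matches)
    dx, dy = int(round(dx)), int(round(dy))
    out = [row[:] for row in interior]
    for y, row in enumerate(exterior):
        for x, px in enumerate(row):
            ty, tx = y + dy, x + dx
            if 0 <= ty < len(out) and 0 <= tx < len(out[0]):
                out[ty][tx] = px
    return out
```

With two matched points both displaced by (2, 1), the exterior patch lands two columns right and one row down in the stitched result, which is exactly the "same feature, same position" condition described above.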
104. And intercepting target image information from the spliced image information, wherein the target image information is associated with a display area of the display device in the spliced image information.
The vehicle-interior image information contains an image region corresponding to the display device, and the area occupied by the display device within that region is referred to as the display area 12.
Specifically, in fig. 3, exterior scenery is visible in both the exterior image information 1A and the interior image information 1B. After the two are stitched, the part of the exterior scene blocked by the A-pillar falls at the position of the display area 12, so the user cannot observe it directly, for example the tree 13 in fig. 3.
During stitching, the image information corresponding to the tree 13 overlaps the display area 12, and it is exactly the information that needs to be shown to the user. Therefore, the target image information can be obtained by cropping, from the stitched image information, the image information corresponding to the tree 13 within the display area 12.
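Cropping the display-area region out of the stitched image then reduces to a rectangular sub-image extraction, sketched below with the image again represented as a nested list of pixels; the rectangle coordinates are assumed to come from a prior screen-detection step:

```python
def crop_display_region(stitched, rect):
    """Cut out the pixels that lie behind the display area.

    rect: (x, y, width, height) of the display area in stitched-image
    coordinates, assumed to be provided by a detection/calibration step.
    """
    x, y, w, h = rect
    return [row[x:x + w] for row in stitched[y:y + h]]
```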
105. And imaging and displaying the target image information through a display device.
Referring to fig. 4, an application scenario of the visual-assisted imaging method according to the embodiment of the present application is shown.
As shown in fig. 4, the use environment inside the vehicle is shown in this scenario. Wherein the display device is disposed inside an A-pillar of the vehicle.
When the user turns the head toward the right A-pillar while driving, the system acquires target image information containing the tree scene and displays it on the display device. By observing the displayed image, the user learns the exterior scene beyond the right A-pillar at the current viewing angle.
Of course, the target image information may be a single frame or a continuously acquired video stream. Moreover, besides being triggered by the viewing-angle information, acquisition can also be made continuous through a function setting so that the display device shows the image persistently; this is not limited herein.
Because the interior and exterior scenes in the stitched image information coincide in viewing angle and scale, the viewing angle and size of the obtained target image information are close to the real scene outside the vehicle. This produces an effect similar to a transparent A-pillar and solves the prior-art problem that the scene shown on the display device does not correspond to the real scene.
Therefore, the vision-assisted imaging method of this embodiment acquires the vehicle-exterior and vehicle-interior image information in the direction corresponding to the user's viewing angle, stitches the two together, and crops the corresponding target image information, so that the image content shown on the display device matches the external scene and the imaging effect of the vision assistance system is improved. Because the displayed content is closer to the actual scene, the driver can better understand the real situation beyond the visual blind spot, which reduces driving risk.
Referring to fig. 5, an implementation flow for obtaining image information outside a vehicle and image information inside the vehicle provided by the embodiment of the present application is shown in the figure.
As shown in fig. 5, when the viewing angle information satisfies the preset auxiliary condition, acquiring the image information outside the vehicle and the image information inside the vehicle at the user viewing angle, including:
201. and judging whether the rotation angle parameter/sight angle parameter of the head is positioned in a preset parameter interval.
For example, if the angle between the display device and the driver's seat is 30 degrees, the preset parameter interval may be set to 20-40 degrees. If the angle of the user's head relative to the straight-ahead direction is detected to be 32 degrees, the head-rotation angle parameter falls within the preset parameter interval, and the user's viewing-angle information can be judged to satisfy the preset assistance condition.
202. And if the rotation angle parameter/sight angle parameter of the head is positioned in the preset parameter interval, judging whether the preset time duration lasts.
To avoid misjudgment, a preset duration can be introduced as one of the conditions for triggering the assistance function, so that the assistance condition becomes: the head-rotation/gaze angle parameter lies within the preset parameter interval and remains there for the preset duration.
For example, if the preset duration is set to 1 second, the preset assistance condition is judged satisfied only when the angle between the user's head and the straight-ahead direction is detected to be, say, 32 degrees, i.e., within the 20-40 degree interval, and remains within that interval for 1 second.
203. And if the preset duration is continued, acquiring the image information outside the vehicle at the position corresponding to the visual angle information and the image information inside the vehicle at the user visual angle.
It can be understood that the preset parameter interval and the preset time length are only used for example, the preset parameter interval may be replaced by one or more preset threshold values, and the specific values of the preset parameter interval and the preset time length may be determined according to the specific included angle value between the driver's seat and the display device and the actual situation.
This approach avoids false triggering of the assistance function and prevents the vision assistance system from disturbing the user unnecessarily.
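The interval-plus-duration trigger of steps 201-203 can be sketched as a small debounce state machine. The 20-40 degree interval and 1-second hold mirror the illustrative values above and are not values prescribed by this application:

```python
class AssistTrigger:
    """Fire the blind-spot display only after the head angle has stayed
    inside the configured interval for `hold_s` seconds."""

    def __init__(self, lo_deg=20.0, hi_deg=40.0, hold_s=1.0):
        self.lo, self.hi, self.hold = lo_deg, hi_deg, hold_s
        self._since = None  # time at which the angle entered the interval

    def update(self, angle_deg, t_s):
        """Feed one (angle, timestamp) sample; returns True when the
        assistance condition is satisfied."""
        if self.lo <= angle_deg <= self.hi:
            if self._since is None:
                self._since = t_s
            return (t_s - self._since) >= self.hold
        self._since = None  # left the interval: restart the hold timer
        return False
```

Feeding samples at 32-33 degrees activates the trigger only once a full second has elapsed; a single glance away resets the timer, which is the misjudgment-avoidance behaviour described above.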
Referring to fig. 6, a method for obtaining target image information according to an embodiment of the present application is shown.
As shown in fig. 6, the intercepting the target image information from the stitched image information may include:
301. and determining a corresponding display area of the display device in the spliced image information, and intercepting the image information in the display area range.
The display area can be confirmed through an identification algorithm, or the display area in the spliced image information is selected through presetting a range frame.
Referring to fig. 3, in the stitched image information 1C, the display device is located inside the a pillar in the stitched image information, and the display area 12 (i.e., the area that can be used to display an image) of the display device can be identified by a recognition algorithm.
After the display area 12 is confirmed, the image information within its range is cropped out; this is the stitched-in vehicle-exterior image information located within the display area, i.e., the image information corresponding to the tree 13.
302. And adapting the image information in the display area range according to the display specification of the display device to obtain the target image information.
For example, if the resolution of the video stream corresponding to the target image information is 900P and the resolution specification of the display device is 720P, the resolution format of the video stream corresponding to the target image information may be adjusted to 720P matching the display specification of the display device to ensure the display effect of the video stream corresponding to the target image information.
Of course, besides resolution, the display specification may also cover aspects such as image shape and color, and can be determined according to actual circumstances.
This way of obtaining the target image information makes it fit the display requirements of the display device, ensuring the display effect and keeping the scene shown in the target image information as consistent as possible with the exterior scene at the actual viewing angle.
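The adaptation step can be sketched as a resolution rescale. The nearest-neighbour scaler below is a stand-in for whatever scaler a real display pipeline would use (the 900P-to-720P example above would be the same operation at full size); pixels are plain Python lists for illustration:

```python
def adapt_to_display(image, out_w, out_h):
    """Nearest-neighbour rescale of the cropped region to the display
    panel's native resolution."""
    in_h, in_w = len(image), len(image[0])
    return [[image[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]
```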
Referring to fig. 7, a flow of implementing the adjustment of the image capturing position according to the embodiment of the present application is shown.
As shown in fig. 7, in order to further improve the imaging effect of the target image information, before intercepting the image information in the display area range, the method may further include:
301. and acquiring the running track information of the vehicle.
The running track information of the vehicle may include at least one of a vehicle body speed parameter, a wheel speed pulse parameter, a steering wheel angle parameter, and a lateral-longitudinal acceleration parameter of the vehicle.
The next running track of the vehicle can be predicted through the running track information of the vehicle, and the intercepting position of the image information can be compensated and adjusted according to the running track information and the system delay value.
302. And obtaining a system delay value, wherein the system delay value is related to a link delay value and an algorithm delay value of the system.
The system delay value may include the link delay value and the algorithm delay value of the system. Because the vehicle moves quickly, the display of the target image information lags by the system delay value, so the displayed image cannot fully correspond to the current exterior scene.
Obtaining the system delay value can effectively confirm the specific delay time in the system.
In some embodiments, the link delay value and the algorithm delay value may be confirmed by detecting the delay time of signals in the system, or precalculated preset values may be used as the link delay value and the algorithm delay value.
303. Determining, according to the system delay value and the running track information, the deviation degree of the image formed by the target image information under the influence of the system delay value.
After the system delay value is obtained, the actual running speed and track of the vehicle can be derived from the running track information, and the image deviation degree, such as the offset distance and direction of the image, can be obtained by combining the system delay value with the running track information.
For example, if the system delay value is 100 ms and the vehicle speed is 120 km/h, a calculation based on these two parameters yields the offset distance and direction that the system delay produces by the time the image is displayed.
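The arithmetic of this example can be sketched as follows (function and variable names are illustrative, not from the patent): 120 km/h is about 33.3 m/s, so a 100 ms delay corresponds to roughly 3.3 m of forward travel.

```python
def image_offset_m(delay_s: float, speed_kmh: float) -> float:
    """Distance the outside scene shifts during the system delay."""
    speed_ms = speed_kmh / 3.6  # km/h -> m/s
    return speed_ms * delay_s

# The example from the text: a 100 ms delay at 120 km/h.
print(round(image_offset_m(0.100, 120.0), 2))  # 3.33 (metres)
```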
304. Adjusting the interception position of the image information based on the image deviation degree.
According to the possible offset distance and direction of the image, the system can adjust the interception position so that the intercepted image corresponds to the target image information after the vehicle has experienced the system delay. For example, the interception position may be shifted forward in the stitched image.
Of course, the specific adjustment mode of the interception position may be determined according to the actual situation, and the application does not limit this.
Therefore, the image interception position is adjusted according to the running track information of the vehicle and the system delay value to compensate for the image deviation caused by the system delay, which further ensures that the displayed target image information is consistent with the scene outside the vehicle and improves the imaging effect of the target image information.
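Step 304 can be sketched as a crop-window shift, converting the offset in metres into pixels (the pixels-per-metre scale and the sign convention are assumptions for illustration):

```python
def adjust_crop_x(crop_x: int, offset_m: float, px_per_m: float,
                  direction: int = 1) -> int:
    """Shift the interception position to compensate for the system delay.

    direction=1 moves the crop window toward where the scene will be
    by the time the delayed frame is actually displayed (assumed convention).
    """
    return crop_x + direction * round(offset_m * px_per_m)

# 3.33 m of travel at an assumed scale of 12 px/m shifts the crop by 40 px.
print(adjust_crop_x(400, 3.33, 12.0))  # 440
```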
Referring to fig. 8, a flow of implementing compensation adjustment of the image interception position through closed-loop feedback according to an embodiment of the present application is shown.
As shown in fig. 8, in order to further improve the imaging effect of the target image information, after the target image information is displayed by imaging through the display device, the method may further include:
501. Obtaining in-vehicle image information at the current visual angle and out-vehicle image information at the corresponding visual angle, wherein the in-vehicle image information comprises the target image displayed in the display area of the display device.
502. Splicing the in-vehicle image information and the out-vehicle image information to obtain spliced image information.
503. Judging whether the image splicing of the spliced image information at the display area is discontinuous.
When the image displayed in the display area within the in-vehicle image information does not match the position of the outside scene at the display area within the out-vehicle image information, the spliced image information can exhibit ghosting or stitching errors at that position.
When the above-described situation occurs, it can be confirmed that the image stitching of the stitched image information at the display area is discontinuous.
504. If the image splicing of the spliced image information at the display area is discontinuous, performing compensation adjustment on the interception position for intercepting the target image information in the vehicle exterior image information, so that the spliced image information achieves continuous image splicing at the display area.
For example, if the image stitching of the spliced image information is discontinuous at the display area and the image displayed in the display area of the in-vehicle image information is shifted to the right, the interception area of the target image information may be adjusted to the left so that, after the in-vehicle and out-vehicle image information are spliced, the images in the display area overlap completely, repeating until the image stitching of the spliced image information at the display area is continuous.
Of course, the specific adjustment manner may be determined according to the actual situation, and the present application does not limit this.
Therefore, after the target image information is displayed on the display device, the obtained spliced image information is used to judge whether the image splicing is discontinuous, and the display effect is adjusted through this closed-loop feedback mechanism. This further ensures that the displayed target image information is consistent with the scene outside the vehicle and improves the imaging effect of the target image information.
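A minimal sketch of the closed-loop check in steps 501-504, assuming a simple per-column seam metric and threshold (both are illustrative choices, not specified by the patent):

```python
import numpy as np

def seam_discontinuous(stitched: np.ndarray, seam_x: int,
                       thresh: float = 25.0) -> bool:
    """Compare the pixel columns on either side of the display-area seam.

    A large mean absolute difference suggests ghosting or a stitching error.
    """
    left = stitched[:, seam_x - 1].astype(float)
    right = stitched[:, seam_x].astype(float)
    return float(np.mean(np.abs(left - right))) > thresh

# Synthetic check: a uniform image has a continuous seam ...
img = np.full((4, 8), 100, dtype=np.uint8)
print(seam_discontinuous(img, 4))  # False
# ... while a hard edge at the seam is flagged as discontinuous.
img[:, 4:] = 200
print(seam_discontinuous(img, 4))  # True
```

On a discontinuous result, the interception position would be nudged against the observed misalignment and the check repeated, matching the feedback loop described above.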
Referring to fig. 9, a vehicle-mounted vision assistance system 60 according to an embodiment of the present application is shown, where the vehicle-mounted vision assistance system 60 includes a user visual angle obtaining device 61, an in-vehicle camera 62, an out-vehicle camera 63, a display device 64, a processor 65, and a memory 66.
The user visual angle acquisition device 61 is opposite to the head of the user and is used for acquiring the visual angle information of the user;
an in-vehicle camera 62 mounted near the head of the driver for acquiring in-vehicle image information at the user's view angle;
the vehicle exterior camera 63 is installed outside the vehicle and used for acquiring vehicle exterior image information of the vehicle;
a display device 64 provided at a blind area position of the vehicle for displaying target image information;
a processor 65 and a memory 66, wherein the processor 65 is electrically connected with the user visual angle acquisition device 61, the in-vehicle camera 62, the out-vehicle camera 63 and the memory 66;
the user visual angle obtaining device 61 may be disposed opposite to the head of the user, and obtains the visual angle information of the user by obtaining the head dynamic information (such as the rotation of the head, the rotation of the eyeball, etc.) of the user to determine the visual angle situation of the user, that is, where the user is observing.
Specifically, the user visual angle acquiring device 61 may be at least one of a color camera, an infrared camera, and a depth-of-field camera, so as to obtain the head/eyeball characteristics of the user through the camera, and confirm the rotation angle parameter of the head and/or the sight angle parameter of the eyeball through a recognition algorithm. It is understood that the recognition algorithm may be a head/eye recognition algorithm commonly used in the art, and the present application is not limited thereto.
In some embodiments, the user visual angle acquiring device 61 may be installed on the opposite side of the driver's seat, or may be designed to be worn on the head of the user, and a camera and a position sensor are used to detect the head/eyeball characteristics of the user.
The in-vehicle camera 62 may be disposed near the head of the driver to acquire in-vehicle image information from the user's perspective. The field of view of the in-vehicle camera 62 covers both the display device 64 at the vehicle blind-spot position and the view outside the vehicle.
In some embodiments, the in-vehicle camera 62 is positioned on the inside of the roof of the vehicle above the user and facing the a-pillar of the vehicle such that its viewing angle is similar to the user's viewing angle, and in-vehicle image information at the user's viewing angle may be obtained through some conversion algorithm. The in-vehicle camera 62 may be a wide-angle camera or a movable camera with a variable field of view, and may be physically located above the vehicle operator's seat. In addition, the in-vehicle camera 62 may also be provided on a head-mounted device. The specific placement of the in-vehicle camera 62 may be determined based on the actual situation.
The vehicle exterior camera 63 may be disposed outside the vehicle to acquire vehicle exterior image information. Specifically, vehicle exterior cameras 63 may be disposed on both sides of the vehicle exterior, for example at the rear-view mirrors. Preferably, the vehicle exterior camera 63 is located on the outer side of the A-pillar of the vehicle to ensure that the vehicle exterior image information contains the outside view at the blind-spot position blocked by the A-pillar.
The display device 64 is disposed at the vehicle blind-area position and is used for displaying the target image information; it can display the outside scene acquired by the vehicle exterior camera 63. Specifically, the display device 64 is located on the inner side of the A-pillar of the vehicle, so that the image corresponding to the target image information is close to its actual position.
The memory 66 has a computer program stored therein, and the processor 65 executes the following visual auxiliary imaging method by calling the computer program stored in the memory 66:
acquiring visual angle information of a user; when the visual angle information meets the preset auxiliary conditions, acquiring the image information outside the vehicle of the direction corresponding to the visual angle information and the image information inside the vehicle under the user visual angle; splicing the image information outside the vehicle and the image information inside the vehicle to obtain spliced image information; intercepting target image information from the stitched image information, the target image information being associated with a display area of the display device 64 in the stitched image information; the target image information is displayed by imaging through the display device 64.
The user's angle-of-view information may be acquired by the user angle-of-view acquisition device 61 in the vehicle, and the user angle-of-view acquisition device 61 may detect the movement of the user's head or eyeball to determine the current angle of view of the user.
The angle of view information may include a rotation angle parameter of the head and/or a line of sight angle parameter of the eyeball.
The auxiliary condition may be regarded as the condition under which the user needs to observe the vehicle blind-spot position, that is, the A-pillar position. The vehicle exterior image information and the vehicle interior image information may each be image information corresponding to one frame of image, or a video stream formed by multiple frames of images, which is not limited here.
In some embodiments, the auxiliary condition may be to determine whether the rotation angle parameter of the head is within a preset parameter interval; and/or judging whether the eyeball sight angle parameter is positioned in a preset parameter interval.
For example, if the angle between the display device 64 and the driver's seat is 30 degrees, the preset parameter interval may be set to 20-40 degrees. If the included angle of the user's head relative to the straight-ahead direction is detected as 32 degrees, the head rotation angle parameter falls within the preset parameter interval, and it can be determined that the user's visual angle information satisfies the preset auxiliary condition.
And when the visual angle information meets the preset auxiliary conditions, acquiring the image information outside the vehicle at the position corresponding to the visual angle information and the image information inside the vehicle at the user visual angle.
It is understood that the preset parameter interval is only used for example, and the preset parameter interval may be replaced by one or more preset threshold values, and the specific value of the preset parameter interval may be determined according to the specific included angle value between the driver's seat and the display device 64 and the actual situation.
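The threshold test described above can be sketched as follows, using the 20-40 degree interval from the example (the patent also allows requiring the condition to persist for a preset time, which is omitted here):

```python
def meets_auxiliary_condition(head_angle_deg: float,
                              interval=(20.0, 40.0)) -> bool:
    """True when the head rotation angle lies inside the preset interval."""
    low, high = interval
    return low <= head_angle_deg <= high

print(meets_auxiliary_condition(32.0))  # True: the 32-degree example above
print(meets_auxiliary_condition(10.0))  # False: looking roughly straight ahead
```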
The vehicle exterior image information should correspond to the direction of the visual angle information. For example, if the user's head turns toward the right A-pillar, the acquired vehicle exterior image information should be the exterior view beyond the right A-pillar; if the user's head turns to the left, it should likewise be the exterior view beyond the left A-pillar. It should be noted that the acquisition angle of the vehicle exterior image information can be adjusted correspondingly according to different visual angle information.
The in-vehicle image information may be associated with the obtained perspective information of the user, and for example, if the head of the user turns to the right a-pillar direction, the obtained in-vehicle image information may include an image of the right a-pillar direction of the vehicle.
It can be understood that, in addition to being triggered when the user's visual angle information satisfies the preset condition, the image information acquisition may also run continuously in real time, which may be determined according to the actual situation.
Splicing the vehicle exterior image information and the vehicle interior image information means combining them into spliced image information with the same visual angle and scene size.
In some embodiments, scene feature points of the image information outside the vehicle and the image information inside the vehicle can be extracted, and the feature points corresponding to the image information inside the vehicle and the image information inside the vehicle can be matched, so that the corresponding feature points in the spliced image information are in the same image position.
Referring to fig. 3, an application scenario of the image stitching process according to the embodiment of the present application is shown.
As shown in fig. 3, for example, if the image information 1A outside the vehicle includes image information corresponding to the trees 11 outside the vehicle, and the image information 1B inside the vehicle also includes image information corresponding to the trees 11 outside the vehicle, the two pieces of image information may be spliced and superimposed on each other by a conventional splicing means to form uniform spliced image information 1C.
Of course, other existing splicing manners may be adopted, which is not limited in this application.
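The patent leaves the concrete splicing algorithm open ("conventional splicing means"; real systems often match feature points with, e.g., ORB descriptors and a RANSAC homography). As a deliberately simplified, self-contained sketch, the horizontal offset between the two views can be estimated by cross-correlating column intensities, assuming pure translation:

```python
import numpy as np

def estimate_shift(outside: np.ndarray, inside: np.ndarray) -> int:
    """Horizontal shift (in pixels) of the inside view relative to the outside view."""
    col_a = outside.mean(axis=0) - outside.mean()
    col_b = inside.mean(axis=0) - inside.mean()
    corr = np.correlate(col_a, col_b, mode="full")
    # np.correlate(a, v, "full")[k] peaks where a[n + lag] best matches v[n].
    return int(np.argmax(corr)) - (len(col_b) - 1)

# Synthetic check: the inside camera sees the same scene, cropped 5 columns in.
rng = np.random.default_rng(42)
outside_view = rng.random((20, 60))
inside_view = outside_view[:, 5:]
print(estimate_shift(outside_view, inside_view))  # 5
```

Once the shift is known, the two views can be superimposed on a common canvas; rotation and scale differences, handled by full feature matching, are ignored in this sketch.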
Within the in-vehicle image information there is an image area corresponding to the display device 64, and the area occupied by its display screen in the image is the display area 12.
Specifically, in fig. 3, outside-vehicle scenes can be observed in both the vehicle exterior image information 1A and the vehicle interior image information 1B. After the image information corresponding to these scenes is spliced, an outside scene blocked by the A-pillar lies at the position of the display area 12, so the user cannot observe the blocked scene directly, such as the tree 13 in fig. 3.
At this time, in the splicing process, the image information corresponding to the tree 13 overlaps the display area 12, and it is exactly the image information that needs to be shown to the user. Therefore, the target image information can be obtained by intercepting, within the display area 12 of the spliced image information, the image information corresponding to the tree 13.
When the user looks toward the right A-pillar while driving, the system may acquire target image information including the tree scene and display it on the display device 64. By observing the displayed image, the user can know the exterior view blocked by the right A-pillar at the current viewing angle.
Of course, the above target image information may be understood as target image information of a certain frame image, and may also be understood as a continuously acquired video stream. Further, the target image information may be triggered and obtained according to the viewing angle information, and may be continuously obtained through function setting and displayed for a long time through the display device 64, which is not limited herein.
Because the visual angle and scene size of the exterior and interior views coincide in the spliced image information, the obtained target image information is similar in visual angle and size to the real scene outside the vehicle. This creates an effect similar to a transparent A-pillar and solves the prior-art problem that the scene displayed by the display device 64 cannot correspond to the real scene.
Referring to fig. 10, another structure of the vehicle vision assistance system according to the embodiment of the present application is shown.
In some embodiments, as shown in fig. 10, in order to achieve a more user-friendly display of the target image information, the on-board visual assistance system 60 may further include an image adjusting device 67, the image adjusting device 67 is used for adjusting the display effect of the target image information, and the image adjusting device 67 is electrically connected to the processor 65.
Specifically, the image adjusting device 67 may include adjustment buttons disposed near the driving position of the vehicle, for example a four-way button. By operating the buttons, the driver can move the image corresponding to the target image information on the screen according to his or her viewing angle; once the image is placed within a reasonable range, it is re-stitched by the stitching algorithm.
Alternatively, if the image adjusting device 67 is integrated in the display device, and the display device is a touch screen, the driver can stretch or shrink the image corresponding to the target image information by moving on the screen to ensure that the display effect meets the requirements of the driver.
Of course, the image adjusting device 67 may also adopt other implementation forms besides the above form, such as other dedicated or shared knobs, buttons, voice control, etc., and the specific implementation form is not limited.
As can be seen from the above, the vehicle-mounted vision auxiliary system 60 in the embodiment of the present application obtains the vehicle exterior image information and the vehicle interior image information in the direction corresponding to the user's visual angle, splices them, and intercepts the corresponding target image information, so that the image content displayed by the display device 64 matches the external scene, thereby improving the imaging effect of the vehicle-mounted vision auxiliary system 60. Because the image content is closer to the actual scene, the driver can know the actual conditions outside the visual blind area, reducing driving risk.
In some embodiments, the processor 65 may be further configured to:
judging whether the rotation angle parameter of the head is in a preset parameter interval or not; and/or
And judging whether the sight angle parameter of the eyeball is positioned in a preset parameter interval.
In some embodiments, the processor 65 may be further configured to:
and judging whether the rotation angle parameter of the head and/or the sight angle parameter of the eyeball are/is positioned between preset zone parameters and lasting for a preset time.
In some embodiments, the processor 65 may be further configured to:
determining a corresponding display area of the display device 64 in the spliced image information, and intercepting the image information within the display area;
and adapting the image information in the display area range according to the display specification of the display device 64 to obtain the target image information.
In some embodiments, the processor 65 may be further configured to:
acquiring running track information and a system delay value of a vehicle;
and performing compensation adjustment on the interception position of the image information according to the running track information and the system delay value.
In some embodiments, the processor 65 may be further configured to:
obtaining a system delay value, wherein the system delay value is related to a link delay value and an algorithm delay value of a system;
determining the image deviation degree formed by the target image information under the influence of the system delay value according to the system delay value and the running track information;
and adjusting the interception position of the image information based on the image deviation degree.
In some embodiments, the processor 65 may be further configured to:
acquiring in-vehicle image information at a current viewing angle and out-vehicle image information at a corresponding viewing angle, the in-vehicle image information including a target image displayed in a display area of the display device 64;
splicing the image information inside the vehicle and the image information outside the vehicle to obtain spliced image information;
judging whether image splicing of the spliced image information at the display area is discontinuous or not;
and if so, performing compensation adjustment on the interception position for intercepting the target image information in the vehicle exterior image information so as to enable the spliced image information to realize continuous image splicing at the display area.
In the embodiment of the present application, the vehicle-mounted vision-assisted system and the vision-assisted imaging method in the above embodiments belong to the same concept. Any method step provided in the embodiment of the vision-assisted imaging method may be run on the vehicle-mounted vision-assisted system; its specific implementation process is described in detail in the embodiment of the vision-assisted imaging method, and any combination of the steps may be adopted to form an optional embodiment of the application, which is not described again here.
In some embodiments, there is also provided a storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform any of the above-described visually assisted imaging methods, for example:
acquiring visual angle information of a user; when the visual angle information meets the preset auxiliary conditions, acquiring the image information outside the vehicle of the direction corresponding to the visual angle information and the image information inside the vehicle under the user visual angle; splicing the image information outside the vehicle and the image information inside the vehicle to obtain spliced image information; intercepting target image information from the stitched image information, the target image information being associated with a display region of the display device in the stitched image information; and imaging and displaying the target image information through a display device.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The embodiments of the present application have been described in detail with reference to the drawings, but the present application is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present application within the knowledge of those skilled in the art.

Claims (12)

1. A vision-aided imaging method, applied to a vision-aided system comprising a display device arranged at a vehicle blind-area position, characterized in that the system further comprises a user visual angle acquisition device, an in-vehicle camera and an out-of-vehicle camera, and the method comprises the following steps:
acquiring visual angle information of a user;
when the visual angle information meets a preset auxiliary condition, acquiring the image information outside the vehicle of the direction corresponding to the visual angle information and the image information inside the vehicle under the user visual angle;
splicing the image information outside the vehicle and the image information inside the vehicle to obtain spliced image information;
intercepting target image information from the stitched image information, wherein the target image information is associated with a display area of the display device in the stitched image information; the intercepting of the target image information from the stitched image information includes: determining a corresponding display area of the display device in the spliced image information, and intercepting the image information in the display area range; adapting the image information in the display area range according to the display specification of the display device to obtain the target image information; before the intercepting the image information in the display area range, the method further comprises the following steps: acquiring running track information and a system delay value of a vehicle; performing compensation adjustment on the interception position of the image information according to the running track information and the system delay value; the compensation adjustment of the interception position of the image information according to the running track information and the system delay value comprises the following steps: obtaining a system delay value, wherein the system delay value is related to a link delay value and an algorithm delay value of a system; determining the image deviation degree formed by the target image information under the influence of the system delay value according to the system delay value and the running track information; adjusting the interception position of the image information based on the image deviation degree;
and imaging and displaying the target image information through the display device.
2. The visually assisted imaging method of claim 1, wherein the perspective information comprises at least:
a rotation angle parameter of the head; and/or
The eyeball sight angle parameter.
3. The visual auxiliary imaging method according to claim 2, wherein the acquiring of the image information outside the vehicle and the image information inside the vehicle at the user viewing angle when the viewing angle information satisfies a preset auxiliary condition comprises:
judging whether the rotation angle parameter of the head is in a preset parameter interval or not; and/or
And judging whether the sight angle parameter of the eyeball is positioned in a preset parameter interval.
4. The visual auxiliary imaging method according to claim 3, wherein when the viewing angle information satisfies a preset auxiliary condition, acquiring the image information outside the vehicle and the image information inside the vehicle at the user viewing angle, further comprising:
and judging whether the rotation angle parameter of the head and/or the sight angle parameter of the eyeball are/is positioned in a preset parameter interval or not, and lasting for a preset time.
5. The vision-aided imaging method of claim 1 wherein the vehicle trajectory information comprises:
at least one of a body speed parameter, a wheel speed pulse parameter, a steering wheel angle parameter, and a lateral-longitudinal acceleration parameter of the vehicle.
6. The visual aid imaging method according to claim 1, further comprising, after said image-wise displaying said target image information by said display device:
acquiring in-vehicle image information at a current viewing angle and out-vehicle image information at a corresponding viewing angle, wherein the in-vehicle image information comprises a target image displayed in a display area of the display device;
splicing the image information inside the vehicle and the image information outside the vehicle to obtain spliced image information;
judging whether image splicing of the spliced image information at the display area is discontinuous or not;
and if so, performing compensation adjustment on the interception position for intercepting the target image information in the vehicle exterior image information so as to enable the spliced image information to realize continuous image splicing at the display area.
7. An on-board vision assistance system, comprising:
the user visual angle acquisition device is opposite to the head of the user and is used for acquiring visual angle information of the user;
the in-vehicle camera is arranged near the head of the user and used for acquiring in-vehicle image information under the visual angle of the user;
the vehicle exterior camera is arranged outside the vehicle and used for acquiring vehicle exterior image information of the vehicle;
the display device is arranged at the position of the blind area of the vehicle and is used for displaying the target image information;
the processor is electrically connected with the user visual angle acquisition device, the in-vehicle camera, the out-vehicle camera and the memory;
the memory has stored therein a computer program, the processor executing the visual aid imaging method of any one of claims 1-6 by calling the computer program stored in the memory.
8. The vehicle vision assistance system of claim 7, wherein the user perspective acquisition means comprises:
at least one of a color camera, an infrared camera and a depth of field camera.
9. The vehicle vision assistance system of claim 7 wherein said in-vehicle camera is located on the inside of the roof of the vehicle above said user and toward the a-pillar of said vehicle.
10. The vehicle vision assistance system of claim 7, wherein:
the display device is positioned on the inner side of an A column of the vehicle;
the vehicle inner camera is positioned on the inner side of the vehicle roof above the vehicle driver seat;
the camera outside the car is located the A post outside of vehicle.
11. The vehicle vision assistance system of claim 7 further comprising an image adjustment device, said image adjustment device being electrically connected to said processor;
the image adjusting device is used for adjusting the display effect of the target image information.
12. A storage medium having stored thereon a computer program for causing a computer to perform the visual aid imaging method of any one of claims 1-6 when the computer program runs on the computer.
CN201911080211.XA 2019-11-07 2019-11-07 Vision-assisted imaging method, vehicle-mounted vision-assisted system and storage device Expired - Fee Related CN111114434B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911080211.XA CN111114434B (en) 2019-11-07 2019-11-07 Vision-assisted imaging method, vehicle-mounted vision-assisted system and storage device

Publications (2)

Publication Number Publication Date
CN111114434A CN111114434A (en) 2020-05-08
CN111114434B true CN111114434B (en) 2021-08-27

Family

ID=70495627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911080211.XA Expired - Fee Related CN111114434B (en) 2019-11-07 2019-11-07 Vision-assisted imaging method, vehicle-mounted vision-assisted system and storage device

Country Status (1)

Country Link
CN (1) CN111114434B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111277796A (en) * 2020-01-21 2020-06-12 深圳市德赛微电子技术有限公司 Image processing method, vehicle-mounted vision auxiliary system and storage device
CN112115797A (en) * 2020-08-21 2020-12-22 东风电驱动***有限公司 Man-machine interaction system and method for intelligent driving
CN112550469A (en) * 2020-12-25 2021-03-26 浙江零跑科技有限公司 Automobile perspective A column module and display method
CN112849158B (en) * 2021-01-22 2022-11-04 精电(河源)显示技术有限公司 Image display method, vehicle-mounted display system and automobile
CN113276774B (en) * 2021-07-21 2021-10-26 新石器慧通(北京)科技有限公司 Method, device and equipment for processing video picture in unmanned vehicle remote driving process

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020171738A1 (en) * 2001-04-17 2002-11-21 Jingfeng Guan Automobile-based video system, a substitution of traditional mirrors
TW200619067A (en) * 2004-12-06 2006-06-16 Arbl Co Ltd Device for transparency equivalent A-pillar equivalent transparency of vehicle
CN103358996B (en) * 2013-08-13 2015-04-29 吉林大学 Automobile A pillar perspective vehicle-mounted display device
CN104228684B (en) * 2014-09-30 2017-02-15 吉林大学 Method for eliminating dead zones of automobile A columns
CN104890576A (en) * 2015-05-22 2015-09-09 西安电子科技大学 Device capable of eliminating dead zones of automobile intelligently and omni-directionally
CN107458310A (en) * 2017-09-18 2017-12-12 诏安县鹏达机械设计部 Comprehensive driving-environment monitoring system
CN107738614A (en) * 2017-10-20 2018-02-27 付宇 Multi-picture combined video display method for commercial-vehicle blind areas
CN109934076A (en) * 2017-12-19 2019-06-25 广州汽车集团股份有限公司 Method, device, system and terminal device for generating scene images of the vision blind zone

Also Published As

Publication number Publication date
CN111114434A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
CN111114434B (en) Vision-assisted imaging method, vehicle-mounted vision-assisted system and storage device
CN107444263B (en) Display device for vehicle
CN102371944B (en) Driver vision support system and vehicle including the system
EP2045132B1 (en) Driving support device, driving support method, and computer program
EP3466763B1 (en) Vehicle monitor system
WO2012164729A1 (en) Vehicular field of view assistance device
CN111277796A (en) Image processing method, vehicle-mounted vision auxiliary system and storage device
WO2022052789A1 (en) Full-field dynamic display method for in-vehicle rearview mirror, storage medium, and electronic device
JP2004203126A (en) Outside monitoring device for vehicle
CN107298050A (en) Image display device
JP4855884B2 (en) Vehicle periphery monitoring device
JP2011234095A (en) Visual recognition support device for vehicle
CN108482252A System, method and vehicle for displaying multi-view images of obstacles in the A-pillar blind zone
CN112298040A Driving-assistance method based on a transparent A-pillar
CN116674468A (en) Image display method and related device, vehicle, storage medium, and program
JP2007253819A (en) Parking backup device
JP2017111739A (en) Driving support apparatus and driving support method
JP2009248812A (en) Driving assistance device
JP6115278B2 (en) Vehicle display device
CN214775848U (en) A-column display screen-based obstacle detection device and automobile
JP6365600B2 (en) Vehicle display device
JPH10246640A (en) Information display device for vehicle
KR101663290B1 (en) Parking assist system with the picture and distance correction
KR102010407B1 (en) Smart Rear-view System
JPWO2019177036A1 (en) Vehicle video system

Legal Events

Date Code Title Description
PB01 Publication
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200508

Assignee: Shenzhen Dechi micro vision technology Co.,Ltd.

Assignor: SHENZHEN DESAY MICROELECTRONIC TECHNOLOGY Co.,Ltd.

Contract record no.: X2020980002081

Denomination of invention: Vision-assisted imaging method, vehicle-mounted vision-assisted system and storage device

License type: Exclusive License

Record date: 20200509

SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210827