CN111464797A - Driving assistance apparatus, method, vehicle, and storage medium - Google Patents

Driving assistance apparatus, method, vehicle, and storage medium

Info

Publication number
CN111464797A
Authority
CN
China
Prior art keywords
vehicle
image
resolution
video
detection information
Prior art date
Legal status
Pending
Application number
CN202010495894.1A
Other languages
Chinese (zh)
Inventor
黎建平
张宣彪
李激光
王俊越
刘卫龙
孙牵宇
许亮
Current Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN202010495894.1A (CN111464797A)
Publication of CN111464797A
Priority to JP2022519739A (JP2022551243A)
Priority to PCT/CN2021/098021 (WO2021244591A1)
Priority to KR1020227010675A (KR20220054659A)
Legal status: Pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a single remote source
    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 - Registering or indicating the working of vehicles
    • G07C5/08 - Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0841 - Registering performance data
    • G07C5/085 - Registering performance data using electronic data carriers
    • G07C5/0866 - Registering performance data using electronic data carriers, the electronic data carrier being a digital video recorder in combination with a video camera
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/01 - Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0117 - Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level, involving conversion of the spatial resolution of the incoming video signal

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The driving assistance apparatus comprises a camera whose lens, when the apparatus is installed on a vehicle, faces the front of the vehicle and is used to capture images of the environment in which the vehicle is located; a processor, configured to perform first resolution conversion processing and second resolution conversion processing on the image to obtain a first resolution image and a second resolution image respectively, where the first resolution is different from the second resolution; a driving recording module, configured to store the first resolution image; and a driving assistance module, configured to generate intelligent driving control information for the vehicle according to the second resolution image.

Description

Driving assistance apparatus, method, vehicle, and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a driving assistance device, a driving assistance method, a vehicle, and a storage medium.
Background
With the development of automotive electronics, more and more vehicles are equipped with an ADAS (Advanced Driver Assistance System) and a DVR (Digital Video Recorder). At present, the ADAS and the DVR in a vehicle are mostly two independent modules, so the driving assistance function and the driving recording function cannot be substantially combined to achieve multifunctional modularization of the vehicle.
Disclosure of Invention
An object of one or more embodiments of the present disclosure is to provide a driving assistance apparatus, a driving assistance method, a vehicle, and a storage medium.
According to an aspect of the present disclosure, there is provided a driving assistance apparatus, the apparatus including: a camera, a lens of which is arranged to face the front of the vehicle when the driving assistance apparatus is installed on a vehicle, the camera being used for capturing images of the environment in which the vehicle is located; a processor, configured to perform first resolution conversion processing and second resolution conversion processing on the image respectively to obtain a first resolution image and a second resolution image, where the first resolution is different from the second resolution; a driving recording module, configured to store the first resolution image; and a driving assistance module, configured to generate intelligent driving control information for the vehicle according to the second resolution image.
In combination with any one of the embodiments provided by the present disclosure, the driving assistance apparatus further includes a display module; the processor is further configured to perform a third resolution conversion process on the image to obtain a third resolution image, and output the third resolution image for display in the display module, where the third resolution is different from at least one of the first resolution and the second resolution.
In combination with any embodiment provided by the present disclosure, the apparatus further comprises a gravity sensor; the processor is further configured to acquire the detection information of the gravity sensor when the vehicle is in a parking mode; and, in response to the detection information meeting a first set condition, trigger the camera to capture an image of the environment and/or record a video of the environment, and store the image and/or the video and/or send the image and/or the video to a set target address.
In combination with any embodiment provided by the present disclosure, the apparatus further comprises a gravity sensor; the processor is further configured to acquire the detection information of the gravity sensor when the vehicle is in a running mode; in response to the detection information meeting a second set condition, trigger the camera to capture an image of the environment, and store the image and/or send the image to a set target address; and/or, in response to the detection information meeting the second set condition, determine the video for the time period corresponding to the detection information and send the video to a set target address.
In combination with any embodiment provided herein, the apparatus further comprises a yaw sensor; the processor is further configured to acquire the detection information of the yaw sensor when the vehicle is in a running mode; in response to the detection information meeting a third set condition, trigger the camera to capture an image of the environment, and store the image and/or send the image to a set target address; and/or, in response to the detection information meeting the third set condition, determine the video for the time period corresponding to the detection information and send the video to a set target address.
In combination with any embodiment provided by the present disclosure, the processor is further configured to obtain vehicle operation information; the driving assistance module is specifically configured to perform at least one of the following according to the second resolution image and the vehicle operation information: outputting driving warning information of the vehicle, outputting control information for vehicle-mounted devices provided on the vehicle, outputting control information for switching the driving mode of the vehicle, and outputting automatic driving control information of the vehicle.
In combination with any embodiment provided by the present disclosure, the processor is further configured to trigger the camera, in response to receiving an image acquisition trigger signal sent by at least one of a central control screen of the vehicle, a triggering device of the vehicle, and an external device communicatively connected to the vehicle, to perform at least one of the following: capturing an image of the environment, and storing the image and/or sending the image to a set target address; recording and storing a video of the environment; and determining the video for the time period corresponding to the image acquisition trigger signal and sending the video to a set target address.
According to an aspect of the present disclosure, a driving assistance method is provided, the method including: capturing images of the environment in which the vehicle is located through a camera provided on the vehicle; performing first resolution conversion processing and second resolution conversion processing on the image respectively to obtain a first resolution image and a second resolution image, wherein the first resolution is different from the second resolution; storing the first resolution image; and generating intelligent driving control information for the vehicle according to the second resolution image.
In combination with any embodiment provided by the present disclosure, the method further comprises: performing third resolution conversion processing on the image to obtain a third resolution image, and outputting the third resolution image to be displayed on a display device, wherein the third resolution is different from at least one of the first resolution and the second resolution.
In combination with any embodiment provided by the present disclosure, the method further comprises: acquiring detection information of a gravity sensor when the vehicle is in a parking mode; and, in response to the detection information meeting a first set condition, triggering the camera to capture an image of the environment and/or record a video of the environment, and storing the image and/or the video and/or sending the image and/or the video to a set target address.
In combination with any embodiment provided by the present disclosure, the method further comprises: acquiring detection information of a gravity sensor when the vehicle is in a running mode; in response to the detection information meeting a second set condition, triggering the camera to capture an image of the environment, and storing the image and/or sending the image to a set target address; and/or, in response to the detection information meeting the second set condition, determining the video for the time period corresponding to the detection information and sending the video to a set target address.
In combination with any embodiment provided by the present disclosure, the method further comprises: acquiring detection information of a yaw sensor when the vehicle is in a running mode; in response to the detection information meeting a third set condition, triggering the camera to capture an image of the environment, and storing the image and/or sending the image to a set target address; and/or, in response to the detection information meeting the third set condition, determining the video for the time period corresponding to the detection information and sending the video to a set target address.
In combination with any embodiment provided by the present disclosure, the method further comprises: acquiring vehicle operation information; wherein generating the intelligent driving control information of the vehicle according to the second resolution image comprises: performing at least one of the following according to the second resolution image and the vehicle operation information: outputting driving warning information of the vehicle, outputting control information for vehicle-mounted devices provided on the vehicle, outputting control information for switching the driving mode of the vehicle, and outputting automatic driving control information of the vehicle.
In combination with any embodiment provided by the present disclosure, the method further comprises: in response to receiving an image acquisition trigger signal sent by at least one of a central control screen of the vehicle, a triggering device of the vehicle, and an external device communicatively connected to the vehicle, triggering the camera to perform at least one of the following: capturing an image of the environment, and storing the image and/or sending the image to a set target address; recording and storing a video of the environment; and determining the video for the time period corresponding to the image acquisition trigger signal and sending the video to a set target address.
According to an aspect of the present disclosure, there is provided a vehicle comprising a driving assistance device according to any one of the embodiments of the present disclosure.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements a driving assistance method according to any one of the embodiments of the present disclosure.
In the embodiments of the present disclosure, when the driving assistance device is installed on a vehicle, the camera captures images of the environment in which the vehicle is located and the images undergo different resolution conversions: on the one hand, the first resolution image obtained by the conversion is stored; on the other hand, intelligent driving control information for the vehicle is generated according to the second resolution image obtained by the conversion. The driving recording and intelligent driving functions are thus realized with the same camera, which improves hardware utilization, reduces hardware cost, and saves the space occupied by electronic modules in the whole vehicle.
Drawings
In order to more clearly illustrate the technical solutions in one or more embodiments of the present specification or in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some of the embodiments described in one or more embodiments of the present specification, and other drawings can be obtained by those skilled in the art without inventive effort.
Fig. 1 is a block diagram of a driving assistance device according to at least one embodiment of the present disclosure;
fig. 2 is a schematic view illustrating image processing of a driving assistance device according to at least one embodiment of the present disclosure;
fig. 3 is a flowchart of a driving assistance method according to at least one embodiment of the present disclosure.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in one or more embodiments of the present disclosure, the technical solutions in one or more embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in one or more embodiments of the present disclosure. It is obvious that the described embodiments are only a part of the embodiments of the present disclosure, and not all of them. All other embodiments that can be derived by one of ordinary skill in the art from one or more embodiments of the disclosure without creative effort shall fall within the scope of protection of the disclosure.
At least one embodiment of the present disclosure provides a driving assistance device that may be mounted on a vehicle, which may be any type of vehicle used for carrying passengers or cargo or for other purposes. As shown in fig. 1, the driving assistance device may include a camera 101, a processor 102, a driving recording module 103, and a driving assistance module 104.
When the driving assistance device is installed on a vehicle, the lens of the camera 101 is arranged to face the front of the vehicle and is used for capturing images of the environment in which the vehicle is located, including shooting still images of the environment, recording videos of the environment, and the like. In addition to images, the camera 101 may also record sounds in the environment.
The camera 101 may be a camera integrated in the body of the driving assistance device, i.e. integrated with the other components in one housing. In the integrated case, the driving assistance device may be arranged on the rear-view mirror of the vehicle with the camera facing the front of the vehicle, so that the camera captures images of the environment from a preferred viewing angle. The camera 101 may also be a separately arranged camera, i.e. arranged outside the housing of the driving assistance device, so as to facilitate the deployment of the driving assistance device on the vehicle. For example, in the separate case, the main body of the driving assistance device may be arranged in the instrument panel and the camera may be mounted inside the interior rear-view mirror, so that images of the environment can be captured from a preferred viewing angle while occupying little space and remaining visually unobtrusive.
The processor 102 is connected to the camera 101 to obtain an image collected by the camera, and perform first resolution conversion processing and second resolution conversion processing on the image respectively to obtain a first resolution image and a second resolution image, where the first resolution is different from the second resolution.
In the embodiments of the present disclosure, the processor may be a System on Chip (SOC) with built-in RAM and ROM that is capable of running an operating system, or another processor with similar processing capability.
For the data captured by the camera, including both video and still images, the processor can split the stream according to different requirements so as to perform conversion at different resolutions. For example, video or images stored for driving recording generally require a higher resolution, while video or images used for intelligent driving do not have a particularly high resolution requirement. Accordingly, the original video stream captured by the camera can be split: one branch undergoes first resolution conversion processing to obtain a first resolution image for driving recording, and the other branch undergoes second resolution conversion processing to obtain a second resolution image, lower than the first resolution, for intelligent driving.
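As a minimal sketch of this split, assuming an OpenCV-based pipeline (the target resolutions below reuse the examples given later in this description, and the frame source is illustrative rather than the patented implementation):
```python
# Minimal sketch: split one camera stream into two resolutions.
# The resolution values and the capture device index are illustrative only.
import cv2

RECORD_SIZE = (1920, 1080)  # first resolution, stored by the driving recording module
ADAS_SIZE = (1024, 576)     # second resolution, consumed by the driving assistance module

def split_stream(device_index=0):
    cap = cv2.VideoCapture(device_index)  # forward-facing camera
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        record_frame = cv2.resize(frame, RECORD_SIZE)  # branch 1: driving recording
        adas_frame = cv2.resize(frame, ADAS_SIZE)      # branch 2: intelligent driving
        yield record_frame, adas_frame
    cap.release()
```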
The driving recording module 103 is connected to the processor 102, and is configured to acquire a first resolution image and store the first resolution image for driving recording.
The driving assistance module 104 is connected to the processor 102, and is configured to acquire a second resolution image and generate intelligent driving control information of the vehicle according to the second resolution image.
The intelligent driving control information may include driving warning information of the vehicle, such as a warning that the vehicle is too close to the vehicle ahead or a prompt that the driver has exceeded a safe speed, to assist the driver in driving safely. It may further include control information for vehicle-mounted devices provided on the vehicle, such as information for turning the vehicle's high beam on or off, to realize driving assistance functions. It may also include automatic driving control information of the vehicle, such as information for controlling the speed and travelling direction of the vehicle, to realize automatic driving of the vehicle. It should be understood by those skilled in the art that other types of intelligent driving control information may also be included, and the disclosure is not limited to the above.
In the embodiments of the present disclosure, when the driving assistance device is installed on the vehicle, the camera captures images of the environment in which the vehicle is located and the images undergo different resolution conversions: the first resolution image obtained by the conversion is stored, and intelligent driving control information for the vehicle is generated according to the second resolution image obtained by the conversion. The driving recording and intelligent driving functions are realized with images captured by the same camera, which improves hardware utilization, reduces hardware cost, and saves the space occupied by electronic modules in the whole vehicle.
Moreover, by performing resolution conversion on the images captured by the camera, images at resolutions respectively suited to storage and to image detection are generated, so that the quality of driving recording and driving assistance is ensured while hardware cost is reduced.
In some embodiments, the driving assistance device further includes a display module, and the processor is further configured to perform a third resolution conversion process on the image to obtain a third resolution image, and output the third resolution image for display in the display module, where the third resolution is different from at least one of the first resolution and the second resolution, and the third resolution may be determined according to a display resolution of the display module.
Fig. 2 shows a schematic diagram of the processing of a raw video stream captured by the camera. As shown in fig. 2, on the one hand, the original video stream may be converted into a first resolution image by the processor, and the first resolution image is stored by the driving recording module to implement the driving recording function. The first resolution is, for example, 1920 × 1080 or 1280 × 720, and the driving recording module may be an external storage module of the processor, such as an SD (Secure Digital) memory card or a TF (TransFlash) card, so as to implement the driving recording function.
On the other hand, the original video stream can be converted into a second-resolution image through the processor, and intelligent driving control information of the vehicle is generated according to the second-resolution image through the auxiliary driving module.
Taking driving assistance as an example, the original video may be subjected to resolution conversion according to the second resolution required by the relevant driving assistance algorithm. For example, for video used to implement forward collision warning, the second resolution is usually 1024 × 576; the original video stream may therefore be converted to 1024 × 576 to obtain the second resolution image, which is input to the driving assistance module to generate the intelligent driving control information.
In addition, the image may be subjected to a third resolution conversion process to obtain a third resolution image, and the third resolution image is output to be displayed in the display module.
Taking as an example a display module that is a control screen in the vehicle or a mobile phone APP (application), the required resolution is usually low, for example 720 × 480; the third resolution may then be set to 720 × 480 and the image captured by the camera converted accordingly. The resulting third resolution image can be transmitted to the central control system of the vehicle via WIFI, or transmitted to the cloud or to a mobile phone, so that video or image information can be browsed, deleted, moved, replayed, and downloaded on the central control screen or in the mobile phone APP, and the video stream captured by the camera can be displayed in real time.
In one example, the third resolution image may also be output for display on an external display module via HDMI (High Definition Multimedia Interface) or LVDS (Low-Voltage Differential Signaling), in which case the third resolution may be set to 1280 × 720.
The above setting of the first resolution, the second resolution and the third resolution is only an example, and it should be understood by those skilled in the art that the values of the first resolution, the second resolution and the third resolution may be set according to the requirements of storage, calculation and display, and the disclosure is not limited thereto.
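As a sketch, the three conversion targets described above could be kept in a simple purpose-to-resolution table; the values below merely restate the examples in this description and are not prescriptive:
```python
# Illustrative purpose-to-resolution table; the values restate the examples above.
import cv2

RESOLUTION_PROFILES = {
    "driving_record": (1920, 1080),      # first resolution: stored for driving recording
    "intelligent_driving": (1024, 576),  # second resolution: fed to the ADAS algorithms
    "display": (720, 480),               # third resolution: center screen / phone APP preview
}

def convert_for(purpose, frame):
    """Resize a captured frame for the given purpose (hypothetical helper)."""
    return cv2.resize(frame, RESOLUTION_PROFILES[purpose])
```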
In the embodiments of the present disclosure, resolution conversion is performed on the image captured by the camera according to the resolution required by the display module, so that processing efficiency is improved while the display effect is ensured.
The driving assistance apparatus may further include a gravity sensor (G-sensor) for detecting acceleration information of the vehicle. The gravity sensor may be integrated in the body of the driving assistance device, i.e. integrated with the other components in one housing; it can also be arranged separately from the driving assistance device, i.e. outside the housing of the driving assistance device, in order to facilitate the deployment of the driving assistance device on the vehicle.
In some embodiments, the detection information of the gravity sensor is acquired when the vehicle is in a parking mode; and, in response to the detection information meeting a first set condition, the camera is triggered to capture an image of the environment and/or record a video of the environment, and the image and/or the video is stored and/or sent to a set target address.
The parking mode of the vehicle includes, but is not limited to, at least one of the following: a key-off parked state, and a state in which the vehicle gear is in the park or neutral position. When the vehicle is in the parking mode and the acceleration of the vehicle detected by the gravity sensor provided in the vehicle changes greatly within a short time, for example when the rate of change of the acceleration is greater than a first set threshold, there is a high probability of an abnormal situation: the vehicle body may have been hit, or the vehicle may be being towed, and so on. In that case the camera is triggered to capture images of the environment and/or record videos of the environment, so that the abnormal state of the vehicle can be recorded automatically and in a timely manner. The processor acquires the image and/or the video and stores it for use in subsequent processing, such as analysis or tracing of the abnormal state; or sends it to a set target address, for example to the vehicle owner's mobile phone through an intelligent vehicle-mounted terminal (T-BOX), so that the owner can be promptly alerted to the abnormal state; the image and/or video can also be sent to a third party or the cloud so that the abnormal situation can be handled in time.
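A minimal sketch of this trigger logic is given below; the jerk threshold, sensor interface, capture helpers, and upload target are assumptions for illustration and are not specified by this disclosure:
```python
# Hypothetical parking-mode monitor: a large short-term change in acceleration
# (rate of change above a threshold) triggers capture/recording and upload.
import time

JERK_THRESHOLD = 3.0  # m/s^3, illustrative "first set condition"

def monitor_parking(vehicle, g_sensor, camera, uploader, period=0.05):
    prev_accel = g_sensor.read()          # assumed to return acceleration in m/s^2
    while vehicle.in_parking_mode():      # assumed parking-mode check
        time.sleep(period)
        accel = g_sensor.read()
        jerk = abs(accel - prev_accel) / period
        prev_accel = accel
        if jerk > JERK_THRESHOLD:         # first set condition met
            image = camera.capture_image()
            clip = camera.record_video(seconds=10)
            uploader.send(image, clip, target="owner_phone_via_tbox")
```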
When the vehicle is in a running mode, in response to the detection information meeting a second set condition, the camera is triggered to capture an image of the environment, and the image is stored and/or sent to a set target address; and/or, in response to the detection information meeting the second set condition, the video for the time period corresponding to the detection information is determined and sent to a set target address.
When, during driving, the acceleration detected by the gravity sensor provided in the vehicle changes greatly within a short time, for example when the rate of change of the acceleration is greater than a second set threshold, there is a high probability of an abnormal situation: the vehicle speed may have changed abnormally, i.e. the running state of the vehicle is abnormal. In that case the camera is triggered to capture images of the environment and/or record videos of the environment, so that the abnormal running state of the vehicle can be recorded automatically and in a timely manner. The processor acquires the image and/or the video and stores it for use in subsequent processing, such as analysis or tracing of the abnormal state; or sends it to a set target address, for example to the vehicle owner's mobile phone through an intelligent vehicle-mounted terminal (T-BOX), so that the owner can be promptly alerted to the abnormal state; the image and/or video can also be sent to a third party or the cloud so that the abnormal situation can be handled in time.
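Determining the video for the time period corresponding to the detection information could, as one assumed approach, be done with a rolling frame buffer from which a clip around the event timestamp is written out; the class below is a hypothetical sketch, not the recorder used by this disclosure:
```python
# Hypothetical rolling recorder: keep recent frames and, once the post-event
# window has elapsed, write out the clip for the time period around the event.
import collections
import time
import cv2

class RollingRecorder:
    def __init__(self, fps=25, keep_seconds=30):
        self.fps = fps
        self.buffer = collections.deque(maxlen=fps * keep_seconds)  # (timestamp, frame)

    def push(self, frame):
        self.buffer.append((time.time(), frame))

    def save_clip(self, event_time, pre=5.0, post=5.0, path="event_clip.mp4"):
        frames = [f for t, f in self.buffer if event_time - pre <= t <= event_time + post]
        if not frames:
            return None
        h, w = frames[0].shape[:2]
        writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), self.fps, (w, h))
        for f in frames:
            writer.write(f)
        writer.release()
        return path  # the caller would then send this file to the set target address
```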
The driving assistance apparatus may further include a yaw sensor for detecting yaw information of the vehicle. The yaw sensor may be integrated in the body of the driving assistance device, i.e. integrated with the other components in one housing; it can also be arranged separately from the driving assistance device, i.e. outside the housing of the driving assistance device, in order to facilitate the deployment of the driving assistance device on the vehicle. For example, in the separate case, the yaw sensor may be arranged below the shift lever and the main body of the driving assistance apparatus may be arranged in the instrument panel.
When the vehicle is in a running mode, in response to the detection information meeting a third set condition, the camera is triggered to capture an image of the environment, and the image is stored and/or sent to a set target address; and/or, in response to the detection information meeting the third set condition, the video for the time period corresponding to the detection information is determined and sent to a set target address.
When, during driving, the yaw information detected by the yaw sensor provided in the vehicle indicates that the travelling direction of the vehicle has deviated greatly within a short time, for example when the change in the yaw information is greater than a third set threshold, there is a high probability of an abnormal situation: the travelling direction of the vehicle is abnormal. In that case the camera is triggered to capture images of the environment and/or record videos of the environment, so that the abnormal running state of the vehicle can be recorded automatically and in a timely manner. The processor acquires the image and/or the video and stores it for use in subsequent processing, such as analysis or tracing of the abnormal state; or sends it to a set target address, for example to the vehicle owner's mobile phone through an intelligent vehicle-mounted terminal (T-BOX), so that the owner can be promptly alerted to the abnormal state; the image and/or video can also be sent to a third party or the cloud so that the abnormal situation can be handled in time.
In some embodiments, the processor also triggers operation of the camera upon receipt of an image acquisition trigger signal. The image acquisition trigger signal may be sent through at least one of the central control screen of the vehicle, a triggering device of the vehicle, and an external device communicatively connected to the vehicle, where the triggering device of the vehicle may be a button disposed at a suitable position inside the vehicle, and the external device communicatively connected to the vehicle includes a mobile phone and the like. The triggered operation of the camera includes: capturing an image of the environment, and storing the image and/or sending the image to a set target address; recording a video of the environment, and storing the video and/or sending the video to a set target address; and determining the video for the time period corresponding to the image acquisition trigger signal and sending the video to a set target address, for example sending the image and the video to the vehicle owner's mobile phone through an intelligent vehicle-mounted terminal, or sending them to a third party or the cloud.
In the embodiments of the present disclosure, sending an image acquisition trigger signal triggers the camera to perform operations such as capturing, storing, and sending, so the images to be recorded and stored can be determined by active triggering and can be actively sent, meeting the diverse requirements of users of the driving assistance device. Compared with a driving recorder in the related art that, once started, is always recording and storing, the driving assistance device provided by the present disclosure stores the images captured by the camera more effectively and improves the utilization of the storage space.
In some embodiments, the processor is further configured to obtain vehicle operation information, and the driving assistance module outputs driving warning information of the vehicle according to the second resolution image and the vehicle operation information.
In one example, the lane lines ahead and the vehicle's motion trajectory can be monitored in real time from the second resolution image; when the deviation between the lane line and the vehicle's motion trajectory exceeds a certain threshold, the processor determines that the vehicle is departing from its lane and outputs warning information to alert the driver.
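A minimal sketch of such a lane-departure check is shown below; the pixel threshold and the lane detector and path predictor producing the inputs are assumptions, not part of this disclosure:
```python
# Hypothetical lane-departure check: compare the detected lane center with the
# predicted vehicle path at one lookahead row of the second-resolution image.
OFFSET_THRESHOLD_PX = 60  # illustrative deviation threshold, in pixels

def lane_departure_warning(left_lane_x, right_lane_x, vehicle_path_x):
    """All arguments are x-coordinates at the same lookahead row."""
    lane_center_x = 0.5 * (left_lane_x + right_lane_x)
    deviation = abs(vehicle_path_x - lane_center_x)
    if deviation > OFFSET_THRESHOLD_PX:
        return "LANE_DEPARTURE_WARNING"  # emitted as driving warning information
    return None
```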
In one example, the vehicle ahead and its distance from the ego vehicle are detected in real time from the second resolution image, and the potential time to collision between the ego vehicle and the vehicle ahead is calculated using the vehicle speed from the vehicle operation information. When the potential time to collision is smaller than a set threshold, it is determined that there is a risk of collision with the vehicle ahead, and warning information is output to alert the driver.
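A sketch of the time-to-collision calculation is given below; the threshold and the assumption that the lead object's speed is available (or taken as zero) are illustrative, and the same check applies to the pedestrian case described next:
```python
# Hypothetical forward-collision check: time to collision (TTC) from the detected
# distance to the object ahead and the ego vehicle speed.
TTC_THRESHOLD_S = 2.5  # illustrative warning threshold, in seconds

def forward_collision_warning(distance_m, ego_speed_mps, lead_speed_mps=0.0):
    closing_speed = ego_speed_mps - lead_speed_mps  # > 0 means the gap is closing
    if closing_speed <= 0:
        return None  # not closing in on the object ahead
    ttc = distance_m / closing_speed
    if ttc < TTC_THRESHOLD_S:
        return "FORWARD_COLLISION_WARNING"  # emitted as driving warning information
    return None
```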
In one example, a pedestrian ahead and the distance between the pedestrian and the ego vehicle are detected in real time from the second resolution image, and the potential time to collision between the ego vehicle and the pedestrian is calculated using the vehicle speed from the vehicle operation information. When the potential time to collision is smaller than a set threshold, it is determined that there is a risk of collision with the pedestrian, and warning information is output to alert the driver.
In one example, when the vehicle is in a parking mode, the change in distance between the vehicle ahead and the ego vehicle is detected in real time from the second resolution image; when the change in distance is greater than a set threshold, it is determined that the vehicle ahead has started moving, and warning information is output to alert the driver.
In one example, a traffic sign, such as a speed limit and a prohibition sign, is detected from the second resolution image, and the driver is reminded to drive according to the detected traffic sign.
In one example, a traffic light is detected based on the second resolution image, and the driver is reminded to comply with the detected traffic light for safe driving.
In some embodiments, the driving assistance module outputs control information of an on-board device provided on a vehicle according to the second resolution image and the vehicle operation information.
The vehicle operation information includes state information of the vehicle in the running mode and the parking mode, such as the turn signal state, seat belt state, and vehicle speed. Those skilled in the art will appreciate that the vehicle operation information may also include other information, and the present disclosure is not limited in this respect.
In one example, when the switch of a vehicle lamp, for example the high beam switch, is in Automatic (AUTO) mode, parameters of the second resolution image are evaluated and the lamp is automatically turned on or off according to the result, realizing automatic control of the vehicle lamps according to the driving environment.
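This disclosure does not specify which image parameter is evaluated; as one assumed illustration, the decision could be based on the mean luminance of the second-resolution image with a little hysteresis:
```python
# Illustrative sketch: decide high-beam on/off from mean image luminance.
# The thresholds and the choice of luminance as the judged parameter are assumptions.
import cv2
import numpy as np

DARK_THRESHOLD = 50    # mean gray level below which the scene is treated as dark
BRIGHT_THRESHOLD = 80  # hysteresis: only turn off again above this level

def high_beam_decision(frame_bgr, currently_on):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    mean_lum = float(np.mean(gray))
    if not currently_on and mean_lum < DARK_THRESHOLD:
        return True    # control information: turn the high beam on
    if currently_on and mean_lum > BRIGHT_THRESHOLD:
        return False   # control information: turn the high beam off
    return currently_on
```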
In some embodiments, the driving assistance module outputs automatic driving control information of the vehicle according to the second resolution image and the vehicle operation information, for example information for controlling the speed of the vehicle, so as to realize automatic driving of the vehicle. It may also output control information for switching the driving mode of the vehicle; for example, in the automatic driving mode, when a driving abnormality is confirmed from the second resolution image and the vehicle information, control information for switching the driving mode may be output so that the vehicle switches from the automatic driving mode to the manual driving mode.
In the embodiments of the present disclosure, the driving assistance module may detect the safe driving condition of the vehicle, predict future safe driving conditions, and so on, according to the second resolution image converted from the image captured by the camera and the vehicle operation information, thereby implementing the intelligent driving functions of the vehicle.
In some embodiments, the driving assistance device may further include at least one coprocessor coupled to the processor. Where the processor is an SOC, the coprocessor is, for example, an MCU (Micro Controller Unit) or another processor with corresponding processing capability.
The MCU can be connected to at least one device in the vehicle, for example communicating with that device over CAN (Controller Area Network), CAN FD (CAN with Flexible Data-rate), or Ethernet to acquire the information output by the device; the SOC then acquires the information output by the device via the MCU.
In one example, control information of an on-board device provided on a vehicle may be sent to the MCU by the SOC through SPI communication, so that the MCU can control the on-board device according to the control information.
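As an illustrative sketch of this kind of link on a Linux-based SOC, a control byte could be pushed to the MCU over SPI, and, for comparison, a frame could be placed directly on a CAN bus with python-can; the message ID, payload layout, and bus names are hypothetical:
```python
# Illustrative SOC-side sketch on Linux: send a high-beam control byte to the MCU
# over SPI, or place an equivalent frame on CAN. All IDs and payloads are assumed.
import spidev
import can

HIGH_BEAM_ON = 0x01

def send_control_over_spi(command=HIGH_BEAM_ON):
    spi = spidev.SpiDev()
    spi.open(0, 0)                 # bus 0, chip-select 0 (board dependent)
    spi.max_speed_hz = 1_000_000
    spi.xfer2([command])           # the MCU is assumed to decode and act on this byte
    spi.close()

def send_control_over_can(command=HIGH_BEAM_ON):
    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    msg = can.Message(arbitration_id=0x321, data=[command], is_extended_id=False)
    bus.send(msg)
    bus.shutdown()
```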
In some embodiments, the processor and the coprocessor may be powered by the same power supply, and the power supply may be managed by the coprocessor.
At least one embodiment of the present disclosure further provides a driving assistance method, which may be implemented according to any one of the driving assistance apparatuses of the embodiments of the present disclosure.
FIG. 3 is a flow chart of a driving assistance method according to at least one embodiment of the present disclosure, which may include steps 301 to 304.
In step 301, an image of an environment where the vehicle is located is captured via a camera provided on the vehicle.
In step 302, a first resolution conversion process and a second resolution conversion process are performed on the image respectively to obtain a first resolution image and a second resolution image, where the first resolution is different from the second resolution.
In step 303, the first resolution image is stored.
In step 304, intelligent driving control information of the vehicle is generated based on the second resolution image.
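Tying steps 301 to 304 together, a minimal end-to-end sketch under the same assumptions as the earlier snippets (OpenCV resizing, illustrative resolutions, hypothetical recording and assistance modules) might look as follows:
```python
# End-to-end sketch of steps 301-304; the camera object is assumed to behave
# like cv2.VideoCapture, and the two modules are hypothetical interfaces.
import cv2

def driving_assistance_loop(camera, recording_module, assistance_module):
    while True:
        ok, frame = camera.read()                       # step 301: capture image
        if not ok:
            break
        first = cv2.resize(frame, (1920, 1080))         # step 302: first resolution
        second = cv2.resize(frame, (1024, 576))         #           second resolution
        recording_module.store(first)                   # step 303: store first image
        control = assistance_module.generate(second)    # step 304: intelligent driving info
        yield control
```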
In the embodiments of the present disclosure, the camera captures images of the environment in which the vehicle is located and the images undergo different resolution conversions: on the one hand, the first resolution image obtained by the conversion is stored; on the other hand, intelligent driving control information for the vehicle is generated according to the second resolution image obtained by the conversion. The driving recording and intelligent driving functions are realized with the same camera, which improves hardware utilization, reduces hardware cost, and saves the space occupied by electronic modules in the whole vehicle.
In some embodiments, the method further comprises: performing third resolution conversion processing on the image to obtain a third resolution image, and outputting the third resolution image to be displayed on a display device, wherein the third resolution is different from at least one of the first resolution and the second resolution.
In some embodiments, the method further comprises: acquiring detection information of a gravity sensor when the vehicle is in a parking mode; and, in response to the detection information meeting a first set condition, triggering the camera to capture an image of the environment and/or record a video of the environment, and storing the image and/or the video and/or sending the image and/or the video to a set target address.
In some embodiments, the method further comprises: acquiring detection information of a gravity sensor when the vehicle is in a running mode; in response to the detection information meeting a second set condition, triggering the camera to capture an image of the environment, and storing the image and/or sending the image to a set target address; and/or, in response to the detection information meeting the second set condition, determining the video for the time period corresponding to the detection information and sending the video to a set target address.
In some embodiments, the method further comprises: acquiring detection information of a yaw sensor when the vehicle is in a running mode; in response to the detection information meeting a third set condition, triggering the camera to capture an image of the environment, and storing the image and/or sending the image to a set target address; and/or, in response to the detection information meeting the third set condition, determining the video for the time period corresponding to the detection information and sending the video to a set target address.
In some embodiments, the method further comprises: acquiring vehicle operation information; wherein generating the intelligent driving control information of the vehicle according to the second resolution image comprises: performing at least one of the following according to the second resolution image and the vehicle operation information: outputting driving warning information of the vehicle, outputting control information for vehicle-mounted devices provided on the vehicle, outputting control information for switching the driving mode of the vehicle, and outputting automatic driving control information of the vehicle.
In some embodiments, the method further comprises: in response to receiving an image acquisition trigger signal sent by at least one of a central control screen of the vehicle, a triggering device of the vehicle, and an external device communicatively connected to the vehicle, triggering the camera to perform at least one of the following: capturing an image of the environment, and storing the image and/or sending the image to a set target address; recording and storing a video of the environment; and determining the video for the time period corresponding to the image acquisition trigger signal and sending the video to a set target address.
At least one embodiment of the present disclosure also provides a vehicle including the driving assistance apparatus according to any one of the embodiments of the present disclosure.
In some embodiments, a rear-view mirror is further disposed in the cabin of the vehicle, and the driving assistance device may be disposed on the rear-view mirror with the camera facing the front of the vehicle.
By arranging the driving assistance device on the rear-view mirror of the vehicle with the camera facing the front of the vehicle, the camera can capture images from a preferred viewing angle while saving space inside the vehicle.
In some embodiments, the vehicle is provided with a central control screen, and the central control screen is in communication connection with the camera or the processor and is used for triggering the camera to perform image acquisition.
In some embodiments, a triggering device in communication connection with the camera or the processor is disposed in the cabin of the vehicle, and is configured to trigger the camera to perform image acquisition.
In some embodiments, a terminal device in communication connection with the camera or the processor is disposed on the vehicle, and is configured to trigger the camera to perform image acquisition.
By triggering the camera to capture images through the central control screen of the vehicle, the driver or passengers of the vehicle can actively determine which images are recorded, improving the flexibility of using the vehicle equipment.
In some embodiments, a gravity sensor connected to the processor is disposed on the vehicle, and the gravity sensor outputs a gravity sensing detection signal to the processor, so that the processor controls the camera to perform image acquisition according to the gravity sensing detection signal.
The camera is controlled to capture images according to the gravity sensing detection signal, so that the abnormal driving state of the vehicle can be recorded automatically and in a timely manner.
In some embodiments, a yaw sensor connected to the processor is disposed on the vehicle, and the yaw sensor outputs a yaw sensing detection signal to the processor, so that the processor controls the camera to perform image acquisition according to the yaw sensing detection signal.
The camera is controlled to capture images according to the yaw sensing detection signal, so that the abnormal driving state of the vehicle can be recorded automatically and in a timely manner.
As will be appreciated by one skilled in the art, one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The embodiments of the present specification further provide a computer-readable storage medium on which a computer program may be stored; when executed by a processor, the computer program implements the steps of the driving assistance method described in any embodiment of the present specification. Here, "and/or" means at least one of the two; for example, "A and/or B" covers three options: A, B, and both A and B.
The same and similar parts among the various embodiments in this specification are referred to each other, and each embodiment focuses on differences from the other embodiments. In particular, for the method embodiment, since each step is substantially similar to the function realized by each module of the driving assistance device, the description is simple, and relevant points can be referred to the partial description of the device embodiment.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the acts or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Embodiments of the subject matter and the functional operations described in this specification can be implemented in: digital electronic circuitry, tangibly embodied computer software or firmware, computer hardware including the structures disclosed in this specification and their structural equivalents, or a combination of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a tangible, non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or additionally, the program instructions may be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode and transmit information to suitable receiver apparatus for execution by the data processing apparatus. The computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Computers suitable for executing computer programs include, for example, general and/or special purpose microprocessors, or any other type of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory and/or a random access memory. The basic components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer does not necessarily have such a device. Moreover, a computer may be embedded in another device, e.g., a mobile telephone, a Personal Digital Assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device such as a Universal Serial Bus (USB) flash drive, to name a few.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., an internal hard disk or a removable disk), magneto-optical disks, and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. In other instances, features described in connection with one embodiment may be implemented as discrete components or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Further, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
The above description is only for the purpose of illustrating the preferred embodiments of the one or more embodiments of the present disclosure, and is not intended to limit the scope of the one or more embodiments of the present disclosure, and any modifications, equivalent substitutions, improvements, etc. made within the spirit and principle of the one or more embodiments of the present disclosure should be included in the scope of the one or more embodiments of the present disclosure.

Claims (16)

1. A driving assistance apparatus, characterized in that the apparatus comprises:
a camera, a lens of which is arranged to face the front of the vehicle when the driving assistance apparatus is installed on a vehicle, the camera being used for capturing images of the environment in which the vehicle is located;
a processor, configured to perform first resolution conversion processing and second resolution conversion processing on the image respectively to obtain a first resolution image and a second resolution image, where the first resolution is different from the second resolution;
a driving recording module, configured to store the first resolution image;
and a driving assistance module, configured to generate intelligent driving control information for the vehicle according to the second resolution image.
2. The apparatus of claim 1, wherein the driving assistance apparatus further comprises a display module;
the processor is further configured to perform a third resolution conversion process on the image to obtain a third resolution image, and output the third resolution image for display in the display module, where the third resolution is different from at least one of the first resolution and the second resolution.
3. The apparatus of claim 1 or 2, further comprising a gravity sensor;
the processor is further configured to acquire detection information of the gravity sensor when the vehicle is in a parking mode; and, in response to the detection information meeting a first set condition, trigger the camera to capture an image of the environment and/or record a video of the environment, and store the image and/or the video and/or send the image and/or the video to a set target address.
4. The apparatus of any one of claims 1 to 3, further comprising a gravity sensor;
the processor is further configured to acquire detection information of the gravity sensor when the vehicle is in a running mode; trigger the camera to capture an image of the environment in response to the detection information meeting a second set condition, and store the image and/or send the image to a set target address; and/or, in response to the detection information meeting the second set condition, determine a video of the time period corresponding to the detection information and send the video to a set target address.
5. The apparatus of any one of claims 1 to 4, further comprising a yaw sensor;
the processor is further configured to acquire detection information of the yaw sensor when the vehicle is in a running mode; trigger the camera to capture an image of the environment in response to the detection information meeting a third set condition, and store the image and/or send the image to a set target address; and/or, in response to the detection information meeting the third set condition, determine a video of the time period corresponding to the detection information and send the video to a set target address.
6. The apparatus of any one of claims 1 to 5, wherein the processor is further configured to obtain vehicle operation information;
the assisted driving module is specifically configured to perform, according to the second resolution image and the vehicle operation information, at least one of the following: outputting driving warning information of the vehicle, outputting control information of vehicle-mounted equipment arranged on the vehicle, outputting control information for switching a driving mode of the vehicle, and outputting automatic driving control information of the vehicle.
7. The apparatus of any one of claims 1 to 6, wherein the processor is further configured to, in response to receiving an image acquisition trigger signal sent via at least one of a central control screen of the vehicle, a triggering device of the vehicle, and an external device communicatively connected to the vehicle, trigger the camera to perform at least one of the following:
capturing an image of the environment, and storing the image and/or sending the image to a set target address;
recording and storing a video of the environment;
and determining a video of the time period corresponding to the image acquisition trigger signal and sending the video to a set target address.
8. A driving assistance method, characterized in that the method comprises:
capturing an image of the environment in which the vehicle is located by a camera arranged on the vehicle;
respectively carrying out first resolution conversion processing and second resolution conversion processing on the image to obtain a first resolution image and a second resolution image, wherein the first resolution is different from the second resolution;
storing the first resolution image;
and generating intelligent driving control information of the vehicle according to the second resolution image.
9. The method of claim 8, further comprising: performing third resolution conversion processing on the image to obtain a third resolution image, and outputting the third resolution image for display on a display device, wherein the third resolution is different from at least one of the first resolution and the second resolution.
10. The method according to claim 8 or 9, characterized in that the method further comprises:
acquiring detection information of a gravity sensor when the vehicle is in a parking mode;
and, in response to the detection information meeting a first set condition, triggering the camera to capture an image of the environment and/or record a video of the environment, and storing the image and/or the video and/or sending the image and/or the video to a set target address.
11. The method according to any one of claims 8 to 10, further comprising:
acquiring detection information of a gravity sensor when the vehicle is in a running mode;
triggering the camera to capture an image of the environment in response to the detection information meeting a second set condition, and storing the image and/or sending the image to a set target address; and/or, in response to the detection information meeting the second set condition, determining a video of the time period corresponding to the detection information and sending the video to a set target address.
12. The method according to any one of claims 8 to 11, characterized in that the method further comprises:
acquiring detection information of a yaw sensor when the vehicle is in a running mode;
triggering the camera to capture an image of the environment in response to the detection information meeting a third set condition, and storing the image and/or sending the image to a set target address; and/or, in response to the detection information meeting the third set condition, determining a video of the time period corresponding to the detection information and sending the video to a set target address.
13. The method according to any one of claims 8 to 12, further comprising:
acquiring vehicle operation information;
wherein generating the intelligent driving control information of the vehicle according to the second resolution image comprises:
performing, according to the second resolution image and the vehicle operation information, at least one of the following: outputting driving warning information of the vehicle, outputting control information of vehicle-mounted equipment arranged on the vehicle, outputting control information for switching a driving mode of the vehicle, and outputting automatic driving control information of the vehicle.
14. The method according to any one of claims 8 to 13, further comprising:
in response to receiving an image acquisition trigger signal sent via at least one of a central control screen of the vehicle, a triggering device of the vehicle, and an external device communicatively connected to the vehicle, triggering the camera to perform at least one of the following:
capturing an image of the environment, and storing the image and/or sending the image to a set target address;
recording and storing a video of the environment;
and determining a video of the time period corresponding to the image acquisition trigger signal and sending the video to a set target address.
15. A vehicle, characterized by comprising the driving assistance apparatus according to any one of claims 1 to 7.
16. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any one of claims 8 to 14.
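
For readers less familiar with claim language, the routing described in claims 1, 2, 8 and 9 (one captured frame converted into several resolutions and delivered to the driving recording module, the assisted driving module and the display) can be pictured with the minimal Python sketch below. It assumes OpenCV and NumPy are available; the chosen resolutions, the output file name and the run_adas_inference helper are illustrative stand-ins, not anything specified by the patent.

import cv2
import numpy as np

# Illustrative resolutions for the three consumers described in the claims.
DVR_RES = (1920, 1080)      # first resolution: stored by the driving recording module
ADAS_RES = (640, 384)       # second resolution: fed to the assisted driving module
DISPLAY_RES = (1280, 720)   # third resolution: shown on the display module

def run_adas_inference(frame: np.ndarray) -> dict:
    """Placeholder for the model that turns the second resolution image
    into intelligent driving control information."""
    return {"warning": None, "control": None}

def process_frame(raw_frame: np.ndarray) -> dict:
    # One captured frame, several resolution conversion processes.
    dvr_frame = cv2.resize(raw_frame, DVR_RES)
    adas_frame = cv2.resize(raw_frame, ADAS_RES)
    display_frame = cv2.resize(raw_frame, DISPLAY_RES)

    # The first resolution image goes to the driving recorder's storage.
    cv2.imwrite("dvr_latest.jpg", dvr_frame)

    # The second resolution image is used to generate driving control information.
    control_info = run_adas_inference(adas_frame)

    # The third resolution image would be handed to the display module here.
    return {"display": display_frame, "control": control_info}

if __name__ == "__main__":
    # Stand-in for a camera frame; a real device would read from the image sensor.
    fake_frame = np.zeros((2160, 3840, 3), dtype=np.uint8)
    print(process_frame(fake_frame)["control"])

Keeping a full-resolution copy for the recorder while feeding a smaller copy to the model is what lets a single forward-facing camera serve the dashcam, the assisted driving function and the display at the same time.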
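
Claims 3 to 5 and 10 to 12 gate image and video capture on gravity-sensor or yaw-sensor detection information meeting a set condition that differs between parking and running modes, and claims 7 and 14 add an externally sent trigger signal. The sketch below shows one possible shape for that trigger logic; the thresholds, clip length, file names and upload destination are assumptions made only for illustration.

import time
from dataclasses import dataclass
from enum import Enum, auto

class VehicleMode(Enum):
    PARKING = auto()
    DRIVING = auto()

@dataclass
class TriggerConfig:
    # Illustrative thresholds; the claims only require "set conditions".
    parking_g_threshold: float = 0.8    # g, first set condition
    driving_g_threshold: float = 2.5    # g, second set condition
    yaw_rate_threshold: float = 60.0    # deg/s, third set condition
    clip_seconds: int = 10              # length of the saved event clip

def should_trigger(mode: VehicleMode, g_force: float, yaw_rate: float,
                   cfg: TriggerConfig) -> bool:
    """True when the detection information meets the set condition for the mode."""
    if mode is VehicleMode.PARKING:
        return g_force >= cfg.parking_g_threshold
    return g_force >= cfg.driving_g_threshold or yaw_rate >= cfg.yaw_rate_threshold

def upload(path: str) -> None:
    """Placeholder for sending a stored file to the set target address."""
    print(f"would upload {path}")

def handle_sensor_event(mode: VehicleMode, g_force: float, yaw_rate: float,
                        cfg: TriggerConfig) -> None:
    if not should_trigger(mode, g_force, yaw_rate, cfg):
        return
    ts = int(time.time())
    upload(f"event_{ts}.jpg")                      # snapshot of the environment
    upload(f"event_{ts}_{cfg.clip_seconds}s.mp4")  # clip covering the event period

def handle_external_trigger(source: str) -> None:
    # Trigger signal from the central control screen, a triggering device,
    # or a communicatively connected external device (claims 7 and 14).
    ts = int(time.time())
    upload(f"manual_{source}_{ts}.jpg")

if __name__ == "__main__":
    cfg = TriggerConfig()
    handle_sensor_event(VehicleMode.PARKING, 1.2, 0.0, cfg)  # exceeds parking threshold
    handle_external_trigger("central_control_screen")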
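
Claims 6 and 13 combine the second resolution image with vehicle operation information to choose among warning output, vehicle-mounted equipment control, driving-mode switching and automatic driving control. One compact way such a selection could be expressed, with every input name, threshold and rule assumed purely for illustration:

from typing import Optional

def decide_output(adas_result: dict, speed_kph: float,
                  driver_hands_on: bool) -> Optional[str]:
    """Pick one of the output types listed in claims 6 and 13.
    The keys, thresholds and rules here are illustrative, not from the patent."""
    if adas_result.get("collision_risk", 0.0) > 0.7:
        return "driving_warning"                    # output driving warning information
    if adas_result.get("low_visibility", False):
        return "vehicle_mounted_equipment_control"  # e.g. switch on headlights or wipers
    if speed_kph > 60 and driver_hands_on:
        return "driving_mode_switch_control"        # offer a driving-mode switch
    if not driver_hands_on:
        return "automatic_driving_control"          # output automatic driving control info
    return None

# Example: decide_output({"collision_risk": 0.9}, 80.0, True) -> "driving_warning"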
CN202010495894.1A 2020-06-03 2020-06-03 Driving assistance apparatus, method, vehicle, and storage medium Pending CN111464797A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202010495894.1A CN111464797A (en) 2020-06-03 2020-06-03 Driving assistance apparatus, method, vehicle, and storage medium
JP2022519739A JP2022551243A (en) 2020-06-03 2021-06-02 Driving support device, method, vehicle and storage medium
PCT/CN2021/098021 WO2021244591A1 (en) 2020-06-03 2021-06-02 Driving auxiliary device and method, and vehicle and storage medium
KR1020227010675A KR20220054659A (en) 2020-06-03 2021-06-02 Drive aids, methods, vehicles and storage media

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010495894.1A CN111464797A (en) 2020-06-03 2020-06-03 Driving assistance apparatus, method, vehicle, and storage medium

Publications (1)

Publication Number Publication Date
CN111464797A true CN111464797A (en) 2020-07-28

Family

ID=71680303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010495894.1A Pending CN111464797A (en) 2020-06-03 2020-06-03 Driving assistance apparatus, method, vehicle, and storage medium

Country Status (1)

Country Link
CN (1) CN111464797A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021244591A1 (en) * 2020-06-03 2021-12-09 上海商汤临港智能科技有限公司 Driving auxiliary device and method, and vehicle and storage medium
CN112622934A (en) * 2020-12-25 2021-04-09 上海商汤临港智能科技有限公司 Reference track point and reference track generation method, driving method and vehicle
CN112622934B (en) * 2020-12-25 2022-06-24 上海商汤临港智能科技有限公司 Reference track point and reference track generation method, driving method and vehicle
CN114025129A (en) * 2021-10-25 2022-02-08 合肥疆程技术有限公司 Image processing method and system and motor vehicle

Similar Documents

Publication Publication Date Title
US20230254451A1 (en) Vehicular monitoring system with interior viewing camera and remote communication
US12010455B2 (en) Vehicular vision system with incident recording function
WO2021244591A1 (en) Driving auxiliary device and method, and vehicle and storage medium
EP3211616A2 (en) Driver assistance apparatus
CN111464797A (en) Driving assistance apparatus, method, vehicle, and storage medium
CN109564734B (en) Driving assistance device, driving assistance method, mobile body, and program
CN109715467B (en) Vehicle control device, vehicle control method, and movable body
CN108664883B (en) Method and apparatus for initiating a hook view
JP2006295676A (en) Imaging device for mobile unit
JP6626817B2 (en) Camera monitor system, image processing device, vehicle, and image processing method
CN101655374A (en) GPS device with motion sensor control and image taking device and control method thereof
KR20190046579A (en) Multiple camera control system and method for controlling output of multiple camera image
US20200304698A1 (en) Imaging control apparatus, imaging control method, computer program, and electronic device
CN107207012A (en) Driver assistance system for motor vehicle
US11052822B2 (en) Vehicle control apparatus, control method, and storage medium for storing program
CN212086348U (en) Driving auxiliary assembly and vehicle
KR102094405B1 (en) Method and apparatus for determining an accident using an image
CN115131749A (en) Image processing apparatus, image processing method, and computer-readable storage medium
CN114619963A (en) Method and device for assisting the vision of a vehicle driver
CN115649190A (en) Control method, device, medium, vehicle and chip for vehicle auxiliary braking
CN110853389B (en) Drive test monitoring system suitable for unmanned commodity circulation car
JP2019028482A (en) On-board device and driving support device
KR20180013126A (en) Black-box for vehicle
WO2023084842A1 (en) Onboard device, information processing device, sensor data transmission method, and information processing method
JP6520634B2 (en) Video switching device for vehicles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination