CN115278095A - Vehicle-mounted camera control method and device based on fusion perception - Google Patents


Info

Publication number
CN115278095A
Authority
CN
China
Prior art keywords
vehicle
road
light intensity
mounted camera
road section
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210511643.7A
Other languages
Chinese (zh)
Inventor
林洋
孙招宾
王巍
刘会凯
张刘茨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lantu Automobile Technology Co Ltd
Original Assignee
Lantu Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lantu Automobile Technology Co Ltd filed Critical Lantu Automobile Technology Co Ltd
Priority to CN202210511643.7A priority Critical patent/CN115278095A/en
Publication of CN115278095A publication Critical patent/CN115278095A/en
Pending legal-status Critical Current

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention relates to a vehicle-mounted camera control method and device based on fusion perception. In the method, the vehicle obtains, in real time from a cloud navigation server, road planning information and road surface condition information covering the stretch from its current position to its destination; from this information it determines the scenes of direct strong light or abrupt light intensity change that it will encounter while travelling, and marks them in the navigation map; when the vehicle approaches or enters a marked position, the image acquisition parameters of the vehicle-mounted camera are adjusted before images are captured. By relying on the cloud navigation server, the invention effectively extends the time available for adjusting the incident light intensity before the vehicle drives into a scene of abrupt light intensity change, thereby improving the reliability of the front camera and its adaptability to different scenes.

Description

Vehicle-mounted camera control method and device based on fusion perception
Technical Field
The invention relates to the technical field of vehicle active safety, and in particular to a vehicle-mounted camera control method and device based on fusion perception.
Background
The vehicle-mounted forward-looking camera is a key perception sensor for intelligent driving; functions such as ACC (adaptive cruise control), AEB (automatic emergency braking) and LKA (lane keeping assist) are all built on it. However, the camera can be blinded in scenes with abrupt light changes, such as direct sunlight, backlighting, or tunnel entrances and exits. A prior-art method captures images after adjusting the camera's image acquisition parameters based on a comparison of light intensity readings from cameras mounted at different positions on the vehicle body. Its limitation is that those cameras are spatially close together: taking the AVM (around-view monitoring) front camera at the front grille and the AEB front camera behind the front windshield as an example, their separation is no more than 1.5 m. At a speed of 70 km/h (about 19.4 m/s), if the AVM camera reaches a tunnel entrance at time t, the AEB camera reaches it at roughly t + 0.08 s. The adjustment time left for the AEB camera is therefore very short, so it still risks being blinded, which can even cause the AEB function to fail.
Disclosure of Invention
The invention provides a vehicle-mounted camera control method based on fusion perception, aimed at solving the problem of the vehicle-mounted AEB camera being blinded when the ambient light intensity changes abruptly.
The technical solution to this problem is as follows:
In a first aspect, the invention provides a vehicle-mounted camera control method based on fusion perception, comprising:
acquiring, in real time from a cloud navigation server, road planning information and road surface condition information from the current position of the vehicle to a destination;
determining, according to the road planning information and the road surface condition information, the scenes of direct strong light or abrupt light intensity change that the vehicle will encounter while travelling, and marking them in a navigation map;
when the vehicle approaches or enters a marked position, adjusting the image acquisition parameters of the vehicle-mounted camera to capture images.
Further, determining the scenes of direct sunlight or abrupt light intensity change that the vehicle will encounter while travelling, according to the road planning information and the road surface condition information, comprises:
identifying the uphill, downhill, level, culvert and tunnel sections in the planned path from the road planning information and the road surface condition information, and predicting, from the driving state of the vehicle, the time at which the vehicle will reach each type of section;
for the uphill, downhill and level sections, calculating the incident angle of sunlight onto the lens of the vehicle-mounted camera from the road gradient and the predicted arrival time, and determining from it whether a direct-sunlight scene will occur while the vehicle drives those sections; for culvert and tunnel sections, predicting, from the time at which the vehicle will reach the entrance or exit, whether the light intensity difference between inside and outside exceeds a threshold, and if so, determining that an abrupt light-intensity-change scene exists where the vehicle enters or leaves the section.
Further, adjusting the image acquisition parameters of the vehicle-mounted camera to capture images when the vehicle approaches or enters a marked position comprises: obtaining the ambient light intensity at the current moment, adjusting the light transmittance according to the ambient light intensity, and having the vehicle-mounted camera capture images under the adjusted transmittance.
Further, the method also records the ambient light intensity, the transmittance-adjustment result and the images collected by the vehicle-mounted camera, and uses a neural network algorithm on these historical records to learn and optimize the transmittance-adjustment strategy.
Further, the cloud navigation server is configured to perform road planning in response to road planning requests from connected vehicles, and to acquire the driving state of each connected vehicle.
Further, the vehicle running state includes position information, speed information, and vehicle body control information.
Further, the method also obtains, in real time from the cloud navigation server, the body control information of oncoming vehicles within a preset distance; if an oncoming vehicle has its high beam on and the ambient light intensity is below a specified value, the image acquisition parameters of the vehicle-mounted camera are adjusted before capturing images.
In a second aspect, the present invention provides a vehicle-mounted camera control device based on fusion perception, including:
the data acquisition module is used for acquiring road planning information and road surface condition information from the current position of the vehicle to a destination from the cloud navigation server in real time;
the scene judging and marking module is used for determining a scene of direct strong light or sudden light intensity change when the vehicle travels according to the road planning information and the road surface condition information and marking the scene in a navigation map;
and the parameter adjusting module is used for adjusting the image acquisition parameters of the vehicle-mounted camera to capture images when the vehicle approaches or enters the marked position.
In a third aspect, the present invention provides an electronic device comprising:
a memory for storing a computer software program;
and a processor for reading and executing the computer software program so as to implement the fusion-perception-based vehicle-mounted camera control method of the first aspect of the invention.
In a fourth aspect, the present invention provides a non-transitory computer readable storage medium, in which a computer software program for implementing the fusion perception-based vehicle-mounted camera control method according to the first aspect of the present invention is stored.
The beneficial effect of the invention is that, by means of the cloud navigation server, the time available for adjusting the incident light intensity before the vehicle drives into a scene of abrupt light intensity change is effectively extended, which improves the reliability of the front camera and its adaptability to different scenes.
Drawings
Fig. 1 is a schematic flow chart of a vehicle-mounted camera control method based on fusion perception provided by an embodiment of the invention;
FIG. 2 is a schematic view of a scene in which direct sunlight appears on an uphill road section;
FIG. 3 is a diagram illustrating a sudden light intensity change scenario with an example of entering a tunnel;
FIG. 4 is a schematic view of a high beam scene of an oncoming vehicle;
fig. 5 is a schematic structural diagram of a vehicle-mounted camera control device based on fusion sensing according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an embodiment of an electronic device according to an embodiment of the invention;
Fig. 7 is a schematic diagram of an embodiment of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
The principles and features of the invention are described below in conjunction with the drawings; the examples given are intended to illustrate the invention, not to limit its scope.
As shown in fig. 1, an embodiment of the present invention provides a vehicle-mounted camera control method based on fusion perception. The method comprises the following steps:
s1, road planning information and road surface condition information from the current position of a vehicle to a destination are obtained from a cloud navigation server in real time.
The road planning information here includes the planned-path information from the current position to the destination. The road surface condition information includes road features such as road gradient, narrow bridges, culverts and tunnels, as well as road visibility information caused by weather conditions such as rain, snow and fog.
And S2, determining a scene of direct strong light or abrupt light intensity change which is to be met by the vehicle during running according to the road planning information and the road surface condition information, and marking in a navigation map.
Specifically, the uphill, downhill, level, culvert and tunnel sections in the planned path are obtained from the road planning information and the road surface condition information, and the time at which the vehicle will reach each type of section is predicted from its driving state. Because the angle of direct sunlight differs at different times of day, the specific time at which the vehicle will be at a given position must be considered when deciding whether direct sunlight will occur.
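As an illustration, the per-section arrival-time prediction described above can be sketched as follows; the route representation, the function name and the constant-speed assumption are all hypothetical, since the patent does not fix a particular prediction model:

```python
# Hypothetical sketch: given the remaining route as (section_type, length_m)
# segments in driving order and the current speed, estimate when the vehicle
# reaches the start of each section. Constant speed is assumed for simplicity.
def predict_arrival_times(route_sections, speed_mps, now_s=0.0):
    """route_sections: [(section_type, length_m), ...] in driving order.
    Returns [(section_type, arrival_time_s), ...]."""
    times, t = [], now_s
    for section_type, length_m in route_sections:
        times.append((section_type, t))   # time the section is reached
        t += length_m / speed_mps         # time spent traversing it
    return times
```

A more realistic variant would use the speed profile predicted by the navigation server rather than a constant speed.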
For the uphill, downhill and level sections, the incident angle of sunlight onto the lens of the vehicle-mounted camera is calculated from the road gradient and the predicted arrival time, and from it it is determined whether a direct-sunlight scene will occur while the vehicle drives those sections. For culvert and tunnel sections, it is predicted, from the time at which the vehicle will reach the entrance or exit, whether the light intensity difference between inside and outside exceeds a threshold; if so, an abrupt light-intensity-change scene is determined to exist where the vehicle enters or leaves the section.
It should also be understood that the position of the sun depends not only on the time of day but also on the season (the date). Therefore, when the incident angle of sunlight onto the vehicle-mounted camera lens is calculated, the date and the heading angle of the vehicle are taken into account in addition to the time: the incident angle is computed from the time, the date and the heading angle together.
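A minimal sketch of this computation is given below, using the standard approximate solar declination and hour-angle formulas; the function names, the simplified solar model and the field-of-view test are assumptions for illustration, not the patent's implementation:

```python
import math

def solar_position(day_of_year, solar_hour, lat_deg):
    """Approximate solar elevation and azimuth in degrees.
    day_of_year: 1..365; solar_hour: local solar time in hours."""
    # Approximate declination (degrees), standard cosine model
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)           # negative before noon
    lat, d, h = map(math.radians, (lat_deg, decl, hour_angle))
    sin_el = (math.sin(lat) * math.sin(d)
              + math.cos(lat) * math.cos(d) * math.cos(h))
    el = math.asin(sin_el)
    # Azimuth measured clockwise from north
    cos_az = (math.sin(d) - math.sin(lat) * sin_el) / (math.cos(lat) * math.cos(el))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    if hour_angle > 0:                                 # afternoon: sun in the west
        az = 360.0 - az
    return math.degrees(el), az

def is_direct_sunlight(day_of_year, solar_hour, lat_deg,
                       heading_deg, road_pitch_deg, half_fov_deg=30.0):
    """True if the sun falls inside the camera's field of view; the camera
    looks along the vehicle heading, tilted by the road gradient (pitch)."""
    el, az = solar_position(day_of_year, solar_hour, lat_deg)
    if el <= 0:
        return False                                   # sun below horizon

    def unit(elev, azim):                              # ENU-style unit vector
        e, a = math.radians(elev), math.radians(azim)
        return (math.cos(e) * math.sin(a), math.cos(e) * math.cos(a), math.sin(e))

    sun, cam = unit(el, az), unit(road_pitch_deg, heading_deg)
    dot = sum(s * c for s, c in zip(sun, cam))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot)))) <= half_fov_deg
```

For example, on a summer morning at latitude 30° N (sun low in the east), a camera heading east up a steep grade would see direct sunlight, while the same camera heading west would not.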
And S3, when the vehicle approaches or enters the marked position, adjusting image acquisition parameters of the vehicle-mounted camera to capture an image.
Specifically, when approaching or entering a marked position, the ambient light intensity at the current moment is obtained, the light transmittance is adjusted according to it, and the vehicle-mounted camera captures images under the adjusted transmittance.
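The transmittance adjustment can be illustrated with a simple mapping from ambient light intensity to a target transmittance; all thresholds and the log-linear interpolation below are illustrative assumptions, not values from the patent:

```python
import math

def target_transmittance(ambient_lux,
                         low_lux=500.0, high_lux=80_000.0,
                         min_t=0.05, max_t=1.0):
    """Map ambient light intensity to a target transmittance for the
    electrically controlled light-transmission device in front of the lens.
    Bright scenes get low transmittance; dim scenes get full transmittance."""
    if ambient_lux <= low_lux:
        return max_t
    if ambient_lux >= high_lux:
        return min_t
    # Log-linear interpolation between the two operating points, since
    # perceived brightness is roughly logarithmic in illuminance
    frac = (math.log(ambient_lux) - math.log(low_lux)) / (
        math.log(high_lux) - math.log(low_lux))
    return max_t + frac * (min_t - max_t)
```

In practice the mapping would be calibrated against the camera's actual dynamic range rather than chosen by hand.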
Fig. 2 is a schematic view of a scene in which direct sunlight appears on an uphill section. The road in Fig. 2 consists of a level section and an uphill section. Based on the times at which the vehicle travels each section, direct sunlight cannot occur on the level section; on the uphill section, however, the changed road gradient and sunlight incident angle put the vehicle-mounted camera under direct sunlight, and the camera's light transmittance must be reduced to avoid glare.
Fig. 3 illustrates an abrupt light-intensity-change scene, taking tunnel entry as the example. In Fig. 3, the road planning information (or a road sign) indicates a tunnel section ahead of the vehicle. The light intensity inside a tunnel is generally constant, and the sunlight intensity outside can be estimated roughly from the current time and weather conditions. When the outside sunlight intensity exceeds the illumination inside the tunnel, the transmittance of the vehicle-mounted camera must be increased to counter the adverse effect that the sudden drop in light intensity would otherwise have on the camera and the driver.
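The portal decision described above reduces to comparing the inside/outside intensity difference against a threshold; a sketch follows, where the threshold value and the return convention are assumptions:

```python
def tunnel_adjustment(outside_lux, inside_lux, threshold_lux=2000.0):
    """Decide whether the light-intensity step at a tunnel portal warrants
    a transmittance change, and in which direction."""
    diff = outside_lux - inside_lux
    if abs(diff) <= threshold_lux:
        return "none"                       # step too small, keep settings
    # Entering a dark tunnel from bright daylight: open up (raise
    # transmittance); exiting into bright daylight: stop down (lower it).
    return "increase" if diff > 0 else "decrease"
```

The same test covers culverts and tunnel exits: at an exit the inside/outside roles swap, so the sign of the difference flips.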
In a preferred embodiment, after the vehicle-mounted camera captures an image, the on-board system records the ambient light intensity, the transmittance-adjustment result and the captured image, and uses a neural network algorithm on these historical records to learn and optimize the transmittance-adjustment strategy.
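The recorded history could be structured as below; a full neural-network optimizer is out of scope here, so a crude brightness-based correction stands in for it (the record fields, the target brightness and the correction rule are all assumptions):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ExposureRecord:
    """One history entry for offline optimization of the strategy."""
    ambient_lux: float
    applied_transmittance: float
    mean_pixel_brightness: float   # 0..255, measured on the captured frame

def transmittance_bias(history: List[ExposureRecord],
                       target_brightness=120.0):
    """Crude stand-in for the learned optimization: a multiplicative
    correction derived from how far past frames deviated from a target
    mean brightness (ratio > 1 means past frames were too dark)."""
    if not history:
        return 1.0
    ratios = [target_brightness / max(r.mean_pixel_brightness, 1.0)
              for r in history]
    return sum(ratios) / len(ratios)
```

The patent's neural network would replace `transmittance_bias` with a model trained on these records, conditioned on ambient light and scene type.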
As a preferred embodiment, the cloud navigation server is configured to perform road planning in response to a road planning request for accessing a vehicle, and is configured to acquire a vehicle driving state of the accessed vehicle.
The vehicle running state includes position information, speed information, and vehicle body control information.
Further, the method further comprises the steps of obtaining the vehicle body control information of the oncoming vehicle at the preset distance from the cloud navigation server in real time, and if the high beam is turned on for the oncoming vehicle and the ambient light intensity is smaller than a specified value, adjusting the image acquisition parameters of the vehicle-mounted camera to capture the image.
In this embodiment, it is assumed that every vehicle on the road is connected to the cloud navigation server, i.e. each vehicle is navigated by the server and uploads its driving-state information in real time.
Besides the scenes of direct or abruptly changing strong light caused by natural conditions, a vehicle on the road may also encounter artificial strong light. For example, at night an oncoming vehicle with its high beam on can cause glare in the vehicle-mounted camera. As shown in fig. 4, the current vehicle queries the cloud navigation server in real time for oncoming vehicles within a preset distance ahead; if one exists, it obtains that vehicle's body control information from the server to determine whether its high beam is on. If so, the current vehicle warns the driver to drive cautiously and at the same time reduces the camera's light transmittance to avoid glare from the direct strong light.
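The oncoming-high-beam check can be sketched as follows; the tuple shape of the server's report, the distance limit and the night-light threshold are assumptions for illustration:

```python
def should_dim_for_oncoming(oncoming_vehicles, ambient_lux,
                            max_distance_m=200.0, night_lux=50.0):
    """Return True when the camera transmittance should be reduced because
    an oncoming vehicle within range has its high beam on at night.
    oncoming_vehicles: iterable of (distance_m, high_beam_on) tuples as
    reported by the cloud navigation server (this shape is an assumption)."""
    if ambient_lux >= night_lux:
        return False                     # only relevant in low ambient light
    return any(d <= max_distance_m and beam
               for d, beam in oncoming_vehicles)
```

In a deployment this predicate would gate both the driver warning and the transmittance reduction described above.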
Fig. 5 is a schematic structural diagram of a vehicle-mounted camera control device based on fusion sensing according to an embodiment of the present invention. As shown in fig. 5, the apparatus includes:
the data acquisition module is used for acquiring road planning information and road surface condition information from the current position of the vehicle to a destination from the cloud navigation server in real time;
the scene judging and marking module is used for determining a scene of direct strong light or sudden light intensity change when the vehicle travels according to the road planning information and the road surface condition information and marking the scene in a navigation map;
and the parameter adjusting module is used for adjusting the image acquisition parameters of the vehicle-mounted camera to capture images when the vehicle approaches or enters the marked position.
Specifically, in this embodiment the data acquisition module can be implemented by the vehicle-mounted communication device; the scene judging and marking module by the vehicle-mounted central domain controller together with the on-board GPS/RTK, IMU, ESC system and light sensor; and the parameter adjusting module includes an electrically controlled light-transmission device.
The vehicle connects to the cloud navigation server through the vehicle-mounted communication device and uploads its current driving-state information, such as speed and positioning data (output of the GPS, IMU and ESC systems). After the driver enters a navigation destination, the cloud navigation server plans the path and extracts road surface condition information, and the vehicle downloads the planning result and the condition information. When a scene of direct strong light or abrupt light intensity change is encountered, the central domain controller drives the electrically controlled light-transmission device according to the configured rules and adjusts the transmittance, thereby avoiding camera glare. If the vehicle-mounted camera additionally supports a normal working mode, a low-light working mode and a strong-light working mode, the central domain controller can also switch the camera's working mode according to the actual conditions.
Referring to fig. 6, fig. 6 is a schematic view of an embodiment of an electronic device according to an embodiment of the invention. As shown in fig. 6, an embodiment of the present invention provides an electronic device 500, which includes a memory 510, a processor 520, and a computer program 511 stored in the memory 510 and executable on the processor 520, wherein the processor 520 executes the computer program 511 to implement the following steps:
s1, acquiring road planning information and road surface condition information from the current position of a vehicle to a destination from a cloud navigation server in real time.
And S2, determining a scene with direct light or sudden light intensity change when the vehicle travels according to the road planning information and the road surface condition information, and marking in a navigation map.
And S3, when the vehicle approaches or enters the marked position, adjusting image acquisition parameters of the vehicle-mounted camera to capture an image.
Referring to fig. 7, fig. 7 is a schematic diagram illustrating an embodiment of a computer-readable storage medium according to the present invention. As shown in fig. 7, the present embodiment provides a computer-readable storage medium 600 on which a computer program 611 is stored, the computer program 611 implementing the following steps when executed by a processor:
s1, road planning information and road surface condition information from the current position of a vehicle to a destination are obtained from a cloud navigation server in real time.
And S2, determining a scene with direct light or sudden light intensity change when the vehicle travels according to the road planning information and the road surface condition information, and marking in a navigation map.
And S3, when the vehicle approaches or enters the marked position, adjusting image acquisition parameters of the vehicle-mounted camera to capture an image.
It should be noted that, in the foregoing embodiments, the description of each embodiment has an emphasis, and reference may be made to the related description of other embodiments for a part that is not described in detail in a certain embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A vehicle-mounted camera control method based on fusion perception, characterized by comprising:
acquiring, in real time from a cloud navigation server, road planning information and road surface condition information from the current position of the vehicle to a destination;
determining, according to the road planning information and the road surface condition information, the scenes of direct strong light or abrupt light intensity change that the vehicle will encounter while travelling, and marking them in a navigation map;
when the vehicle approaches or enters a marked position, adjusting the image acquisition parameters of the vehicle-mounted camera to capture images.
2. The method of claim 1, wherein determining the scenes of direct strong light or abrupt light intensity change that the vehicle will encounter while travelling, based on the road planning information and the road surface condition information, comprises:
identifying the uphill, downhill, level, culvert and tunnel sections in the planned path from the road planning information and the road surface condition information, and predicting, from the driving state of the vehicle, the time at which the vehicle will reach each type of section;
for the uphill, downhill and level sections, calculating the incident angle of sunlight onto the lens of the vehicle-mounted camera from the road gradient and the predicted arrival time, and determining from it whether a direct-sunlight scene will occur while the vehicle drives those sections; for culvert and tunnel sections, predicting, from the time at which the vehicle will reach the entrance or exit, whether the light intensity difference between inside and outside exceeds a threshold, and if so, determining that an abrupt light-intensity-change scene exists where the vehicle enters or leaves the section.
3. The method of claim 1, wherein adjusting the vehicle-mounted camera image acquisition parameters to capture images when the vehicle approaches or enters the marked position comprises: obtaining the ambient light intensity at the current moment, adjusting the light transmittance according to the ambient light intensity, and having the vehicle-mounted camera capture images under the adjusted transmittance.
4. The method of claim 3, further comprising recording the ambient light intensity, the transmittance adjustment result, and the image collected by the vehicle-mounted camera, and learning and optimizing the transmittance adjustment strategy by using a neural network algorithm according to a history.
5. The method of claim 1, wherein the cloud navigation server is configured to perform road planning in response to a road planning request for accessing the vehicle, and is configured to obtain a vehicle driving status of the accessing vehicle.
6. The method of claim 5, wherein the vehicle travel state comprises position information, speed information, and body control information.
7. The method of claim 6, further comprising obtaining, in real time from the cloud navigation server, the body control information of oncoming vehicles within a preset distance, and, if an oncoming vehicle has its high beam on and the ambient light intensity is below a specified value, adjusting the vehicle-mounted camera image acquisition parameters to capture images.
8. A vehicle-mounted camera control device based on fusion perception, characterized by comprising:
the data acquisition module is used for acquiring road planning information and road surface condition information from the current position of the vehicle to a destination from the cloud navigation server in real time;
the scene judging and marking module is used for determining a scene with direct strong light or abrupt light intensity change when the vehicle travels according to the road planning information and the road surface condition information and marking the scene in a navigation map;
and the parameter adjusting module is used for adjusting the image acquisition parameters of the vehicle-mounted camera to capture images when the vehicle approaches or enters the marked position.
9. An electronic device, comprising:
a memory for storing a computer software program;
a processor, configured to read and execute the computer software program, so as to implement the fusion perception-based vehicle-mounted camera control method according to any one of claims 1 to 7.
10. A non-transitory computer readable storage medium, wherein the storage medium stores therein a computer software program for implementing the fusion perception-based vehicle camera control method according to any one of claims 1-7.
CN202210511643.7A 2022-05-11 2022-05-11 Vehicle-mounted camera control method and device based on fusion perception Pending CN115278095A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210511643.7A CN115278095A (en) 2022-05-11 2022-05-11 Vehicle-mounted camera control method and device based on fusion perception

Publications (1)

Publication Number Publication Date
CN115278095A true CN115278095A (en) 2022-11-01

Family

ID=83759372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210511643.7A Pending CN115278095A (en) 2022-05-11 2022-05-11 Vehicle-mounted camera control method and device based on fusion perception

Country Status (1)

Country Link
CN (1) CN115278095A (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104038684A * 2013-03-08 2014-09-10 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
WO2014195406A1 * 2013-06-07 2014-12-11 Continental Automotive Gmbh Method for acquiring on-vehicle navigation information and on-vehicle navigation system
CN105635597A * 2015-12-21 2016-06-01 Hubei University of Technology Auto-exposure method and system for vehicle-mounted camera
CN107621267A * 2017-09-05 2018-01-23 Shanghai Pateo Yuezhen Network Technology Service Co., Ltd. Navigation method and system, and vehicle-mounted terminal, based on road condition cameras
WO2018040416A1 * 2016-08-30 2018-03-08 Shenzhen Launch Tech Co., Ltd. Driving road condition prompt method and device
CN109167929A * 2018-09-30 2019-01-08 Shenzhen SenseTime Technology Co., Ltd. Method, device and electronic equipment for adjusting vehicle-mounted camera parameters
WO2019037489A1 * 2017-08-25 2019-02-28 Tencent Technology (Shenzhen) Co., Ltd. Map display method, apparatus, storage medium and terminal
WO2019042503A1 * 2017-09-01 2019-03-07 Conti Temic Microelectronic Gmbh Method and device for predictable exposure control of at least one first vehicle camera
WO2020082745A1 * 2018-10-26 2020-04-30 Huawei Technologies Co., Ltd. Camera apparatus adjustment method and related device
CN111756962A * 2019-03-29 2020-10-09 Shanghai Qinggan Intelligent Technology Co., Ltd. Camera device and control method thereof
CN112365544A * 2019-07-26 2021-02-12 Beijing Baidu Netcom Science and Technology Co., Ltd. Image recognition interference detection method and device, computer equipment and storage medium
CN113223312A * 2021-04-29 2021-08-06 Chongqing Changan Automobile Co., Ltd. Camera blindness prediction method and device based on map, and storage medium
CN113689505A * 2020-05-18 2021-11-23 Momenta (Suzhou) Technology Co., Ltd. Method and device for adjusting exposure parameters of vehicle-mounted camera based on positioning information
CN113780062A * 2021-07-26 2021-12-10 Lantu Automobile Technology Co., Ltd. Vehicle-mounted intelligent interaction method based on emotion recognition, storage medium and chip
JP2022027232A * 2020-07-31 2022-02-10 Hamai Denkyu Kogyo Co., Ltd. Straight-tube LED lamp with camera, lighting/monitoring camera system incorporating the same, method for controlling the straight-tube LED lamp with camera, and method for controlling the lighting/monitoring camera system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
State Grid Jiangsu Electric Power Co., Ltd. and Southeast University: "'Three-in-One' Electronic Highway Technology and Applications", Southeast University Press, Nanjing, pages 319-320 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117889870A * 2024-03-14 2024-04-16 Tencent Technology (Shenzhen) Co., Ltd. Method and device for determining tunnel entry and exit, electronic equipment and storage medium
CN117889870B * 2024-03-14 2024-05-28 Tencent Technology (Shenzhen) Co., Ltd. Method and device for determining tunnel entry and exit, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN103874931B Method and apparatus for determining the position of an object in the surroundings of a vehicle
KR101768500B1 (en) Drive assistance apparatus and method for controlling the same
EP3475121B1 (en) Imaging system with adaptive high beam control
CN113950703A (en) With detectors for point cloud fusion
US11676403B2 (en) Combining visible light camera and thermal camera information
CN114103946A (en) Dynamic stop time threshold selection for hands-free driving
CN114730186A (en) Method for operating an autonomous driving function of a vehicle
CN113135183A (en) Control system of vehicle, control method of control system of vehicle, and computer-readable recording medium
CN112440881B (en) Self-adaptive adjusting method and device for rearview mirror
CN115278095A (en) Vehicle-mounted camera control method and device based on fusion perception
US20240020988A1 (en) Traffic light detection and classification for autonomous driving vehicles
CN113642372B (en) Method and system for recognizing object based on gray image in operation of autonomous driving vehicle
CN113848702A (en) Adaptive sensor control
CN112449153B (en) Systems and methods for vision sensor detection
CN112046480A (en) Control method and device for vehicle speed limit
JP7125893B2 (en) TRIP CONTROL DEVICE, CONTROL METHOD AND PROGRAM
US20220121216A1 (en) Railroad Light Detection
JP2020181310A (en) Vehicular illumination control system
US11288528B2 (en) Differentiation-based traffic light detection
US20240010228A1 (en) Method for operating a transportation vehicle and a transportation vehicle
CN114771568B (en) Speed limiting fusion method and system based on camera and map information and vehicle
RU2807793C1 (en) Method for automatically limiting vehicle speed
US20240029450A1 (en) Automated driving management system and automated driving management method
CN116027375B (en) Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN115230692B (en) Vehicle control method and device, vehicle and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20221101