WO2021159279A1 - Exposure control method and apparatus, readable storage medium and computer device - Google Patents


Info

Publication number
WO2021159279A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
area
image
exposure control
sub
Prior art date
Application number
PCT/CN2020/074759
Other languages
English (en)
Chinese (zh)
Inventor
杨超
刘念邱
Original Assignee
深圳元戎启行科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳元戎启行科技有限公司 filed Critical 深圳元戎启行科技有限公司
Priority to PCT/CN2020/074759 priority Critical patent/WO2021159279A1/fr
Priority to CN202080003169.6A priority patent/CN113545031A/zh
Publication of WO2021159279A1 publication Critical patent/WO2021159279A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene

Definitions

  • This application relates to image processing technology, in particular to exposure control methods, devices, readable storage media, and computer equipment.
  • the camera device can be used to obtain images, thereby facilitating the recording and preservation of the images.
  • camera devices have been widely used in electronic equipment such as drones, vehicle recorders, and mobile phones.
  • the image is usually divided in advance into a region of interest and a region of non-interest, so that exposure adjustment is performed according to brightness changes in the region of interest.
  • the inventor found that the exposure effect of camera devices in the related art is poor.
  • the embodiments of the present application provide an exposure control method, device, readable storage medium, and computer equipment, which can improve the exposure effect of a camera device.
  • An exposure control method, including: detecting a target object in a captured image, and dividing the image into a first area and a second area according to the position of the target object in the image; calculating a brightness statistical value of the image according to the first area and the second area; and performing exposure adjustment according to the brightness statistical value.
  • An exposure control device including:
  • An image division module configured to detect a target object in a captured image, and divide the image into a first area and a second area according to the position of the target object in the image;
  • a brightness calculation module configured to calculate the brightness statistical value of the image according to the first area and the second area
  • the exposure adjustment module is configured to perform exposure adjustment according to the brightness statistical value.
  • a computer-readable storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the steps of the exposure control method as described in the above-mentioned embodiments are realized.
  • a computer device includes a memory and a processor, and the memory stores computer-readable instructions; when executing the instructions, the processor performs the exposure control method described in the above-mentioned embodiments.
  • the above exposure control method, device, readable storage medium, and computer equipment detect the target object in the captured image and divide the image into a first area and a second area according to the position of the target object in the image; the brightness statistical value of the image is calculated according to the first area and the second area, and the exposure is adjusted according to the brightness statistical value.
  • the exposure control method, device, readable storage medium, and computer equipment of the present application detect the position of the target object in the image in real time, recalculate the brightness statistical value according to that real-time position, and adjust the exposure according to the brightness statistical value, so that the exposure adjustment scheme better matches the brightness of the target object in the image and the exposure effect is better.
  • FIG. 1 is a schematic flowchart of an exposure control method provided in an embodiment of this application
  • FIG. 2 is a schematic flowchart of an exposure control method provided in an embodiment of the application.
  • FIG. 3 is a schematic flowchart of step S002 of the exposure control method provided in an embodiment of the application.
  • FIG. 4 is a schematic flowchart of an exposure control method provided in an embodiment of the application.
  • FIG. 5 is a schematic flowchart of an exposure control method provided in an embodiment of the application.
  • FIG. 6 is a schematic flowchart of step S100 of the exposure control method provided in an embodiment of the application.
  • FIG. 7 is a schematic flowchart of step S200 of the exposure control method provided in an embodiment of the application.
  • FIG. 8 is a schematic flowchart of an exposure control method provided in an embodiment of the application.
  • FIG. 9 is a schematic flowchart of step S008 of the exposure control method provided in an embodiment of the application.
  • FIG. 10 is a schematic diagram of the frame structure of an exposure control device provided in an embodiment of the application.
  • FIG. 11 is a schematic diagram of the frame structure of the exposure control device provided in an embodiment of the application.
  • FIG. 12 is a schematic diagram of the frame structure of the exposure control device provided in an embodiment of the application.
  • FIG. 13 is a schematic diagram of the frame structure of the exposure control device provided in an embodiment of the application.
  • FIG. 14 is a schematic diagram of the frame structure of the exposure control device provided in an embodiment of the application.
  • the exposure control method provided by the present application is applied to an imaging device.
  • the camera device uses the exposure control method of the present application to adjust the exposure according to the image acquired in real time, thereby improving the exposure effect.
  • the exposure control method provided by the present application may be as shown in FIG. 1, and includes the following steps:
  • S100 Detect a target object in the captured image, and divide the image into a first area and a second area according to the position of the target object in the image.
  • the captured image refers to the image captured in real time during the working process of the camera device.
  • the exposure control method provided by the present application is to adjust the exposure of the camera device based on the image taken by the camera device at the current moment, so as to improve the exposure effect of the camera device when the image is taken at the next moment.
  • the target object refers to the target object in the captured image.
  • the target object is a traffic light; in other embodiments, the target object is a moving animal or human body.
  • the position of the target object in the captured image is constantly moving. Based on this, the exposure control method of the present application needs to detect the position of the target object in the image in real time after the image is taken.
  • the image is further divided into a first area and a second area according to the position of the target object.
  • the first area of the image is the area that contains the target object,
  • and the second area of the image is the area that does not contain the target object. It should be understood that "first" and "second" here are used only to distinguish different regions of the image and carry no other meaning. In some other embodiments, the first area may instead be the area that does not contain the target object, and the second area the area that contains it; this is not limited here.
  • S200 Calculate the brightness statistical value of the image according to the first area and the second area.
  • the brightness statistical value refers to the statistical result obtained by performing brightness statistics on the captured image.
  • the brightness statistical value of the image is calculated according to the division result of the first area and the second area.
  • As the target object moves, the division of the first area and the second area changes; after the division changes, the brightness statistical value of the image is recalculated according to the new first area and second area.
  • the brightness statistical value of the image calculated by the exposure control method is based on the real-time position of the target object in the image.
  • S300 Perform exposure adjustment according to the brightness statistical value.
  • the exposure adjustment can be performed according to the calculated brightness statistical value.
  • the exposure control method of the present application detects the position of the target object in the image in real time during the process of capturing the image by the camera device, and divides the image into a first area and a second area. After the method divides the image, the brightness statistical value is calculated according to the division result of the first area and the second area, and the exposure adjustment is performed according to the brightness statistical value.
  • the exposure control method can make the brightness matching degree of the exposure adjustment scheme and the target object in the image higher, thereby improving the exposure effect of the camera device.
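  • The three steps S100–S300 above can be sketched as a single control-loop iteration. The function names below (detect_target, split_regions, and so on) are illustrative assumptions, not identifiers from this application:

```python
def exposure_control_step(image, detect_target, split_regions,
                          brightness_statistic, adjust_exposure):
    """One iteration of the exposure control method (steps S100-S300)."""
    # S100: detect the target object and divide the image accordingly.
    target_position = detect_target(image)
    first_area, second_area = split_regions(image, target_position)
    # S200: compute the brightness statistic from the two areas.
    stat = brightness_statistic(first_area, second_area)
    # S300: derive an exposure adjustment from the statistic.
    return adjust_exposure(stat)
```

  • In practice each callback would be backed by the camera pipeline; they are left as parameters here so that the control flow itself stays visible.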
  • the exposure control method of the present application before step S100, further includes:
  • S002 Acquire location information of the target object.
  • the target object is a fixed-position object such as a traffic light; the camera device using the exposure control method of the present application is mounted on a vehicle-mounted recorder or a drone.
  • the exposure control method of the present application first obtains the position information of the target object before performing step S100.
  • the position information of the target object refers to the position information of the target object relative to the imaging device that executes the exposure control method of the present application.
  • obtaining the location information of the target object is achieved as follows: at least one of an infrared detection device, an ultraviolet detection device, and a laser detection device emits infrared light, ultraviolet light, or laser light to detect the target object, obtains its location information, and passes it to the camera device, which thereby obtains the location information of the target object.
  • the location information of the target object is obtained as follows: GPS (Global Positioning System) is used to locate the camera device on a map on which the location of the target object is also marked. The camera device can then obtain the position of the target object relative to itself from the target object's location on the map and its own location.
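  • As a minimal sketch of the map-based approach, the offset between the camera's marked location and the target's marked location can be reduced to a coarse direction label such as "upper left". The coordinate convention (x grows east, y grows north) and the function name are assumptions for illustration:

```python
def coarse_direction(camera_xy, target_xy):
    """Coarse direction of the target relative to the camera on a 2-D map.

    Assumes x grows to the right (east) and y grows upward (north).
    """
    dx = target_xy[0] - camera_xy[0]
    dy = target_xy[1] - camera_xy[1]
    horizontal = "left" if dx < 0 else "right"
    vertical = "upper" if dy > 0 else "lower"
    return f"{vertical} {horizontal}"
```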
  • S004 Calculate the position information of the target object in the image according to the position information.
  • the position information of the target object in the image is calculated according to the position information of the target object.
  • the exposure control method of the present application calculates the orientation information of the target object in the image captured by the camera device based on the position information of the target object and the camera angle of the camera device.
  • the lens direction of the camera device is the same as the movement direction, and the central axis of the lens is parallel to the movement direction.
  • the position information of the target object is: the target object is located at the upper left of the imaging device
  • the orientation information of the target object in the image is: the target object is located at the upper left of the captured image.
  • step S100 may include: S110, detecting the target object in the captured image; and S120, dividing the image into a first area and a second area according to the position of the target object in the image.
  • S110 is specifically: detecting the target object in the captured image according to the orientation information of the target object in the image. That is, in step S004, after calculating the position information of the target object in the image according to the position information, the target object in the image is detected according to the position information.
  • For example, if the orientation information calculated in step S004 indicates that the target object is located at the upper left of the captured image, then in step S110 detection is focused first on the upper left of the image.
  • Before detecting the target object in the image, the above exposure control method also calculates the orientation information of the target object in the image from the target object's position information, so as to assist the detection in step S110 and improve the speed and accuracy of detecting the target object in the image.
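  • One plausible way to use the orientation hint is to order the search windows so that the hinted image quadrant is scanned first; the quadrant scheme below is an illustrative assumption:

```python
def prioritized_search_windows(width, height, orientation):
    """Return image quadrants as (x0, y0, x1, y1) boxes, with the quadrant
    matching the orientation hint (e.g. "upper left") placed first."""
    half_w, half_h = width // 2, height // 2
    quadrants = {
        "upper left": (0, 0, half_w, half_h),
        "upper right": (half_w, 0, width, half_h),
        "lower left": (0, half_h, half_w, height),
        "lower right": (half_w, half_h, width, height),
    }
    hinted = quadrants.pop(orientation)  # search the hinted quadrant first
    return [hinted] + list(quadrants.values())
```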
  • step S002 is implemented by GPS (Global Positioning System) positioning.
  • step S002 may include:
  • S0022 Acquire map information, where the map information has a location mark of the target object.
  • the map information refers to a map with a geographical indication where the camera is located, and at the same time, the map information also has a location mark of the target object. At this time, after the location of the camera device is marked on the map information, the position information of the target object corresponding to the location mark relative to the camera device can be obtained.
  • the map information may be a two-dimensional map or a three-dimensional map, without limitation.
  • the current location refers to the current location of the camera device that executes the exposure control method of the present application.
  • This step is to locate the location of the camera on the aforementioned map information.
  • This step can be implemented by positioning the camera device through GPS (Global Positioning System).
  • the position information of the target object can be obtained according to the position information of the camera device on the map information and the position mark of the target object.
  • the position information of the target object refers to the position information of the target object relative to the imaging device that executes the exposure control method of the present application.
  • step S100 may include: S110, detecting a target object in the captured image, and S120, dividing the image into The first area and the second area.
  • step S110 includes:
  • S112 Acquire detection features of the target object.
  • the exposure control method of the present application can detect the target object according to the detection characteristics of the target object.
  • the detection characteristics of the target object may include color or/and shape. After the image is captured, the target object can be detected according to the color and shape of each object in the captured image.
  • step S110 it may further include:
  • S005 Pre-store the label and detection feature of the target object.
  • the label and detection feature of the target object are stored in the imaging device that executes the exposure control method of the present application in advance.
  • the target object may include one or more of traffic lights, pedestrians, and non-motorized vehicles.
  • the tags of the target objects are: traffic lights, pedestrians and non-motorized vehicles.
  • the detection characteristics of the traffic light such as shape and color, are stored.
  • the detection features of pedestrians such as height, driving speed, and contour are stored.
  • the detection features of the non-motorized vehicle such as contour and driving speed, are stored.
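  • The pre-stored labels and detection features could be kept in a simple lookup table; the concrete feature values below are illustrative placeholders, not data from this application:

```python
# Hypothetical pre-stored table: label -> detection features (step S005).
DETECTION_FEATURES = {
    "traffic light": {"shape": "circular lamps", "color": ["red", "yellow", "green"]},
    "pedestrian": {"height": "0.5-2.2 m", "speed": "walking", "contour": "upright"},
    "non-motorized vehicle": {"contour": "two-wheeled", "speed": "slow"},
}

def features_for(labels):
    """Extract the detection features for the requested target labels."""
    return {label: DETECTION_FEATURES[label] for label in labels}
```

  • Obtaining one label or several labels then corresponds to detecting one target object or several target objects at the same time.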
  • step S112 specifically includes:
  • When the camera device executes the exposure control method of the present application, it can first obtain the label of the target object to be detected, and then extract the detection features of the target object based on the label, so as to detect the target object.
  • the exposure control method of the present application may obtain one or more tags when obtaining the tags of the target object. That is, in the exposure control method of the present application, only one target object can be detected at the same time, or multiple target objects can be detected at the same time.
  • the exposure control method of the present application, before step S100, may further include:
  • S006 Divide the image into several sub-areas, and the several sub-areas are distributed in an array.
  • the exposure control method of the present application further divides the image into several sub-areas to facilitate the division of the first area and the second area.
  • Here, "several" refers to an integer of four or more.
  • The sub-areas are arranged in both rows and columns, thus presenting an array distribution.
  • the exposure control method of the present application divides the image into 16 sub-regions, and the 16 sub-regions are distributed in four rows and four columns. In other embodiments, the exposure control method of the present application divides the image into 25 sub-areas, and the 25 sub-areas are distributed in five rows and five columns.
  • the several sub-regions have the same shape and the same size, so as to facilitate the calculation of the subsequent brightness statistics.
  • each of the several sub-regions is rectangular.
  • After the image is divided into several sub-regions, each sub-region generally contains several light-emitting pixels.
  • When working and emitting light, the light-emitting pixels of all sub-regions together constitute the image.
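  • The four-rows-by-four-columns example can be sketched as follows; the tuple layout and the function name are assumptions for illustration:

```python
def divide_into_subareas(width, height, rows=4, cols=4):
    """Split an image of the given size into rows x cols equal rectangular
    sub-areas, returned as (x0, y0, x1, y1) boxes in row-major order."""
    sub_w, sub_h = width // cols, height // rows
    return [(c * sub_w, r * sub_h, (c + 1) * sub_w, (r + 1) * sub_h)
            for r in range(rows) for c in range(cols)]
```

  • With rows=cols=4 this yields the 16-sub-region example; with rows=cols=5, the 25-sub-region example.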
  • step S100 of the exposure control method of the present application may include: S110, detecting a target object in the captured image, and S120, dividing the image into a first area and a second area according to the position of the target object in the image.
  • step S110 is executed after step S006, and step S120 is executed after step S110.
  • step S120 includes:
  • S122 Determine whether each sub-region is at least partially used to display the target object.
  • In step S006, the image has already been divided into several sub-areas distributed in an array. Therefore, when dividing the first area and the second area, each sub-area of the image is classified according to whether it is at least partially used to display the target object.
  • A sub-region being at least partially used to display the target object means that at least one light-emitting pixel in the sub-region is used to display the target object.
  • If at least part of a sub-region is used to display the target object, the sub-region is assigned to the first region, i.e. the area of the image containing the target object.
  • If no part of a sub-region is used to display the target object, the sub-region is assigned to the second region, i.e. the area of the image not containing the target object.
  • In this way, the captured image is divided into several sub-regions, and each sub-region is judged as to whether at least part of it is used to display the target object, thereby distinguishing the sub-regions and dividing the first region and the second region.
  • This division method keeps the calculation amount of the exposure control method of the present application relatively small.
  • In other embodiments, the division may instead be done at pixel level: each pixel unit is judged as to whether it is used to display the target object; pixel units used to display the target object are assigned to the first region, and pixel units not used to display it are assigned to the second region. This division method makes the division of the first area and the second area more precise.
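  • A sketch of the sub-region classification in step S122, treating "at least partially used to display the target object" as overlap with the target's bounding box (the box representation is an assumption):

```python
def split_first_second(subareas, target_box):
    """Assign each (x0, y0, x1, y1) sub-area to the first area if it
    overlaps the target's bounding box at all, else to the second area."""
    tx0, ty0, tx1, ty1 = target_box
    first, second = [], []
    for (x0, y0, x1, y1) in subareas:
        # Standard axis-aligned rectangle overlap test.
        overlaps = x0 < tx1 and tx0 < x1 and y0 < ty1 and ty0 < y1
        (first if overlaps else second).append((x0, y0, x1, y1))
    return first, second
```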
  • step S200 includes:
  • S210 Obtain a first brightness value of each sub-area in the first area, and obtain a second brightness value of each sub-area in the second area.
  • the brightness value in each sub-area is obtained by taking the sub-area as a unit.
  • the brightness value of each sub-region in the first region as the first brightness value
  • the brightness value of each sub-region in the second region as the second brightness value.
  • S220 Acquire a first weight corresponding to the first area and a second weight corresponding to the second area.
  • the first weight and the second weight are obtained.
  • the first weight is used to match the first brightness value during the calculation of the brightness statistical value;
  • the second weight is used to match the second brightness value during the calculation of the brightness statistical value. Therefore, the first weight corresponds to the first area, and the second weight corresponds to the second area.
  • the first area is an area containing the target object
  • the second area is an area not containing the target object. Therefore, in order to make the calculation result of the brightness statistical value more affected by the first region, the first weight corresponding to the first region should be greater than the second weight of the second region. If the first weight can be adjusted within the first threshold and the second weight can be adjusted within the second threshold, the lowest value of the first threshold should be greater than the highest value of the second threshold.
  • the first threshold of the first weight may be 1.7 to 1.9; the second threshold of the second weight may be 0.1 to 0.3.
  • S230 Calculate a brightness statistical value according to each first brightness value and first weight, and each second brightness value and second weight.
  • the brightness statistical value is calculated.
  • the brightness statistics value is:
  • Brightness statistical value = Σ(first brightness value × first weight) + Σ(second brightness value × second weight).
  • the brightness statistical value is equal to the cumulative sum of the product of each first brightness value and the first weight, plus the cumulative sum of the product of each second brightness value and the second weight.
  • the calculation of the brightness statistical value in step S200 needs to be based on the first weight and the second weight to obtain the product of the first brightness value and the first weight, and the product of the second brightness value and the second weight.
  • In some embodiments, the values of the first weight and the second weight are fixed; they are preset so that they can be obtained directly when step S220 is executed.
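  • The weighted sum can be written directly from the formula above; the default weights 1.8 and 0.2 are illustrative values inside the stated 1.7–1.9 and 0.1–0.3 thresholds, not values prescribed by the application:

```python
def brightness_statistic(first_values, second_values,
                         first_weight=1.8, second_weight=0.2):
    """Sum of (first brightness x first weight) over the first area's
    sub-regions, plus sum of (second brightness x second weight)."""
    return (sum(v * first_weight for v in first_values)
            + sum(v * second_weight for v in second_values))
```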
  • the exposure control method of the present application, before step S200, further includes: S008, setting the first weight corresponding to the first area and the second weight corresponding to the second area according to the first area and the second area.
  • step S008 may include:
  • S0082 Acquire a first number of sub-areas in the first area.
  • the image has been divided into several sub-regions, and each sub-region has been divided.
  • the first area is composed of several sub-areas; the second area is also composed of several sub-areas.
  • the number of sub-regions in the first region is acquired. To facilitate distinction, the number of sub-regions in the first region is named the first number.
  • S0084 Acquire a second number of sub-regions in the image.
  • the total number of sub-regions in the image is named the second number.
  • S0086 Set the first weight and the second weight according to the percentage of the first quantity to the second quantity.
  • the first weight and the second weight are set.
  • If the percentage of the first number to the second number is large, the first area occupies a large portion of the image; if the percentage is small, the first area occupies a small portion.
  • the first threshold is 1.7 to 1.9
  • the second threshold is 0.1 to 0.3.
  • For example, if the percentage of the first quantity to the second quantity is 80%, the first weight is set to 1.7 and the second weight to 0.3; if the percentage is 40%, the first weight is set to 1.9 and the second weight to 0.1.
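  • The description gives only two sample points (80% → 1.7/0.3 and 40% → 1.9/0.1); one plausible reading, assumed here, is linear interpolation between them:

```python
def set_weights(first_count, total_count):
    """Set the first and second weights from the share of sub-regions in
    the first area (step S0086), interpolating the two worked examples."""
    pct = first_count / total_count
    pct = min(max(pct, 0.4), 0.8)  # clamp to the exemplified 40%-80% span
    first_weight = 1.9 - (pct - 0.4) * 0.5   # larger first area -> lower weight
    second_weight = 0.1 + (pct - 0.4) * 0.5  # larger first area -> higher 2nd weight
    return first_weight, second_weight
```

  • Note the interpolation keeps every first weight within 1.7–1.9 and every second weight within 0.1–0.3, so the first weight always exceeds the second.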
  • step S300 is specifically: judging the magnitude relationship between the brightness statistical value and the adjustment threshold, and performing exposure adjustment according to the magnitude relationship.
  • The adjustment threshold is also a range of values; to distinguish it from the above-mentioned first and second thresholds, this threshold is named the adjustment threshold.
  • the exposure adjustment includes adjusting at least one of an exposure gain, an exposure time, and an aperture of the exposure.
  • If the brightness statistical value is greater than the maximum value of the adjustment threshold, the image is too bright; the exposure gain can be reduced, the exposure time shortened, and the aperture narrowed. If the brightness statistical value is less than the minimum value of the adjustment threshold, the image is too dark; the exposure gain can be increased, the exposure time lengthened, and the aperture widened. If the brightness statistical value lies between the minimum and maximum values of the adjustment threshold, the brightness of the image is normal and no exposure adjustment is made.
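  • The threshold comparison in step S300 can be sketched as follows; the returned action labels are illustrative stand-ins for adjusting gain, exposure time, and aperture:

```python
def exposure_action(stat, low, high):
    """Compare the brightness statistic with the adjustment threshold
    range [low, high] and decide the direction of exposure adjustment."""
    if stat > high:
        # Too bright: reduce gain, shorten exposure time, narrow aperture.
        return "decrease exposure"
    if stat < low:
        # Too dark: raise gain, lengthen exposure time, widen aperture.
        return "increase exposure"
    return "no change"  # brightness within the normal range
```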
  • The exposure control method of the present application detects the position of the target object in the image in real time, recalculates the brightness statistical value according to that real-time position, and adjusts the exposure according to the brightness statistical value, so that the exposure adjustment scheme better matches the brightness of the target object in the image and the exposure effect is better.
  • the exposure control device 10 includes an image division module 100, a brightness calculation module 200 and an exposure adjustment module 300.
  • the image dividing module 100 is configured to detect a target object in the captured image, and divide the image into a first area and a second area according to the position of the target object in the image.
  • the brightness calculation module 200 is configured to calculate the brightness statistical value of the image according to the first area and the second area.
  • the exposure adjustment module 300 is configured to perform exposure adjustment according to the brightness statistical value.
  • the exposure control device 10 of the present application further includes a positioning module 002 and an orientation calculation module 004.
  • the positioning module 002 is configured to obtain the position information of the target object.
  • the position calculation module 004 is configured to calculate the position information of the target object in the image according to the position information.
  • the image division module 100 is configured to: detect the target object in the captured image according to the orientation information of the target object in the image; and divide the image into a first area and a second area according to the position of the target object in the image.
  • the positioning module 002 is configured to: obtain map information on which the location of the target object is marked; detect the positioning information of the current location on the map information; and obtain the position information of the target object from the location mark relative to the positioning information.
  • the image division module 100 is configured to obtain the detection feature of the target object; and detect the target object from the captured image according to the detection feature.
  • the exposure control device 10 of the present application further includes a storage module 005 connected to the image dividing module 100.
  • the storage module is configured to pre-store the tags and detection features of the target object.
  • the image division module 100 is configured to obtain the label of the target object, and extract the detection feature of the target object based on the label.
  • the exposure control device 10 of the present application further includes a sub-region dividing module 006.
  • the sub-region dividing module 006 is configured to divide the image into several sub-regions, and the several sub-regions are distributed in an array.
  • the image division module 100 is configured to: determine whether each sub-region is at least partially used to display the target object; if at least part of the sub-region is used to display the target object, The sub-area is divided into the first area; if the sub-area is not used to display the target object, the sub-area is divided into the second area.
  • the brightness calculation module 200 is configured to: obtain the first brightness value of each sub-area in the first area, and obtain the second brightness value of each sub-area in the second area; obtain the first weight corresponding to the first area and the second weight corresponding to the second area; and calculate the brightness statistical value according to each first brightness value and the first weight, and each second brightness value and the second weight.
  • the exposure control device 10 of the present application further includes a weight setting module 008.
  • the weight setting module 008 is configured to set a first weight corresponding to the first area and a second weight corresponding to the second area according to the first area and the second area.
  • the weight setting module 008 is configured to: obtain a first number of sub-areas in the first area; obtain a second number of sub-areas in the image; and set the first weight and the second weight according to the percentage of the first number to the second number.
  • the exposure adjustment module 300 of the above-mentioned exposure control device 10 is configured to determine the magnitude relationship between the brightness statistical value and the adjustment threshold, and adjust the exposure according to the magnitude relationship.
  • the exposure adjustment includes adjusting at least one of exposure gain, exposure time, and aperture.
  • the sub-region dividing module 006 is configured to divide the image into a plurality of sub-regions with the same shape and the same size.
  • the weight setting module 008 is configured such that the first weight is greater than the second weight.
  • the division of the modules in the above exposure control device is only for example. In other embodiments, the exposure control device of the present application can be divided into different modules as needed to complete all or part of the functions of the above exposure control device.
  • Each module in the above-mentioned exposure control device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • this application also provides a computer device.
  • the computer equipment includes a processor, a memory, and a display screen connected through a system bus.
  • the processor is used to provide calculation and control capabilities to support the operation of the entire computer equipment.
  • the memory is used to store data, programs, and/or instruction codes, etc., and at least one computer program is stored in the memory, and the computer program can be executed by a processor to implement the exposure control method suitable for computer equipment provided in the embodiments of the present application.
  • the memory may include a non-volatile storage medium such as a magnetic disk, an optical disk, or a read-only memory (ROM), or it may include a random access memory (RAM).
  • the memory includes a non-volatile storage medium and internal memory.
  • the non-volatile storage medium stores an operating system, a database, and a computer program.
  • the database stores data related to implementing an exposure control method provided by the above embodiments, for example, information such as the name of each process or application can be stored.
  • the computer program can be executed by a processor to implement an exposure control method provided by each embodiment of the present application.
  • the internal memory provides a cached operating environment for the operating system, database and computer program in the non-volatile storage medium.
  • the display screen may be a touch screen, such as a capacitive screen or an electronic screen, used to display the interface information of the application corresponding to the foreground process; it may also be used to detect touch operations on the display screen and generate corresponding instructions, such as instructions for switching applications between the foreground and the background.
  • the processor implements the following steps when executing the computer program: detecting the target object in the captured image, and dividing the image into a first area and a second area according to the position of the target object in the image;
  • the processor further implements the following steps when executing the computer program: acquiring position information of the target object; and calculating the orientation information of the target object in the image according to the position information.
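The step of locating a target object within the image from its known position can be illustrated with a standard pinhole-camera projection. The function name, focal lengths, and principal point below are assumptions for illustration; a real system would first transform the map position of the target into the camera frame using the camera pose:

```python
def project_to_image(point_cam, fx, fy, cx, cy):
    """Project a 3-D point given in the camera frame to pixel coordinates
    with a pinhole model (fx, fy: focal lengths; cx, cy: principal point)."""
    x, y, z = point_cam
    return fx * x / z + cx, fy * y / z + cy

# A point on the optical axis lands exactly on the principal point:
u, v = project_to_image((0.0, 0.0, 10.0), 800.0, 800.0, 320.0, 240.0)
```

The resulting pixel coordinates indicate where in the image the target object should appear, which is what the subsequent region division relies on.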
  • the processor further implements the following steps when executing the computer program: detecting the target object in the captured image according to the position information of the target object in the image.
  • the processor further implements the following steps when executing the computer program: acquiring map information, where the map information has a location mark of the target object; and obtaining the position information of the target object according to the location mark.
  • the processor further implements the following steps when executing the computer program: acquiring the detection feature of the target object; and detecting the target object from the captured image according to the detection feature.
  • the processor further implements the following steps when executing the computer program: pre-storing the label and the detection feature of the target object; in this case, obtaining the detection feature of the target object includes: obtaining the label of the target object, and extracting the detection feature of the target object according to the label.
  • the processor further implements the following steps when executing the computer program: dividing the image into several sub-areas, and the several sub-areas are distributed in an array.
  • the processor further implements the following steps when executing the computer program: judging whether each sub-region is at least partially used to display the target object; and if the sub-region is at least partially used to display the target object, dividing the sub-region into the first region.
  • the processor further implements the following steps when executing the computer program: if the sub-region is not used to display the target object, dividing the sub-region into the second region.
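Combining the array-style sub-region division with the display test above, a minimal sketch might look like the following; the function name is an assumption, and a rectangle-intersection test stands in for "at least partially used to display the target object":

```python
def divide_regions(img_w, img_h, cols, rows, target_box):
    """Split the image into cols x rows equally sized sub-regions and assign
    each to the first region (it overlaps target_box) or the second region.
    target_box = (x0, y0, x1, y1) in pixels. Returns two lists of
    (col, row) grid indices."""
    bx0, by0, bx1, by1 = target_box
    cell_w, cell_h = img_w / cols, img_h / rows
    first, second = [], []
    for row in range(rows):
        for col in range(cols):
            x0, y0 = col * cell_w, row * cell_h
            x1, y1 = x0 + cell_w, y0 + cell_h
            # "at least partially displays the target": rectangles intersect
            if x0 < bx1 and bx0 < x1 and y0 < by1 and by0 < y1:
                first.append((col, row))
            else:
                second.append((col, row))
    return first, second
```

A sub-region that merely touches the target's bounding box is counted into the first region, matching the "at least partially" wording.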
  • the processor further implements the following steps when executing the computer program: acquiring the first brightness value of each sub-region in the first region, and acquiring the second brightness value of each sub-region in the second region; acquiring the first weight corresponding to the first region and the second weight corresponding to the second region; and calculating the brightness statistical value according to each first brightness value and the first weight, and each second brightness value and the second weight.
  • the processor further implements the following steps when executing the computer program: according to the first area and the second area, a first weight corresponding to the first area and a second weight corresponding to the second area are set.
  • the processor further implements the following steps when executing the computer program: acquiring the first number of sub-regions in the first region; acquiring the second number of sub-regions in the image; and setting the first weight and the second weight according to the percentage of the first number to the second number.
  • when the processor executes the computer program, the first weight is greater than the second weight.
  • when the processor executes the computer program, the several sub-regions have the same shape and the same size.
  • the processor further implements the following steps when executing the computer program: judging the magnitude relationship between the brightness statistical value and the adjustment threshold, and adjusting the exposure according to the magnitude relationship.
  • the processor further implements the following steps when executing the computer program: adjusting at least one of the exposure gain, the exposure time, and the aperture.
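The threshold comparison and adjustment can be sketched as below. Adjusting only the gain with a fixed proportional step is a simplifying assumption, since the method allows adjusting any of the exposure gain, the exposure time, and the aperture:

```python
def adjust_exposure(brightness_stat, target, tolerance, gain, step=0.1):
    """Nudge the exposure gain toward a target brightness; inside the
    tolerance band around the target, the gain is left unchanged."""
    if brightness_stat < target - tolerance:   # too dark -> increase exposure
        return gain * (1.0 + step)
    if brightness_stat > target + tolerance:   # too bright -> decrease exposure
        return gain * (1.0 - step)
    return gain
```

Running this once per frame with the recomputed brightness statistic gives a simple closed-loop exposure controller; a deployed system would typically also clamp the gain to the sensor's supported range.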
  • the application also provides a computer-readable storage medium.
  • one or more non-volatile computer-readable storage media containing computer-executable instructions are provided; when the computer-executable instructions are executed by one or more processors, the processors are caused to execute the following steps of the exposure control method:
  • detecting the target object in the captured image, and dividing the image into a first area and a second area according to the position of the target object in the image;
  • the following steps are further implemented: obtaining position information of the target object; and calculating the orientation information of the target object in the image according to the position information.
  • the following steps are further implemented: detecting the target object in the captured image according to the position information of the target object in the image.
  • the following steps are further implemented: acquiring map information, the map information having a location mark of the target object; and obtaining the position information of the target object according to the location mark.
  • the following steps are further implemented: acquiring the detection feature of the target object; and detecting the target object from the captured image according to the detection feature.
  • when the computer-executable instructions are executed, the following steps are further implemented: pre-storing the label and the detection feature of the target object; in this case, obtaining the detection feature of the target object includes: obtaining the label of the target object, and extracting the detection feature of the target object according to the label.
  • the following steps are further implemented: the image is divided into several sub-areas, and the several sub-areas are distributed in an array.
  • the following steps are further implemented: judging whether each sub-region is at least partially used to display the target object; and if the sub-region is at least partially used to display the target object, dividing the sub-region into the first region.
  • the following steps are further implemented: if the sub-region is not used to display the target object, the sub-region is divided into the second region.
  • the following steps are further implemented: obtaining the first brightness value of each sub-area in the first area, and obtaining the second brightness value of each sub-area in the second area; obtaining the first weight corresponding to the first area and the second weight corresponding to the second area; and calculating the brightness statistical value according to each first brightness value and the first weight, and each second brightness value and the second weight.
  • the following steps are further implemented: according to the first area and the second area, a first weight corresponding to the first area and a second weight corresponding to the second area are set.
  • the following steps are further implemented: obtaining the first number of sub-regions in the first region; obtaining the second number of sub-regions in the image; and setting the first weight and the second weight according to the percentage of the first number to the second number.
  • when the computer program is executed by the processor, the first weight is greater than the second weight.
  • when the computer program is executed by the processor, the several sub-regions have the same shape and the same size.
  • the following steps are further implemented: determining the magnitude relationship between the brightness statistical value and the adjustment threshold, and adjusting the exposure according to the magnitude relationship.
  • the following steps are further implemented: adjusting at least one of the exposure gain, the exposure time, and the aperture.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM), which acts as external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to an exposure control method and apparatus, a readable storage medium, and a computer device. The exposure control method comprises the steps of: detecting the position of a target object in an image in real time and recalculating a brightness statistical value according to the real-time position of the target object; and performing exposure adjustment according to the brightness statistical value, so that the brightness of the exposure adjustment scheme matches the target object in the image to a higher degree and the exposure effect is better.
PCT/CN2020/074759 2020-02-11 2020-02-11 Exposure control method and apparatus, readable storage medium and computer device WO2021159279A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/074759 WO2021159279A1 (fr) 2020-02-11 2020-02-11 Exposure control method and apparatus, readable storage medium and computer device
CN202080003169.6A CN113545031A (zh) 2020-02-11 2020-02-11 Exposure control method and apparatus, readable storage medium and computer device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/074759 WO2021159279A1 (fr) 2020-02-11 2020-02-11 Exposure control method and apparatus, readable storage medium and computer device

Publications (1)

Publication Number Publication Date
WO2021159279A1 (fr) 2021-08-19

Family

ID=77291924

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/074759 WO2021159279A1 (fr) 2020-02-11 2020-02-11 Exposure control method and apparatus, readable storage medium and computer device

Country Status (2)

Country Link
CN (1) CN113545031A (fr)
WO (1) WO2021159279A1 (fr)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0568203A (ja) * 1991-09-06 1993-03-19 Sanyo Electric Co Ltd Video camera
KR0185909B1 (ko) * 1994-07-11 1999-05-01 김광호 Exposure control device for a video camera
CN101175157A (zh) * 2006-11-02 2008-05-07 松下电器产业株式会社 Digital camera
CN102665047A (zh) * 2012-05-08 2012-09-12 北京汉邦高科数字技术股份有限公司 Exposure control method for CMOS image sensor imaging
CN104917975A (zh) * 2015-06-01 2015-09-16 北京空间机电研究所 Adaptive automatic exposure method based on target features
CN105184776A (zh) * 2015-08-17 2015-12-23 中国测绘科学研究院 Target tracking method
CN106791475A (zh) * 2017-01-23 2017-05-31 上海兴芯微电子科技有限公司 Exposure adjustment method and vehicle-mounted camera device to which it is applicable


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114007019A (zh) * 2021-12-31 2022-02-01 杭州魔点科技有限公司 Method and system for predicting exposure on the basis of image brightness in a backlit scene
CN117496486A (zh) * 2023-12-27 2024-02-02 安徽蔚来智驾科技有限公司 Traffic light shape recognition method, readable storage medium and intelligent device
CN117496486B (zh) * 2023-12-27 2024-03-26 安徽蔚来智驾科技有限公司 Traffic light shape recognition method, readable storage medium and intelligent device

Also Published As

Publication number Publication date
CN113545031A (zh) 2021-10-22

Similar Documents

Publication Publication Date Title
US11093801B2 (en) Object detection device and object detection method
WO2021159279A1 (fr) Procédé et appareil de commande d'exposition, support de stockage lisible et dispositif informatique
WO2018201809A1 (fr) Dispositif et procédé de traitement d'image basé sur des caméras doubles
WO2017171659A1 (fr) Détection de lumière de signal
EP3771198B1 (fr) Procédé et dispositif de suivi de cible, plate-forme mobile et support de stockage
CN109922275B (zh) 曝光参数的自适应调整方法、装置及一种拍摄设备
EP3798975A1 (fr) Procédé et appareil d'identification de sujet, dispositif électronique et support d'enregistrement lisible par ordinateur
US20220222830A1 (en) Subject detecting method and device, electronic device, and non-transitory computer-readable storage medium
WO2020258978A1 (fr) Procédé et dispositif de détection d'objets
CN112396116A (zh) 一种雷电检测方法、装置、计算机设备及可读介质
US11836903B2 (en) Subject recognition method, electronic device, and computer readable storage medium
WO2019037038A1 (fr) Procédé et dispositif de traitement d'image, et serveur
CN111246100B (zh) 防抖参数的标定方法、装置和电子设备
CN111860352A (zh) 一种多镜头车辆轨迹全跟踪***及方法
CN112668462B (zh) 车损检测模型训练、车损检测方法、装置、设备及介质
EP3850827A1 (fr) Éclairage adaptatif pour une caméra à temps de vol sur un véhicule
WO2019084712A1 (fr) Procédé et appareil de traitement d'image, et terminal
CN110866486A (zh) 主体检测方法和装置、电子设备、计算机可读存储介质
CN107343154B (zh) 一种确定摄像装置的曝光参数的方法、装置和***
US11417125B2 (en) Recognition of license plate numbers from Bayer-domain image data
CN112804463A (zh) 一种曝光时间控制方法、装置、终端及可读存储介质
US20210231459A1 (en) Apparatus and method for collecting data for map generation
JP2018067305A (ja) ビジュアルオドメトリ方法及び装置
US10638045B2 (en) Image processing apparatus, image pickup system and moving apparatus
CN110796084A (zh) 车道线识别方法、装置、设备和计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20918929

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16-01-2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20918929

Country of ref document: EP

Kind code of ref document: A1