CN114173064A - Multiple exposure method and device and electronic equipment - Google Patents
Info
- Publication number
- CN114173064A (application CN202111488986.8A)
- Authority
- CN
- China
- Prior art keywords
- scene
- video channel
- video
- exposure
- video stream
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/73—Circuitry for compensating brightness variation in the scene by influencing the exposure time
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
- G08G1/0175—Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Studio Devices (AREA)
Abstract
The invention provides a multiple exposure method and device and an electronic device, belongs to the technical field of monitoring equipment, and solves the technical problem that existing monitoring equipment cannot monitor people and vehicles simultaneously. The multiple exposure method is applied to a monitoring device and comprises the following steps: acquiring a first video stream through a first video channel and performing scene analysis on the first video stream; if the scene is a human-vehicle scene, setting a second video channel to an alternating long and short exposure mode; if the scene is a human scene, setting the second video channel to a long exposure mode; and acquiring a second video stream through the second video channel.
Description
Technical Field
The invention relates to the technical field of monitoring equipment, and in particular to a multiple exposure method and device and an electronic device.
Background
As monitoring equipment becomes more intelligent, it is increasingly applied in the field of road traffic, providing strong support for road traffic safety management.
Monitoring cameras currently deployed on roads differ in functional emphasis and exposure mode. A sensor configured for general monitoring uses a long exposure time, giving a bright picture, but license plates are easily overexposed; a sensor configured for vehicle capture uses a short exposure time, giving clear license plates, but the picture is dark and pedestrians and the surrounding environment cannot be seen clearly. If a wide dynamic range mode is used to fuse the two exposures, moving vehicles and pedestrians are displaced between frames and image quality is poor.
Therefore, existing monitoring equipment suffers from the technical problem that it cannot monitor people and vehicles simultaneously.
Disclosure of Invention
The invention aims to provide a multiple exposure method and device and an electronic device, so as to solve the technical problem that existing monitoring equipment cannot monitor people and vehicles simultaneously.
In a first aspect, the present invention provides a multiple exposure method applied to a monitoring device, where the method includes:
acquiring a first video stream through a first video channel, and performing scene analysis on the first video stream;
if the scene is a human-vehicle scene, setting a second video channel to an alternating long and short exposure mode;
if the scene is a human scene, setting the second video channel to a long exposure mode;
acquiring a second video stream through the second video channel.
In a possible implementation, the step of obtaining a first video stream through a first video channel, and performing scene analysis on the first video stream includes:
determining in real time, through a vehicle detection algorithm, whether a vehicle is present in the first video stream;
if so, the scene is a human-vehicle scene;
if not, the scene is a human scene.
In one possible embodiment, the vehicle detection algorithm is generated by a YOLO deep learning algorithm.
In one possible embodiment, the first video channel is in a long exposure mode.
In a possible embodiment, the step of setting the second video channel to the alternating long and short exposure mode includes:
setting the second video channel to the alternating long and short exposure mode, and recording the exposure type of each frame of image;
the step of setting the second video channel to the long exposure mode includes:
setting the second video channel to the long exposure mode, and recording the exposure type of each frame of image.
In a possible implementation, after the step of acquiring the second video stream through the second video channel, the method further includes:
analyzing the short-exposure images in the second video stream using a license plate detection algorithm;
and analyzing the long-exposure images in the second video stream using a face recognition algorithm.
In one possible embodiment, the initial exposure mode of the second video channel is a short exposure.
In a second aspect, the present invention also provides a multiple exposure apparatus comprising:
a judging module, configured to acquire a first video stream through a first video channel and perform scene analysis on the first video stream;
a human-vehicle scene module, configured to set the second video channel to an alternating long and short exposure mode when the scene is a human-vehicle scene;
a human scene module, configured to set the second video channel to a long exposure mode when the scene is a human scene;
an acquisition module, configured to acquire a second video stream through a second video channel.
In a third aspect, the present invention further provides an electronic device, which includes a memory and a processor, where the memory stores a computer program operable on the processor, and the processor implements the steps of the method provided in the first aspect when executing the computer program.
In a fourth aspect, the present invention also provides a computer readable storage medium having stored thereon machine executable instructions which, when invoked and executed by a processor, cause the processor to perform the method provided by the first aspect.
The invention provides a multiple exposure method applied to a monitoring device, which includes the following steps: acquiring a first video stream through a first video channel and performing scene analysis on the first video stream; if the scene is a human-vehicle scene, setting a second video channel to an alternating long and short exposure mode; if the scene is a human scene, setting the second video channel to a long exposure mode; and acquiring a second video stream through the second video channel.
With the multiple exposure method provided by the invention, scene analysis is performed on the first video stream and the exposure mode of the second video channel is set according to the analysis result. In a human-vehicle scene, the second video channel is set to an alternating long and short exposure mode, which avoids overexposed license plates and overly dark faces, so the second video stream satisfies the monitoring requirements of both human and human-vehicle scenes. In a human scene, the second video channel is set to a long exposure mode, which brightens the picture and makes human scenes clearer. The method thus effectively solves the problem that monitoring equipment cannot monitor people and vehicles simultaneously, and improves the monitoring effect.
Accordingly, the multiple exposure device, the electronic device and the computer-readable storage medium provided by the invention achieve the same technical effects.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a multiple exposure method according to an embodiment of the present invention;
FIG. 2 is a detailed flowchart of step S1 of the multiple exposure method according to an embodiment of the present invention;
FIG. 3 is a schematic view of a multiple exposure apparatus according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "comprising" and "having," and any variations thereof, as referred to in embodiments of the present invention, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Monitoring cameras currently deployed on roads differ in functional emphasis and exposure mode. A sensor configured for general monitoring uses a long exposure time, giving a bright picture, but license plates are easily overexposed; a sensor configured for vehicle capture uses a short exposure time, giving clear license plates, but the picture is dark and pedestrians and the surrounding environment cannot be seen clearly. If a wide dynamic range mode is used to fuse the two exposures, moving vehicles and pedestrians are displaced between frames and image quality is poor.
Therefore, existing monitoring equipment suffers from the technical problem that it cannot monitor people and vehicles simultaneously.
To solve the above problems, embodiments of the present invention provide a multiple exposure method.
As shown in fig. 1, an embodiment of the present invention provides a multiple exposure method applied to a monitoring device, where the method includes:
s1: and acquiring a first video stream through the first video channel, and performing scene analysis on the first video stream.
S2A: if the scene is a human-vehicle scene, set the second video channel to an alternating long and short exposure mode.
S2B: if the scene is a human scene, set the second video channel to a long exposure mode.
S3: a second video stream is acquired through a second video channel.
With the multiple exposure method provided by the embodiment of the invention, scene analysis is performed on the first video stream and the exposure mode of the second video channel is set according to the analysis result. In a human-vehicle scene, the second video channel is set to an alternating long and short exposure mode, which avoids overexposed license plates and overly dark faces, so the second video stream satisfies the monitoring requirements of both human and human-vehicle scenes. In a human scene, the second video channel is set to a long exposure mode, which brightens the picture and makes human scenes clearer. The method thus effectively solves the problem that monitoring equipment cannot monitor people and vehicles simultaneously, and improves the monitoring effect.
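The mode-selection decision in steps S2A/S2B can be sketched as a small selector. This is an illustrative Python sketch, not part of the patent; the enum values and function name are assumptions.

```python
from enum import Enum

class ExposureMode(Enum):
    LONG = "long"                # brighter picture, suits human scenes
    ALTERNATING = "alternating"  # long/short interleave, suits human-vehicle scenes

def select_exposure_mode(vehicle_present: bool) -> ExposureMode:
    """Pick the second channel's exposure mode from the scene analysis result.

    A human-vehicle scene needs alternating long and short exposure so that
    both faces (long frames) and license plates (short frames) are captured;
    a human scene stays in long exposure for a brighter picture.
    """
    return ExposureMode.ALTERNATING if vehicle_present else ExposureMode.LONG
```

The selector would be re-evaluated whenever the scene analysis of the first video stream changes its verdict.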
As shown in fig. 2, in one possible implementation, the step of S1 includes:
and judging whether the vehicle exists in the first video stream in real time through a vehicle detection algorithm. If yes, the scene is a human-vehicle scene; if not, the scene is a human scene.
And detecting the first video stream in real time, judging whether a vehicle exists or not, if the vehicle is detected, determining that the first video stream is a human-vehicle scene, and if the vehicle is not detected, determining that the first video stream is a human scene. The first video stream is used as the object of scene analysis, so that the second video channel can be set to the most applicable exposure mode according to the scene analysis result.
In one possible embodiment, the vehicle detection algorithm is generated by a YOLO (You Only Look Once) deep learning algorithm. YOLO is a network for object detection; the object detection task consists of locating certain objects in an image and classifying them. Earlier methods executed this task in a multi-stage pipeline, which was slow and hard to optimize because each component had to be trained independently. YOLO performs detection with a single neural network, and its main advantage is high target detection speed.
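As a sketch of how the scene classification of step S1 might sit on top of a detector, the snippet below treats the detector as a plain callable returning labeled detections; in a real deployment it would wrap a trained YOLO network, and the labels, threshold and function name here are illustrative assumptions.

```python
# Vehicle class labels and the confidence threshold are assumptions for
# illustration; a deployed system would use its own trained classes.
VEHICLE_LABELS = {"car", "truck", "bus", "motorcycle"}

def is_vehicle_scene(frame, detector, threshold=0.5) -> bool:
    """Classify one frame of the first video stream.

    `detector` is any callable mapping a frame to a list of
    (label, confidence) pairs, e.g. a wrapper around a YOLO model.
    Returns True for a human-vehicle scene, False for a human scene.
    """
    return any(label in VEHICLE_LABELS and confidence >= threshold
               for label, confidence in detector(frame))
```

Keeping the detector behind a callable interface lets the YOLO model be swapped for another detector without touching the scene-classification logic.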
In one possible embodiment, the first video channel is in a long exposure mode. Keeping the first video channel in long exposure mode yields higher image brightness, which facilitates scene analysis of the first video stream.
In one possible embodiment, the step of setting the second video channel to the alternating long and short exposure mode includes:
S2A1: set the second video channel to the alternating long and short exposure mode, and record the exposure type of each frame of image.
In one possible embodiment, the step of setting the second video channel to the long exposure mode includes:
S2B1: set the second video channel to the long exposure mode, and record the exposure type of each frame of image.
After the second video stream is acquired, the long-exposure and short-exposure images must be analyzed by different algorithms. Recording the exposure type of each frame in its frame information therefore allows the two kinds of frames in the second video stream to be distinguished correctly during analysis.
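The per-frame exposure bookkeeping described above can be sketched as follows; the `Frame` record and function name are illustrative assumptions, not the patent's data structures.

```python
from dataclasses import dataclass
from itertools import cycle, islice

@dataclass
class Frame:
    index: int
    exposure: str  # "long" or "short", recorded in the frame information

def tag_alternating(num_frames: int, start: str = "short") -> list:
    """Record the exposure type of each frame in alternating mode.

    The embodiment notes the second channel's initial exposure is short,
    so the alternation starts on a short frame by default.
    """
    order = ("short", "long") if start == "short" else ("long", "short")
    return [Frame(i, exp)
            for i, exp in enumerate(islice(cycle(order), num_frames))]
```

In long exposure mode the same record would simply carry `"long"` for every frame, so downstream analysis never needs to know which mode was active.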
In a possible implementation, after the step of acquiring the second video stream through the second video channel, the method further includes:
and S4A, analyzing the short-exposure type image in the second video stream by adopting a license plate detection algorithm.
And S4B, analyzing the image with the long exposure type in the second video stream by adopting a face recognition algorithm.
The short-exposure image has high picture brightness and is suitable for the analysis of human scenes, so the short-exposure image is subjected to a face recognition algorithm; the long exposure image has low picture brightness and is suitable for analyzing the license plate, so that the long exposure image is subjected to a license plate detection algorithm.
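Routing frames of the second video stream to the two algorithms by their recorded exposure type could look like the sketch below; the callables and frame representation are illustrative assumptions.

```python
def dispatch_by_exposure(frames, plate_detector, face_recognizer):
    """Steps S4A/S4B: route each frame by its recorded exposure type.

    Short-exposure frames (clear license plates) go to license plate
    detection; long-exposure frames (bright picture) go to face recognition.
    Each frame is a dict carrying the exposure type recorded earlier.
    """
    results = []
    for frame in frames:
        if frame["exposure"] == "short":
            results.append(("plate", plate_detector(frame)))
        else:
            results.append(("face", face_recognizer(frame)))
    return results
```

Because the exposure type travels with each frame, the dispatcher needs no knowledge of which exposure mode the channel was in when the frame was captured.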
In one possible embodiment, the initial exposure mode of the second video channel is a short exposure.
As shown in fig. 3, an embodiment of the present invention further provides a multiple exposure apparatus, including:
a judging module 1, configured to acquire a first video stream through a first video channel and perform scene analysis on the first video stream;
a human-vehicle scene module 2, configured to set the second video channel to an alternating long and short exposure mode when the scene is a human-vehicle scene;
a human scene module 3, configured to set the second video channel to a long exposure mode when the scene is a human scene;
an acquisition module 4, configured to acquire a second video stream through a second video channel.
The embodiment of the present invention further provides an electronic device, which includes a memory and a processor, where the memory stores a computer program that can be run on the processor, and the processor implements the steps of the method provided in the above embodiment when executing the computer program.
Embodiments of the present invention further provide a computer-readable storage medium, where a machine executable instruction is stored in the computer-readable storage medium, and when the machine executable instruction is called and executed by a processor, the machine executable instruction causes the processor to execute the method provided by the foregoing embodiments.
The multiple exposure device, the electronic device and the computer-readable storage medium provided by the embodiments of the invention have the same technical features as the multiple exposure method provided by the embodiments of the invention, and therefore solve the same technical problems and achieve the same technical effects.
The apparatus provided by the embodiment of the present invention may be specific hardware on a device, or software or firmware installed on a device. The apparatus has the same implementation principle and technical effects as the method embodiments; for brevity, where the apparatus embodiments are silent, reference may be made to the corresponding content in the method embodiments. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and units described above may refer to the corresponding processes in the method embodiments and are not repeated here.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
For another example, the division of the unit is only one division of logical functions, and there may be other divisions in actual implementation, and for another example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided by the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are only specific embodiments of the present invention, used to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may, within the technical scope of the present disclosure, still modify or easily conceive changes to the technical solutions described in the foregoing embodiments, or substitute equivalents for some of their technical features; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments, and are intended to be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A multiple exposure method applied to a monitoring apparatus, the method comprising:
acquiring a first video stream through a first video channel, and performing scene analysis on the first video stream;
if the scene is a human-vehicle scene, setting a second video channel to an alternating long and short exposure mode;
if the scene is a human scene, setting the second video channel to a long exposure mode;
acquiring a second video stream through the second video channel.
2. The multiple exposure method according to claim 1, wherein the step of acquiring the first video stream through the first video channel and performing scene analysis on the first video stream comprises:
judging whether a vehicle exists in the first video stream in real time through a vehicle detection algorithm;
if yes, the scene is a human-vehicle scene;
if not, the scene is a human scene.
3. The multiple exposure method of claim 2, wherein the vehicle detection algorithm is generated by a YOLO deep learning algorithm.
4. The multiple exposure method of claim 1, wherein the first video channel is in a long exposure mode.
5. The multiple exposure method according to claim 1, wherein the step of setting the second video channel to the alternating long and short exposure mode comprises:
setting the second video channel to the alternating long and short exposure mode, and recording the exposure type of each frame of image;
the step of setting the second video channel to the long exposure mode includes:
the second video channel is set to a long exposure mode and the exposure type of each frame of image is recorded.
6. The multiple exposure method of claim 5, wherein the step of acquiring the second video stream via the second video channel is followed by the step of:
analyzing the short-exposure type image in the second video stream by adopting a license plate detection algorithm;
and analyzing the long-exposure type image in the second video stream by adopting a face recognition algorithm.
7. The multiple exposure method according to claim 1, wherein the initial exposure mode of the second video channel is short exposure.
8. A multiple exposure apparatus, comprising:
a judging module, configured to acquire a first video stream through a first video channel and perform scene analysis on the first video stream;
a human-vehicle scene module, configured to set the second video channel to an alternating long and short exposure mode when the scene is a human-vehicle scene;
a human scene module, configured to set the second video channel to a long exposure mode when the scene is a human scene;
an acquisition module, configured to acquire a second video stream through a second video channel.
9. An electronic device comprising a memory and a processor, wherein the memory stores a computer program operable on the processor, and wherein the processor implements the steps of the method of any of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium having stored thereon machine executable instructions which, when invoked and executed by a processor, cause the processor to execute the method of any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111488986.8A CN114173064B (en) | 2021-12-07 | 2021-12-07 | Multiple exposure method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111488986.8A CN114173064B (en) | 2021-12-07 | 2021-12-07 | Multiple exposure method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114173064A true CN114173064A (en) | 2022-03-11 |
CN114173064B CN114173064B (en) | 2024-04-09 |
Family
ID=80484167
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111488986.8A Active CN114173064B (en) | 2021-12-07 | 2021-12-07 | Multiple exposure method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114173064B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103213540A (en) * | 2012-01-18 | 2013-07-24 | 富士重工业株式会社 | Vehicle driving environment recognition apparatus |
CN104134352A (en) * | 2014-08-15 | 2014-11-05 | 青岛比特信息技术有限公司 | Video vehicle characteristic detection system and detection method based on combination of long exposure and short exposure |
JP2015204488A (en) * | 2014-04-11 | 2015-11-16 | ハンファテクウィン株式会社Hanwha Techwin Co.,Ltd. | Motion detection apparatus and motion detection method |
US20150334283A1 (en) * | 2007-03-05 | 2015-11-19 | Fotonation Limited | Tone Mapping For Low-Light Video Frame Enhancement |
DE102014209863A1 (en) * | 2014-05-23 | 2015-11-26 | Robert Bosch Gmbh | Method and device for operating a stereo camera for a vehicle and stereo camera for a vehicle |
CN105450936A (en) * | 2014-05-30 | 2016-03-30 | 杭州海康威视数字技术股份有限公司 | Method and device for intelligently adjusting camera during automatic exposure |
CN106254782A (en) * | 2016-09-28 | 2016-12-21 | 北京旷视科技有限公司 | Image processing method and device and camera |
CN108156390A (en) * | 2016-12-06 | 2018-06-12 | 宝利通公司 | For providing the system and method for image and video with high dynamic range |
CN109068068A (en) * | 2018-10-23 | 2018-12-21 | 天津天地伟业信息***集成有限公司 | For the exposure method and device of traffic scene |
CN111447371A (en) * | 2020-03-12 | 2020-07-24 | 努比亚技术有限公司 | Automatic exposure control method, terminal and computer readable storage medium |
CN112738414A (en) * | 2021-04-06 | 2021-04-30 | 荣耀终端有限公司 | Photographing method, electronic device and storage medium |
- 2021-12-07: application CN202111488986.8A granted as patent CN114173064B (Active)
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150334283A1 (en) * | 2007-03-05 | 2015-11-19 | Fotonation Limited | Tone Mapping For Low-Light Video Frame Enhancement
CN103213540A (en) * | 2012-01-18 | 2013-07-24 | Fuji Heavy Industries Ltd. | Vehicle driving environment recognition apparatus
JP2015204488A (en) * | 2014-04-11 | 2015-11-16 | Hanwha Techwin Co., Ltd. | Motion detection apparatus and motion detection method
DE102014209863A1 (en) * | 2014-05-23 | 2015-11-26 | Robert Bosch Gmbh | Method and device for operating a stereo camera for a vehicle and stereo camera for a vehicle
CN105450936A (en) * | 2014-05-30 | 2016-03-30 | Hangzhou Hikvision Digital Technology Co., Ltd. | Method and device for intelligently adjusting camera during automatic exposure
CN104134352A (en) * | 2014-08-15 | 2014-11-05 | Qingdao Bit Information Technology Co., Ltd. | Video vehicle characteristic detection system and detection method based on combination of long exposure and short exposure
CN106254782A (en) * | 2016-09-28 | 2016-12-21 | Beijing Megvii Technology Co., Ltd. | Image processing method and device and camera
CN108156390A (en) * | 2016-12-06 | 2018-06-12 | Polycom, Inc. | System and method for providing images and video with high dynamic range
CN109068068A (en) * | 2018-10-23 | 2018-12-21 | Tianjin Tiandy Information *** Integration Co., Ltd. | Exposure method and device for traffic scenes
CN111447371A (en) * | 2020-03-12 | 2020-07-24 | Nubia Technology Co., Ltd. | Automatic exposure control method, terminal and computer readable storage medium
CN112738414A (en) * | 2021-04-06 | 2021-04-30 | Honor Device Co., Ltd. | Photographing method, electronic device and storage medium
Non-Patent Citations (1)
Title |
---|
WANG Guangxia; FENG Huajun; XU Zhihai; LI Qi; CHEN Yueting: "Fusion method for low-light image pairs based on block matching", Acta Photonica Sinica, no. 04, 2 February 2019 (2019-02-02) * |
Also Published As
Publication number | Publication date |
---|---|
CN114173064B (en) | 2024-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8457408B2 (en) | Method and system of identifying one or more features represented in a plurality of sensor acquired data sets | |
CN107404628B (en) | Image processing apparatus and method, and monitoring system | |
CN105144705B (en) | Object monitoring system, object monitoring method, and program for extracting object to be monitored | |
EP1560161B1 (en) | Method and system for searching for events in video surveillance | |
US11479260B1 (en) | Systems and methods for proximate event capture | |
US10467742B2 (en) | Method and image capturing device for detecting fog in a scene | |
WO2023124387A1 (en) | Photographing apparatus obstruction detection method and apparatus, electronic device, storage medium, and computer program product | |
US20160210759A1 (en) | System and method of detecting moving objects | |
JP4999794B2 (en) | Still region detection method and apparatus, program and recording medium | |
CN101411190B (en) | Spurious motion filter | |
CN113869137A (en) | Event detection method and device, terminal equipment and storage medium | |
CN110913209B (en) | Camera shielding detection method and device, electronic equipment and monitoring system | |
KR20160037480A (en) | Method for establishing region of interest in intelligent video analytics and video analysis apparatus using the same | |
KR20090044957A (en) | Theft and left baggage survellance system and meothod thereof | |
US20200394802A1 (en) | Real-time object detection method for multiple camera images using frame segmentation and intelligent detection pool | |
CN111079612A (en) | Method and device for monitoring retention of invading object in power transmission line channel | |
US10916016B2 (en) | Image processing apparatus and method and monitoring system | |
US20140147011A1 (en) | Object removal detection using 3-d depth information | |
CN114173064A (en) | Multiple exposure method and device and electronic equipment | |
CN110647858B (en) | Video occlusion judgment method and device and computer storage medium | |
CN114973135A (en) | Method, system and electronic device for identifying sleeping on duty in sequential video based on head-shoulder detection | |
CN115861624B (en) | Method, device, equipment and storage medium for detecting occlusion of camera | |
KR20150041433A (en) | Method and Apparatus for Detecting Object of Event | |
CN111062337B (en) | People stream direction detection method and device, storage medium and electronic equipment | |
CN115601606B (en) | Store state detection method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||