WO2020029921A1 - Monitoring method and device

Monitoring method and device

Info

Publication number: WO2020029921A1
Authority: WIPO (PCT)
Prior art keywords: camera, images, main camera, sub, monitoring
Application number: PCT/CN2019/099275
Other languages: English (en), French (fr)
Inventors: 李瑞华, 胡红旗, 赖昌材, 王庆平, 陈晓雷
Original Assignee: Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd.
Priority to EP19848312.5A (EP3829163A4)
Publication of WO2020029921A1
Priority to US17/168,781 (US11790504B2)

Classifications

    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
    • G08G1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • H04N7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • H04N23/69 Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N23/951 Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio
    • G06T2207/30168 Image quality inspection
    • G06T2207/30232 Surveillance
    • G06T2207/30236 Traffic on road, railway or crossing

Definitions

  • the present application relates to the field of video technology, and in particular, to a monitoring method and device.
  • The monitoring and capture function is an important application of current video surveillance systems. It uses IPC (Internet Protocol Camera) devices to image vehicles and people in the field of view.
  • Targets of interest in the video (which can include motor vehicles, non-motor vehicles, pedestrians, and any other objects that need to be monitored) are detected; the motion trajectory of each target is tracked in the video, and each target is captured from the time it enters the monitoring area until it leaves, with the captures displayed on the video screen.
  • Monitoring snapshot systems can be divided into portrait snapshot systems, vehicle snapshot systems, and mixed motor-vehicle / non-motor-vehicle / pedestrian snapshot systems.
  • The portrait capture system, also known as the portrait checkpoint, is mainly installed on sidewalks, crosswalks, and indoor passages. It focuses on pedestrians, detects and captures faces and human bodies, and outputs face and body snapshots for intelligent applications such as face recognition and human attribute recognition.
  • The vehicle capture system focuses on motor vehicles; it detects and captures vehicles, records vehicle snapshots, and supports intelligent applications such as license plate recognition and vehicle type identification (e.g. vehicle model / body color).
  • The electronic police and checkpoint capture systems are both vehicle capture systems. The electronic police is mainly installed at urban road intersections, while the checkpoint is common on expressways, national highways, and urban arterial roads and is mainly used to capture speeding behavior.
  • The motor-vehicle / non-motor-vehicle / pedestrian capture system detects, classifies, and tracks motor vehicles, non-motor vehicles, and pedestrians in mixed-traffic scenarios and outputs snapshots of each type of target. It is mainly installed at intersections, urban villages, and other key public security areas for public security monitoring, achieving comprehensive control of various targets.
  • The existing IPC snapshot system evolved from the video recording system: the same device must implement both video recording and target snapshot functions, and a snapshot is obtained by cropping a frame out of the recorded video. The snapshot and the video stream therefore come from the same set of imaging elements (lens, image sensor, etc.), and all imaging parameters, such as exposure time and contrast, are identical.
  • A prior art technical solution is shown in FIG. 1.
  • The front-end video capture module collects image data at a fixed frame rate (such as 30 fps). Face detection, tracking, and filtering are performed on the collected image data, and an optimal face snapshot is extracted from a certain frame. At the same time, the collected image data is compressed into a video stream. Snapshot pictures and video streams are transmitted to the back end for storage via the network or other means, where further processing such as feature extraction, target recognition, and retrieval is performed on the snapshots.
  • Because the imaging parameters of the snapshot are identical to those of the video stream, the imaging of the target region of interest has low resolution. Under unsatisfactory imaging conditions such as low illumination, long target distance, and complex lighting environments such as backlighting / wide dynamic range, the imaging quality of the target area of interest is poor, which degrades the performance indicators of subsequent intelligent processing (face recognition, license plate recognition, vehicle model recognition, etc.).
  • The embodiments of the present application provide a monitoring method and device, which can solve the problems of blurred captured images, low image brightness, high noise, and small captured target size in monitoring scenarios such as low illumination, long target distance, and backlighting / wide dynamic range, thereby improving the performance indicators of subsequent intelligent processing (face recognition, license plate recognition, vehicle model recognition, etc.).
  • In a first aspect, an embodiment of the present application provides a surveillance camera module.
  • the module includes: a main camera and N sub cameras, where N is an integer greater than 1, and the main camera and N sub cameras are used to collect images.
  • The frame rate at which any one of the secondary cameras collects images is lower than the frame rate at which the main camera collects images; the monitoring areas of the N secondary cameras respectively cover N different areas within the monitoring area of the main camera; and the focal length of any secondary camera is greater than the focal length of the main camera.
  • In a possible design, the FOV of the main camera is greater than 60°, the focal length of the main camera is in the range of [4, 8] mm, and the aperture value is in the range of [1.4, 2.0]. An FOV greater than 60° ensures a sufficient monitoring field of view, while the focal length and aperture are configured so that, when capturing video data, the main camera can focus on the central or core area at short distance and image it clearly.
  • In a possible design, the focal length of at least one secondary camera is in the range of [8, 15] mm. Configuring such a "medium / short-focus secondary camera" to collect images supplements the monitoring capability for the central region of the main camera's monitoring area.
  • In a possible design, the focal length of at least one secondary camera is in the range of [15, 25] mm. Configuring such a "telephoto secondary camera" to collect images supplements the monitoring capability for distant areas within the main camera's monitoring area.
  • In a possible design, the focal lengths of at least three secondary cameras are in [12, 18] mm, and the focal lengths of another four secondary cameras are in [21, 25] mm.
  • In a possible design, the aperture value of the main camera is in the range of [1.4, 2.0], the aperture value of at least one secondary camera is in the range of [0.8, 1.6], and the aperture value of at least one secondary camera is smaller than that of the main camera. A secondary camera with a large aperture admits more light, which makes distant imaging clearer, improves snapshot picture quality, and facilitates target recognition.
  • In a possible design, N = 4, and the focal lengths of the four secondary cameras are in the range of [18, 21] mm. This design ensures that clear images can be collected in the monitoring area within 25 m; compared with main camera monitoring alone (usually effective within 10 m), it expands the high-quality monitoring range.
  • In a possible design, N = 7, the focal lengths of three secondary cameras are in [12, 18] mm, and the focal lengths of the other four secondary cameras are in [21, 25] mm. This design ensures that clear images can be collected in the monitoring area within 35 m; compared with main camera monitoring alone (usually within 10 m), it greatly expands the high-quality monitoring range. Multiple secondary cameras can also adopt a multi-focal-length design: by combining the FOVs at different focal lengths, full coverage of the FOV monitored by the main camera is ensured. Adjacent secondary cameras are designed with overlapping fields of view, so that the overlap area can contain a complete target.
  • Video data is collected by the main camera, while captured images are collected by the multiple secondary cameras at a lower frame rate, in a manner similar to taking photos. Compared with collecting video data from all cameras, this greatly saves power.
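  • For illustration only, the "1 main + N sub" configurations above can be captured in a small data model; this is a minimal Python sketch in which the field names and exact values are assumptions mirroring the example designs, not a specification:

        from dataclasses import dataclass

        @dataclass
        class Camera:
            focal_mm: float    # equivalent focal length
            aperture_f: float  # f-number; smaller value = larger aperture
            fps: float         # capture frame rate

        # Example "1 main camera + 7 secondary cameras" design:
        # 3 medium/short-focus subs in [12, 18] mm, 4 telephoto subs in [21, 25] mm.
        main = Camera(focal_mm=6.0, aperture_f=1.6, fps=30.0)  # video, >= 25 fps
        subs = ([Camera(15.0, 1.2, 2.0) for _ in range(3)] +
                [Camera(24.0, 1.2, 2.0) for _ in range(4)])

        # Constraints stated by the module design:
        assert all(s.focal_mm > main.focal_mm for s in subs)  # subs are "telephoto"
        assert all(s.fps < main.fps for s in subs)            # subs shoot slower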
  • In a second aspect, an embodiment of the present application provides a monitoring method.
  • The method is applied to a monitoring camera module. The monitoring camera module includes a main camera and N secondary cameras, where the monitoring areas of the N secondary cameras respectively cover N different areas within the monitoring area of the main camera; the focal length of any secondary camera is greater than the focal length of the main camera; and N is an integer greater than 1.
  • The method includes: collecting images with the main camera and the N secondary cameras, where the frame rate of the images collected by any secondary camera is lower than the frame rate of the images collected by the main camera; selecting M images containing the target object from the images collected by the main camera and the N secondary cameras, where M is an integer greater than 1; cropping the M images according to the target object to obtain M small images containing the target object; performing quality evaluation on the M small images; and displaying at least one of the M small images with the best quality evaluation result.
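  • Purely as an illustration of the claimed sequence of steps (collect, select, crop, evaluate, display), the method maps onto a small loop like the sketch below; detect, crop, score, and display are assumed helper functions, not parts of the patent:

        def monitor_step(main_frames, sub_frames, detect, crop, score, display, top_x=1):
            # Select the M images containing the target object from the
            # images collected by the main camera and the N secondary cameras.
            selected = [(img, bbox)
                        for img in list(main_frames) + list(sub_frames)
                        for _, bbox in detect(img)]        # detect -> [(id, bbox)]
            # Crop the M images to obtain M small images containing the target.
            smalls = [crop(img, bbox) for img, bbox in selected]
            # Quality-evaluate the small images and display the best one(s).
            for small in sorted(smalls, key=score, reverse=True)[:top_x]:
                display(small)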
  • In a third aspect, an embodiment of the present application provides a monitoring device. The device is applied to a monitoring camera module. The monitoring camera module includes a main camera and N secondary cameras, where the monitoring areas of the N secondary cameras respectively cover N different areas within the monitoring area of the main camera; the focal length of any secondary camera is greater than the focal length of the main camera; and N is an integer greater than 1. The device includes:
  • An acquisition module for acquiring images using a main camera and N sub cameras, wherein a frame rate of an image acquired by any one of the sub cameras is smaller than a frame rate of an image acquired by the main camera (generally not lower than 25 fps);
  • a selection module for selecting M images containing the target object from the images collected by the main camera and the N sub cameras, where M is an integer greater than 1;
  • a cropping module, configured to crop the M images according to the target object to obtain M small images containing the target object;
  • a quality evaluation module, configured to perform quality evaluation on the M small images;
  • a display module, configured to display at least one of the M small images with the best quality evaluation result.
  • the resolution of the primary camera and the secondary camera is not less than 2 million pixels.
  • the main camera and the sub camera are both fixed focus lenses; or the main camera is a zoom lens and the sub camera is a fixed focus lens.
  • In a possible design, the parameter settings of the secondary cameras ensure that a face image collected within a large monitoring distance (such as 20 meters) is larger than 50 × 50 pixels, achieving 4K-quality snapshots.
  • For the same target, the actual image captured by the main camera is small, while the actual image captured by a "telephoto" secondary camera is large. Because the same target is imaged at different sizes, the mapping relationship between the images captured by the two cameras, specifically the correspondence between positions in the images, must be determined. The mapping relationship can be confirmed by calibration.
  • Demosaicing, 3A processing (AE, auto exposure; AWB, auto white balance; AF, auto focus), denoising, RGB-to-YUV conversion, and other image processing operations can be performed on the RGB data to obtain YUV image data.
  • In a possible design, the aperture value of the main camera is in the range of [1.4, 2.0], the aperture value of at least one secondary camera is in the range of [0.8, 1.6], and the aperture value of at least one secondary camera is smaller than that of the main camera. A secondary camera with a large aperture admits more light, which makes distant imaging clearer, improves snapshot picture quality, and facilitates target recognition.
  • In a possible design, selecting the M images containing the target object from the images collected by the main camera and the N secondary cameras includes: using the main camera detection thread to detect M1 images containing the target object among the images collected by the main camera and saving the M1 images from the cache, and using the main camera detection thread to detect M2 images containing the target object among the images collected by the N secondary cameras and saving the M2 images from the cache; or, using the main camera detection thread to detect the M1 images containing the target object among the images collected by the main camera and saving them from the cache, and using the secondary camera detection thread to detect the M2 images containing the target object among the images collected by the N secondary cameras and saving them from the cache. Then, according to the image mapping relationship between the main camera and each secondary camera, and according to the timestamps and target positions of the images captured by the main camera and each secondary camera, the M1 images and the M2 images are identified as images containing the target object, where M = M1 + M2.
  • In one scenario, the main camera detection thread uniformly detects targets and drives the saving of images collected by both the main camera and the secondary cameras, which helps the monitoring system assign target IDs uniformly; in other scenarios, the saving of images captured by the secondary cameras is driven by the secondary camera detection thread, which reduces the thread burden on the main camera.
  • The monitoring system detects images against all preset target types, and one or more targets may be detected; the same image may therefore contain multiple targets. Accordingly, when detecting targets, different targets need to be assigned distinct ID numbers.
  • Based on the calibrated image mapping relationship between the main camera and each secondary camera (spatial correspondence), together with the timestamp recorded by the monitoring system (time dimension), the target position (spatial dimension), and other information, correlation matching is performed so that the same object appearing in different saved images is matched as the same target.
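  • A minimal sketch of that correlation matching (thresholds and field names are illustrative assumptions, not values from the patent):

        from math import dist

        def same_target(main_det, sub_det, map_to_main, t_tol=0.05, d_tol=30.0):
            # Each detection carries 'ts' (capture timestamp, seconds) and 'pos'
            # (pixel coordinates); map_to_main is the calibrated mapping from
            # sub-camera pixels into main-camera pixels.
            if abs(main_det["ts"] - sub_det["ts"]) > t_tol:   # time dimension
                return False
            projected = map_to_main(sub_det["pos"])           # spatial mapping
            return dist(main_det["pos"], projected) <= d_tol  # spatial dimension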
  • For any target, the corresponding preferred small image can thus be found; any object in the surveillance field of view can be presented as a high-quality image, greatly improving the quality and capability of monitoring.
  • In a possible design of the second aspect or the third aspect, based on the quality evaluation results of the images collected by the main camera and the secondary cameras, ideal shooting parameters of each camera are calculated and generated, including exposure time, gain, denoising parameters, etc., and are fed back to adjust the shooting parameters of the main camera or the secondary cameras.
  • In a fourth aspect, an embodiment of the present application provides a device. The device includes multiple cameras, a display screen, a processor, a memory, and a bus; the cameras, the display screen, the processor, and the memory are connected through the bus.
  • The cameras are used to collect video;
  • the display screen is used to display video or images;
  • the memory is used to store data and program instructions;
  • the processor is used to call the data and program instructions to complete, in cooperation with the cameras and the display screen, the method provided by any of the above aspects or any possible design thereof.
  • the main camera is used to collect monitoring video, and a plurality of "telephoto" sub cameras are used to collect captured images.
  • the sub camera covers the monitoring area of the main camera.
  • the video collected by the main camera meets the needs of human eyes, and the captured images collected by the sub camera are used for algorithm identification.
  • Multiple secondary cameras can use large apertures to improve sensitivity in low-light environments. An optional multi-focal-length overlapping combination improves the size and clarity of distant monitoring targets, so that 4K-level imaging can be achieved within the main camera's monitoring field of view. Region-by-region imaging of the secondary cameras' monitoring areas improves wide dynamic range capability. This better addresses the pain points of low illumination, small target size, and wide dynamic range in monitoring application scenarios.
  • FIG. 1 is a schematic diagram of a monitoring technology solution in the prior art
  • FIG. 2 is a schematic diagram of a signal flow of a monitoring system according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a monitoring camera module according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a hall entrance and exit monitoring environment according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a monitoring system according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a monitoring method according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a monitoring device according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a monitoring device according to an embodiment of the present application.
  • the monitoring system based on the embodiment of the present invention can be shown in FIG. 2.
  • The image is captured by the shooting module (CCD / CMOS image sensor imaging), converted into digital signals by analog-to-digital conversion, and the digital signals undergo signal processing and intelligent analysis to obtain the processed data.
  • the processed data undergoes information extraction, encoding, compression, and transmission to the Web display via the network.
  • the decoded data is then presented to the user.
  • The shooting module can also be understood as a shooting system, shooting component, camera unit, camera, imaging device, imaging unit, and the like; it should be understood that these are similar expressions commonly used by those skilled in the art and will not be enumerated or limited here.
  • An implementation form of the monitoring camera module of the present invention may be as shown in FIG. 3. It is composed of a main camera unit (also called the main camera video component, or main camera) and a secondary camera unit; the secondary camera unit is composed of multiple secondary camera capture components (also called secondary cameras).
  • the main camera can use conventional surveillance video camera components to capture video in the monitoring area;
  • The multiple secondary cameras can use high-definition camera components designed with different focal lengths and large apertures to capture images / snapshots of their respective monitoring areas.
  • The field of view covered by the multiple secondary cameras covers the areas where the main camera's shooting quality is low but which are of interest to the user; therefore, the secondary cameras need to be arranged according to certain rules, ensuring that the field of view of the monitoring area of interest to the user (such as the black and gray parts of the field of view in the monitoring area) is completely covered without wasting excessive overlap between secondary cameras.
  • By adding multiple secondary cameras, the invention clearly captures images in the areas that the main camera cannot capture clearly, so that the monitoring area of interest to the user obtains good monitoring performance.
  • The captured image in the main field of view may be the corresponding image in each image frame, or part of an image frame, in the video stream; the captured image in a covered field of view may be a snapshot of a target in the coverage area of a secondary shooting component.
  • the same target can be identified based on the time and space dimensions, and the ID can be unified to identify several images containing the same target.
  • The images are further cropped to obtain small images that present the target more prominently; image quality evaluation methods in the prior art are then used to evaluate these cropped small images, and the preferred small images are selected from them.
  • The rationality of the imaging parameters of the main camera and each secondary camera can be evaluated based on the quality evaluation results; updated imaging parameters are fed back to the main camera and the corresponding secondary camera, and the shooting parameters are adjusted to better adapt to the current environment and thereby capture better-quality video and snapshots.
  • the application scenarios of the present invention include but are not limited to the following two monitoring systems or environments:
  • For a distant target face, the imaged size is small, so images from prior art shooting schemes cannot be clearly recognized. When the dynamic range of the picture is large, unified imaging parameters cause the face target area to be imaged dark and unrecognizable.
  • In this scenario, a design scheme of "1 main camera + multiple secondary cameras" can be adopted. The multiple secondary cameras solve the problem of small target size at a distance; they can also image the scene region by region, which further solves the problem of the target area being imaged dark in wide dynamic range scenes.
  • The motor-vehicle / non-motor-vehicle / pedestrian capture system for public security monitoring needs to detect, track, and classify vehicles, non-motor vehicles, and pedestrians in a large field of view. Small targets at the edge of the field of view suffer serious missed detections. Because illumination across the field of view is uneven, some areas are insufficiently lit, resulting in low brightness of the collected images; and in low-light scenes such as evenings and rainy days, imaging is blurred and noisy.
  • Multiple secondary cameras can solve the problem of small target size at the edge of the field of view; by imaging different regions with multiple secondary cameras, dark target-area imaging in wide dynamic range scenes can be resolved; and the secondary cameras can adopt a large-aperture design to solve blurred, noisy imaging in low-light scenes.
  • The product form of the present invention may be a monitoring system, or may include a shooting component / system / module, and the like.
  • the structure of the monitoring system of the present invention may be shown in FIG. 6 and mainly includes:
  • The main camera unit 101 includes a main camera (main video component); a conventional surveillance video module or component may be used to collect global image data of the main camera's monitoring field of view. The typical shooting frame rate is not less than 25 fps and can be increased as needed where bandwidth and storage resources allow. The unit is mainly responsible for collecting real-time video streams of the monitoring field of view to meet the needs of human viewing.
  • The focal length of the main video component's lens can be any value within 4 mm-10 mm, and the lens can be a fixed-focus lens or a motorized zoom. Because a motorized zoom is adjustable, the lens's full focal range may extend beyond the above range, for example 2.8 mm-50 mm; it should be understood that during operation the lens is adjusted within a certain range, such as the above 4-10 mm. In general, once the system is set up and the main camera works continuously in a certain mode, it can be regarded as equivalent to a fixed-focus lens.
  • the focal length in the present application may refer to an equivalent focal length commonly used in the industry, and the parameters in the embodiments of the present invention are merely examples and are not limited.
  • The secondary camera unit 102 is composed of multiple secondary cameras (secondary camera capture components); each component can include a lens, a sensor, and a housing. The shooting frame rate is lower than that of the main camera, to save power.
  • A secondary camera may use a fixed-focus lens or a zoom lens, but the focal length of any secondary camera when collecting images must be greater than the focal length of the main camera when collecting images, so as to clearly capture images of more distant areas.
  • The multiple secondary cameras can have different focal lengths. A telephoto lens can be used, with an equivalent focal length in the range of 15 mm-30 mm, such as 18, 21, or 24 mm, ensuring that the imaged size of the target of interest meets requirements. Alternatively, an equivalent focal length in the range of 6 mm-15 mm can be used, such as 8, 10, or 12 mm. It should be understood that medium / short-focus lenses have a large field of view and monitor relatively near areas, while telephoto lenses have a small field of view and can monitor relatively distant areas.
  • the purpose is to increase the light sensitivity in low-light scenes and improve the low-light imaging effect.
  • the focal length and aperture value further determine the depth of field.
  • the number of secondary cameras is determined according to the field of view coverage of the primary camera that needs to be covered and the FOV of the secondary camera. For example, the larger the surveillance field of view that needs to be covered, the greater the number of secondary cameras.
  • The longer the distance to be monitored, the more rows of secondary cameras are needed. For example, a 10 m-20 m monitoring area requires one row of secondary cameras for coverage, such as 3 secondary cameras with 8 mm focal length; a 20 m-30 m monitoring area requires another row, such as 4 secondary cameras with 21 mm focal length.
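  • As a rough sizing illustration (not the patent's rule), the number of secondary cameras in a row can be estimated from the FOV to cover, the per-camera FOV, and the required neighbor overlap:

        import math

        def subs_per_row(total_fov_deg, sub_fov_deg, overlap_deg):
            # Each camera after the first adds (sub_fov - overlap) of new coverage.
            effective = sub_fov_deg - overlap_deg
            return max(1, math.ceil((total_fov_deg - sub_fov_deg) / effective) + 1)

        # Hypothetical: cover a 60 deg field with ~20 deg telephoto subs, >1 deg overlap.
        print(subs_per_row(60.0, 20.0, 1.0))  # -> 4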
  • the secondary camera may be called “telephoto” relative to the main camera.
  • the field of view / framing area of multiple secondary cameras can cover the area where the main camera's view is not clear but the user will be interested (requires focused monitoring).
  • This area can be understood as a supplementary monitoring area and can be freely defined by the user according to the monitoring needs.
  • The purpose is to make up for the areas within the monitoring area that the main camera cannot image clearly, and to enhance the monitoring capability over the main camera's entire monitoring field of view.
  • Multiple secondary cameras should make full use of their respective framing capabilities; ideally, the entire target area is monitored with the minimum number of secondary cameras. In actual implementations, however, small framing overlaps between secondary cameras may exist.
  • the setting of the secondary camera can ensure that the face image collected within a large range of monitoring distance (such as 20 meters) is larger than 50 * 50 pixels, reaching a 4K quality snapshot, which meets the high standards of important target recognition such as human faces. This can greatly make up for the lack of monitoring capabilities of the main camera.
  • the farther distance may be defined as a horizontal distance from the surveillance camera module between 5m and 25m.
  • The combination of the main camera and secondary cameras can be flexibly defined by the user according to specific needs, such as "1 main camera (close-range monitoring) + N secondary cameras (long-range monitoring)" or "1 main camera (close-range monitoring) + N1 secondary cameras (medium / long-range monitoring) + N2 secondary cameras (long-range monitoring)", where N, N1, and N2 are positive integers greater than or equal to 2.
  • The main camera and the secondary cameras are usually mounted facing downward. For the same target, the actual imaged sizes differ: the image captured by the main camera is small, while the image captured by a "telephoto" secondary camera is large. Because the same target is imaged at different sizes, the mapping relationship between the images collected by the two cameras, specifically the correspondence between positions in the images, must be determined. The mapping relationship can be confirmed by calibration; various existing calibration methods can be used, including feature point calibration. This mapping relationship is used to determine whether targets in images acquired by different cameras are at the same position, and then, combined with time information, whether they are the same target.
  • The mapping relationship can be set when the product leaves the factory or obtained through an algorithm, and the calibrated mapping can also be re-calibrated periodically during use, for example with an automatic calibration method based on feature point matching.
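  • For illustration, a feature-point calibration between one secondary camera and the main camera can be sketched with OpenCV; it estimates a homography mapping sub-camera pixels into main-camera coordinates over the overlapping area (one plausible realization, not the patent's mandated method):

        import cv2
        import numpy as np

        def calibrate_sub_to_main(sub_img, main_img):
            # Detect and describe feature points in both overlapping views.
            orb = cv2.ORB_create(2000)
            k1, d1 = orb.detectAndCompute(sub_img, None)
            k2, d2 = orb.detectAndCompute(main_img, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
            src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            return H  # map a sub point p (1x1x2 float32): cv2.perspectiveTransform(p, H)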
  • Multi-channel ISP processing unit 103:
  • This unit can perform demosaicing, 3A processing (AE, auto exposure; AWB, auto white balance; AF, auto focus), denoising, RGB-to-YUV conversion, and other image processing operations on the RGB data to obtain YUV image data, that is, the YUV image data corresponding to the main camera and the YUV image data corresponding to the secondary cameras.
  • the parameters of AE and AWB can be adjusted automatically according to the light and color of the image.
  • The multi-channel ISP processing unit 103 may also be connected to an external buffer to store the YUV data and intermediate processing data.
  • the multi-eye cooperative processing unit 104 includes:
  • The multi-eye target detection and tracking unit 1041 receives the YUV data produced by the multi-channel ISP processing unit 103, including the YUV image data corresponding to the main camera and to the secondary cameras, and performs target detection and tracking on these images. Targets here include, but are not limited to, motor vehicles, non-motor vehicles, pedestrians, faces, and important objects, depending on monitoring needs.
  • target detection and tracking are performed, and a target ID and a target position in each image frame are recorded to form a target track.
  • the position of the target can be represented by two-dimensional coordinate values in the image, such as pixel coordinates.
  • The detection can be performed by analyzing the images with deep learning networks.
  • target detection is performed, and the target ID and target position in each detection frame are recorded.
  • the unit may include a detection thread that detects the images collected by the main camera, that is, the main camera detection thread; it may also include a detection thread that detects the images collected by the secondary camera, that is, the secondary camera detection thread;
  • Alternatively, a single thread may detect the images collected by both the main camera and the secondary cameras, that is, a total detection thread. For example, after the main camera detection thread detects a target, it notifies the multi-channel ISP processing unit 103 to save from the cache the YUV image containing the target that corresponds to the main camera; after the secondary camera detection thread detects a target, it notifies the multi-channel ISP processing unit 103 to save from the cache the YUV image containing the target that corresponds to the secondary camera; after the total detection thread detects a target, it notifies the multi-channel ISP processing unit 103 to save from the cache the YUV images containing the target from both the main camera and the secondary cameras. It should be understood that a frame in which no target is detected can be discarded via feedback to the multi-channel ISP processing unit 103.
  • Through the multi-eye target detection and tracking unit 1041, multiple saved images are obtained; at the same time, the unit records information such as each target's ID and its position in these images.
  • The multi-eye target detection and tracking unit 1041 may further control the shooting frame rate of the main camera and the secondary cameras according to the detected target type; for example, when a car is detected, the shooting frame rate is raised, and when a pedestrian is detected, the shooting frame rate is lowered.
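  • A minimal sketch of such a frame-rate policy (class names and rates are illustrative, loosely echoing the ranges given later in this document):

        SUB_FPS_BY_TARGET = {
            "car": 8.0,         # vehicles: e.g. 3-10 fps
            "pedestrian": 2.0,  # sidewalks: e.g. 1-5 fps
        }

        def adjust_frame_rate(camera, detected_types, default_fps=1.0):
            # Raise the capture rate to suit the fastest detected target type.
            fps = max((SUB_FPS_BY_TARGET.get(t, default_fps) for t in detected_types),
                      default=default_fps)
            camera.set_fps(fps)  # assumed camera-control interface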
  • Multi-eye snapshot optimization unit 1042: for the images saved by the previous unit, correlation matching is performed according to the calibrated image mapping relationship between the main camera and each secondary camera (spatial correspondence), the timestamp recorded by the monitoring system (time dimension), and the target position (spatial dimension). The same object in the saved images is matched as the same target, and the ID value is unified (subject to the ID number in the images collected by the main camera). Optionally, when two or more objects appear at the same time and position, feature matching computation can additionally be used to match the targets; new ID values are reassigned for distinct targets.
  • Such a matched target is treated as a target object within the main camera's monitoring area.
  • the M images need to be cropped according to the target object to obtain M small images containing the target object.
  • A small image that prominently displays the target's features can be cropped out. The shape of the small image can be a square, a circle, or the outline of the target object. Cropping means separating a part of a picture or image from the original; it can also be understood as a cutout. Available methods include: specific-area cropping; direct selection with the lasso, marquee, or eraser tools; quick mask; pen selection after drawing a path; the extract filter; plug-in filter extraction; and channel, calculation, and apply-image methods. The image cropped by the present invention may take the outline of the target object, a square, a circle, or other forms.
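  • A minimal rectangular-crop sketch (the bounding box is assumed to come from the detector, and the margin value is illustrative):

        def crop_small_image(img, bbox, margin=0.2):
            # bbox = (x, y, w, h) in pixels; img is a numpy-like H x W (x C) array.
            x, y, w, h = bbox
            mx, my = int(w * margin), int(h * margin)
            x0, y0 = max(0, x - mx), max(0, y - my)
            x1, y1 = min(img.shape[1], x + w + mx), min(img.shape[0], y + h + my)
            return img[y0:y1, x0:x1].copy()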
  • A deep-learning-based quality evaluation algorithm can be adopted to obtain quality evaluation results for the target object. The top X small images by quality evaluation result are produced; for example, the single small image with the best quality evaluation result is selected for the target object. The finally selected small image can come from a capture by the main camera or by a secondary camera, as determined by the quality evaluation result or algorithm.
  • After processing by the multi-eye snapshot optimization unit 1042, X small images of satisfactory quality are obtained for the target object; for further identification and verification by the downstream monitoring system, the unit also retrieves, from the corresponding M images, the original images corresponding to the X small images. The unit finds the preferred small images, in particular the optimal small image, for any target in a similar way. Specifically, when results are sent to front-end recognition or a back-end server, the evaluation results can be packaged together with the top X small images, and the original images corresponding to the X small images can also be packaged. Therefore, for any target, the unit can find its corresponding preferred small image; any object in the surveillance field of view can be presented as a high-quality image, greatly improving the quality and capability of monitoring.
  • The monitoring system structure of the present invention may further include an imaging parameter generation and feedback control unit 1043. Based on the quality evaluation results of the multi-eye snapshot optimization unit 1042, the imaging quality of the images collected by the main camera and the multiple secondary cameras is evaluated, including image brightness, contrast, blur degree, noise level, etc. Based on the evaluation results and the desired imaging effect, ideal shooting parameters for each camera can be generated through calculation, including exposure time, gain, denoising parameters, etc. The multi-channel ISP processing unit 103 feeds these imaging parameters back to the main camera and each secondary camera and adjusts each camera's current shooting parameters toward higher imaging quality; this feedback can be continuous.
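  • As an illustration of that feedback loop, a simplified proportional exposure update might look as follows (the target brightness and exposure bounds are assumptions, not values from the patent):

        def update_exposure(camera, measured_brightness, target=0.45, gain=0.5):
            # measured_brightness in [0, 1], from quality evaluation of recent frames.
            error = target - measured_brightness
            new_exposure = camera.exposure_us * (1.0 + gain * error)
            camera.exposure_us = min(max(new_exposure, 50.0), 30000.0)  # sensor limits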
  • Video / image encoding unit 105: on the one hand, this unit can video-encode the ISP-processed images collected by the main camera (i.e. the video data) and image-encode the ISP-processed image data collected by each secondary camera; on the other hand, it can also image-encode the preferred small images of the target object, as well as the original images of those preferred small images.
  • the encoded data can be transmitted via the network and other methods.
  • the video encoding format can select mainstream formats such as H.264, H.265, and AVS, and the image encoding can adopt encoding formats such as JPEG and PNG.
  • the main display area of the video web display interface displays the video
  • the surrounding area of the video may display some captured images of the target, especially the preferred captured image of the target.
  • The display interface shows the real-time monitoring picture, such as the monitoring preview video stream collected by the main camera in real time; optionally, some of the images captured in real time by the secondary cameras can also be displayed. In the displayed real-time video stream of the main camera, the monitoring system detects and tracks targets, for example displaying a prompt box around any target object on the screen; at the same time, a preferred small image of the target object may be displayed around the video display area and, if verification is required, the original image corresponding to the target object is displayed. In some scenarios, for example when no one is watching or there are too many monitored targets, some preferred images may not be displayed and are shown only when the data needs to be retrieved.
  • The main camera 101 collects the real-time video stream at the first frame rate, and the stream is processed by the multi-channel ISP processing unit 103. The processed data can be video-encoded by the video / image encoding unit 105, transmitted through the transmission unit 106, and decoded and displayed by the display unit 107 as the video stream monitored by the main camera. The processed data is also subjected to target detection by the multi-eye target detection and tracking unit 1041; when a target (an identification object preset by the user, such as a car or a person) is detected in any image, the detection is fed back to the multi-channel ISP processing unit 103 and the image is saved from the system cache.
  • Each secondary camera 102 collects real-time image data at a second frame rate (lower than the first frame rate); the data is processed by the multi-channel ISP processing unit 103, and the processed image data is subjected to target detection by the multi-eye target detection and tracking unit 1041. When a target (an identification object preset by the user, such as a car or a person) is detected in any image, the image is saved from the system cache. Suppose that, of the M21 images collected by all the secondary cameras 102, M22 images containing targets (any one or several of the preset recognition objects) are saved (already in YUV format).
  • The YUV image data corresponding to the secondary cameras may also be video- or image-encoded by the video / image encoding unit 105, transmitted through the transmission unit 106, and decoded and displayed by the display unit 107 as the image data stream monitored by the secondary cameras. It should be understood that, to save storage and display resources, these data may not be transmitted to the display unit 107, or may be transmitted but not displayed.
  • For the saved images, the multi-eye snapshot optimization unit 1042 can perform correlation matching according to the calibrated image mapping relationship between the main camera and each secondary camera (spatial correspondence), the timestamp recorded by the monitoring system (time dimension), and the target position (spatial dimension); the same object in the saved images is matched as the same target, and the ID value is unified to identify which targets in the saved images are the same target.
  • Suppose that, of the M11 images collected by the main camera, M12 images containing targets are saved, among which M13 images contain the target object (which can be defined according to the user's needs), where M11 ≥ M12 ≥ M13; and that, among the above M22 images, M23 images contain the target object, where M21 ≥ M22 ≥ M23. That is, there are in total M13 + M23 images containing the target object (which can be understood as original images of the target object).
  • the multi-eye snapshot image optimization unit 1042 crops the M13 images according to the target object to obtain M13 small images including the target object, which is used to present the target object more intuitively and efficiently.
  • the multi-eye snapshot image optimization unit 1042 performs image cropping on the M23 images according to the target object to obtain M23 small images including the target object, which is used to present the target more intuitively and effectively.
  • M13 images (original images), M13 small images, M23 images (original images), and M23 small images are obtained by the multi-eye snapshot image optimization unit 1042.
  • The M13 small images and the M23 small images together form a set of candidate small images, on which the multi-eye snapshot optimization unit 1042 performs quality evaluation to obtain the preferred X small images.
  • the multi-eye snapshot picture optimization unit 1042 transmits X small pictures and X original pictures corresponding to the X small pictures to the video / image encoding unit 105 for image encoding, and transmits to the display unit 107 through the transmission unit 106.
  • The display unit 107 can decode and display only the X small images; optionally, the X original images can also be decoded and displayed.
  • the original image contains richer image details and backgrounds, which can facilitate the verification and proofreading of monitoring results. In some scenarios, to save the display area, it can be stored in the terminal side where the display unit is located without real-time display.
  • The multi-eye snapshot optimization unit 1042 sends the imaging quality evaluation results to the imaging parameter generation and feedback control unit 1043, which synthesizes them to evaluate the imaging quality of the images collected by the main camera and the multiple secondary cameras, including image brightness, contrast, blur degree, noise level, etc. The ideal shooting parameters of each camera can then be calculated and generated, including exposure time, gain, denoising parameters, etc., and fed back through the multi-channel ISP processing unit to the main camera and each secondary camera to adjust each camera's current shooting parameters.
  • In the process of implementing the present invention, a monitoring camera module must first be set up.
  • the mode established by the monitoring camera module of the present invention is "1 main camera + N sub cameras".
  • The N secondary cameras may include medium / short-focus secondary cameras and / or telephoto secondary cameras; parameters such as focal length may be the same or different among the medium / short-focus secondary cameras, and likewise among the telephoto secondary cameras; there may be only one medium / short-focus camera or only one telephoto secondary camera; and the number N of secondary cameras may be any integer not less than 2.
  • a surveillance camera module uses "1 main camera (6mm) + 4 sub cameras (15mm) + 3 sub cameras (8mm)".
  • The core idea of setting up the cameras is to "cover the monitoring area of interest to the user and be able to capture complete high-definition target images within that area"; the specific camera parameters and the number of cameras are jointly determined by the area to be covered, the FOV of each camera, the installation height, the user's requirements for the quality of the collected images, the monitoring distance, and other factors.
  • the main camera can use conventional surveillance video components or cameras.
  • For the secondary cameras: to monitor areas far from the installation position, a telephoto lens can be used to ensure that the imaged size of the target of interest meets requirements; to monitor field-of-view areas close to the installation position, a medium / short-focus lens can be used.
  • The total FOV of the main camera is > 60°, adjacent medium / short-focus cameras overlap by > 1.6°, and adjacent telephoto secondary cameras overlap by > 1°; an optional requirement is that the overlapping area between adjacent secondary cameras completely covers certain types of detection objects, such as human faces, to ensure monitoring without dead angles.
  • the FOV of the sub camera is related to the FOV of the main camera and the number of the sub cameras; for example, assuming that the FOV value of the main camera is constant, the greater the number of the sub cameras, the smaller the FOV of the sub camera, and vice versa.
  • The equivalent focal length of the main camera can be selected from 2 mm-8 mm, with typical values of 6 or 8 mm; within a monitoring distance of less than 5 m, the face image is well over 50 × 50 pixels. The equivalent focal length of the medium / short-focus secondary cameras can be selected from 6 mm-15 mm, with typical values of 10 or 12 mm; within a monitoring distance of 5 m-15 m, the face image can be well over 50 × 50 pixels. The equivalent focal length of the telephoto secondary cameras can be selected from 15 mm-25 mm, with typical values of 18, 21, or 24 mm; within 15 m-25 m, the face image can be well over 50 × 50 pixels.
  • This design guarantees that, within 25 meters, the collected face image is larger than 50 × 50 pixels, reaching 4K-quality snapshots. By contrast, with the main camera alone, high-definition capture can be guaranteed only within a range of about 5 m; capture sharpness deteriorates at other ranges, and the performance of the monitoring system is greatly reduced.
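  • For intuition only, the face-size figures can be related to focal length and distance with a pinhole model; the sensor width and resolution below are illustrative assumptions, not values from the patent:

        def target_width_px(f_mm, distance_m, target_w_m, sensor_w_mm, res_w_px):
            # Width of the target's image on the sensor, converted to pixels.
            on_sensor_mm = f_mm * target_w_m / distance_m
            return res_w_px * on_sensor_mm / sensor_w_mm

        # Hypothetical: 24 mm lens on a ~7.2 mm-wide sensor read out at 3840 px,
        # 0.16 m-wide face at 20 m -> about 102 px, comfortably above 50 px.
        print(round(target_width_px(24, 20, 0.16, 7.2, 3840)))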
  • For each secondary camera, a mapping relationship with the overlapping monitoring area of the main camera can be established by means of calibration; that is, for this common monitoring area, a mapping is established between the images collected by the main camera and those collected by the secondary camera.
  • the calibration method can use existing methods, including: feature point calibration.
  • the mapping relationship can be initialized and calibrated at the installation site of the monitoring device, or can be set when the monitoring device is shipped from the factory. After the installation is completed, the mapping relationship can be confirmed and corrected according to the field of view of the camera installation site.
  • the monitoring system starts to collect images, detects, tracks and selects the target, and finally presents the preferred snapshot on the screen.
  • The surveillance camera module includes a main camera and N secondary cameras; the monitoring areas of the N secondary cameras respectively cover N different areas within the main camera's monitoring area; the focal length at which any secondary camera captures images is greater than the focal length at which the main camera captures images; and N is an integer not less than 2.
  • S11 Use the main camera and N sub cameras to collect images.
  • The main camera collects the image data of its covered monitoring field of view at the first frame rate (for example, not less than 25 fps), which can be understood as video; each secondary camera collects the image data of its covered monitoring field of view at a second frame rate (such as 0.5 fps-25 fps), which can be understood as pictures.
  • the second frame rate is lower than the first frame rate, and the secondary camera acquires image data in a manner similar to taking pictures.
  • the sub-camera can choose different frame rates according to the type of monitoring target or the monitoring scene.
  • For example, in slow-changing scenes the shooting frame rate can be selected from 0.5 fps-3 fps; when monitoring sidewalks, 1 fps-5 fps can be selected; and when monitoring vehicles, 3 fps-10 fps can be selected.
  • the video collected by the main camera and the RGB images collected by each sub-camera are processed by the ISP into YUV image data.
  • the corresponding ones can be referred to as the YUV image data of the main camera and the YUV image data of the sub-camera.
  • S12 Select M images containing the target object from the images collected by the main camera and the N sub cameras.
  • the YUV image data corresponding to the main camera and the multi-camera sub-camera processed by the ISP is received, that is, the YUV image data of the main camera and the YUV image data of the sub-camera, and target detection and tracking are performed for each frame of image
  • the targets can include: motorized vehicles, non-motorized vehicles, pedestrians, faces and other key monitoring objects.
  • Target detection is performed for the YUV image data of the main camera of each frame, and target detection is performed for the YUV image data of each sub-camera.
  • the method of target detection can adopt image analysis method or neural network discrimination method.
  • When the multi-eye target detection and tracking unit 1041 performs target detection and detects that a target exists in an image (such as a recognition object preset by the user, e.g. a car or a person), it feeds back to the multi-channel ISP processing unit 103, which stores the image from the system buffer. From the above examples, it should be understood that the multi-eye target detection and tracking unit 1041 can store both the YUV image data corresponding to the main camera and the YUV image data corresponding to the secondary cameras. Suppose that, during a monitoring period, M0 images containing targets are stored.
  • the tracking method can use, but is not limited to, two-way optical flow tracking, Kalman prediction, and Hungarian matching algorithms.
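  • As one concrete instance of the matching step named above (not the patent's required tracker), detections can be associated to existing tracks with the Hungarian algorithm over a distance cost matrix:

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def associate(track_pos, det_pos, max_dist=50.0):
            # track_pos, det_pos: lists of (x, y) pixel coordinates.
            cost = np.linalg.norm(np.asarray(track_pos)[:, None, :] -
                                  np.asarray(det_pos)[None, :, :], axis=2)
            rows, cols = linear_sum_assignment(cost)
            # Reject distant pairs so they can start new tracks instead.
            return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]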
  • target IDs can be marked with numeric or alphanumeric labels, and the numbering scheme for any two cameras can be the same or different.
  • to keep the data distinguishable in subsequent signal processing, images collected by different cameras usually record target IDs in different ways.
  • the target position may be recorded as coordinates, but the method is not limited thereto.
  • each saved frame captured by the main camera (a main-camera snapshot) and each saved image captured by a sub-camera (a sub-camera snapshot) are correlated and matched according to the calibrated image mapping relationship between the framing areas of the main camera and each sub-camera, the timestamps of the images, the target positions, and other information, so as to identify the same target across different images.
  • for a matched target, a single unified ID value is used, that is, an association for the same target is established; a new ID value is assigned to a distinct target.
  • optionally, when more than one object appears at the same time and place, a feature matching algorithm may be used to further determine which targets in different images are the same target.
  • in addition, if the overall detection thread notifies the multi-channel ISP processing unit 103 to save the images collected by the main camera and a sub-camera at the same time, the IDs of the same target captured by the main camera and the sub-camera can be unified directly (a matching sketch follows).
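  • A minimal sketch of the cross-camera association, assuming the calibrated mapping relationship is realized as a 3x3 homography; the 0.2 s and 40 px tolerances and the detection-record layout are hypothetical:

```python
import numpy as np

def map_to_main(h_sub_to_main: np.ndarray, pt_sub) -> np.ndarray:
    """Project a sub-camera pixel into main-camera coordinates with a
    calibrated homography (one possible form of the mapping relationship)."""
    v = h_sub_to_main @ np.array([pt_sub[0], pt_sub[1], 1.0])
    return v[:2] / v[2]

def same_target(det_main: dict, det_sub: dict, h_sub_to_main: np.ndarray,
                max_dt: float = 0.2, max_px: float = 40.0) -> bool:
    """Treat two detections as the same target when they are close in
    time and, after mapping, close in main-camera image space.
    Each det_* is assumed to look like {"t": seconds, "xy": (x, y), "id": ...}."""
    if abs(det_main["t"] - det_sub["t"]) > max_dt:
        return False
    mapped = map_to_main(h_sub_to_main, det_sub["xy"])
    return float(np.linalg.norm(mapped - np.asarray(det_main["xy"]))) <= max_px

# On a match, the IDs would be unified, e.g. det_sub["id"] = det_main["id"].
```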
  • through the above method, for a specific target, that is, the target object, the number of saved images containing it can be determined; that number is denoted M here.
  • S13: Crop the M images according to the target object to obtain M small images containing the target object.
  • for example, according to the size or shape of the target object, a small image that clearly displays the target's features can be cropped out.
  • the shape of the small image can be a square, a circle, or the outline of the target object.
  • cropping means separating a part of a picture or image from the original; it can also be understood as matting. Available methods include cropping a specific region; direct selection with lasso, marquee, or eraser tools; quick mask; converting a pen-drawn path into a selection; the extract filter; plug-in filter extraction; and channel, calculation, and apply-image methods.
  • the image cropped in the present invention may take the outline of the target object, a square, a circle, or another form (a rectangular-crop sketch follows).
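  • A minimal sketch of the rectangular variant of the crop, assuming each target comes with an [x1, y1, x2, y2] box; the 20% padding is an assumption, and the circle/outline variants would mask pixels instead:

```python
import numpy as np

def crop_target(img: np.ndarray, box, pad: float = 0.2) -> np.ndarray:
    """Cut a small image around a target's [x1, y1, x2, y2] box,
    padded so the target is clearly presented."""
    h, w = img.shape[:2]
    x1, y1, x2, y2 = box
    dx, dy = (x2 - x1) * pad, (y2 - y1) * pad
    x1, y1 = max(0, int(x1 - dx)), max(0, int(y1 - dy))
    x2, y2 = min(w, int(x2 + dx)), min(h, int(y2 + dy))
    return img[y1:y2, x1:x2].copy()
```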
  • S14: Perform quality evaluation on the M small images.
  • a quality evaluation algorithm based on deep learning, or another quality evaluation method from the prior art, may be used to evaluate the quality of the M small images.
  • this yields a quality evaluation result for each small image of the target object, which can be expressed as a score.
  • according to preset requirements and the scores, the X small images with the highest quality evaluation results can be selected from the M small images.
  • specifically, for the target object, the small image with the best quality evaluation result can be selected.
  • the finally selected small image may come from a snapshot obtained by the main camera or from one obtained by a sub-camera, as determined by the quality evaluation result or algorithm.
  • generally, in the supplementary monitoring area, at the same time and place, the quality of the images collected by a sub-camera is higher than that of the main camera.
  • S15: Display at least the small image with the best quality evaluation result among the M small images. According to the quality evaluation results, an optimal small image, or multiple preferred small images, can be obtained (a selection sketch follows).
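  • The patent leaves the evaluator unspecified beyond deep learning or other existing methods; purely as a placeholder, a sketch that scores crops with hand-crafted sharpness and exposure proxies and keeps the top X:

```python
import numpy as np

def quality_score(small_img: np.ndarray) -> float:
    """Toy stand-in for the learned quality evaluator: gradient energy
    as a sharpness proxy, scaled by how close mean luma is to mid-gray."""
    gray = small_img.mean(axis=2) if small_img.ndim == 3 else small_img
    gy, gx = np.gradient(gray.astype(np.float32))
    sharpness = float((gx ** 2 + gy ** 2).mean())
    exposure = 1.0 - abs(float(gray.mean()) - 128.0) / 128.0
    return sharpness * exposure

def select_top_x(small_imgs, x: int = 1):
    """Return the X crops with the highest scores (X=1 gives the best one)."""
    return sorted(small_imgs, key=quality_score, reverse=True)[:x]
```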
  • the optimal small image, or the multiple preferred small images, are encoded by the video encoding unit, transmitted over the network to the display terminal, decoded there, and displayed on the terminal's screen.
  • optionally, if the display terminal has ample screen space and the user needs to focus on the objects within a certain period, the X small images with the highest quality evaluation results can also be selected and presented to the user together.
  • the display form can be: the display screen includes a main display area and a sub display area.
  • the main display area displays the real-time video stream collected by the main camera, and the sub display area displays the small images of targets captured by the sub-cameras. Further, the original images corresponding to the X small images may be packaged and transmitted to the display terminal for subsequent display and verification.
  • in a specific implementation, the preferred small images can be displayed in real time while the target object appears in the monitoring field of view; the quality evaluation can then only be based on the images containing the target object collected and saved so far, and once another small image of the target object with a better quality evaluation result is detected, the display is updated in real time.
  • alternatively, the preferred small images can be displayed once the target object disappears from the monitoring field of view; the quality evaluation is then based on all collected and saved images containing the target object, so no real-time updating is needed.
  • S16: Adjust the shooting parameters of the main camera or the sub-cameras according to the image quality evaluation results; this step is optional.
  • specifically, the quality evaluation results of the M small images also provide feedback on the imaging quality of the main camera and the sub-cameras, covering, for example, image brightness, contrast, degree of blur, and noise level.
  • based on the desired imaging effect, the imaging parameters of each camera, including exposure duration, gain, and denoising parameters, can be computed and fed back to the main camera and each sub-camera through the multi-channel ISP processing unit 103. If the current shooting parameters of the main camera or any sub-camera are inadequate, they can be adaptively adjusted and optimized according to the feedback parameters, as in the sketch below.
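  • A minimal sketch of one possible feedback rule; the proportional exposure update, the mid-gray target, the noise threshold, and all bounds are assumptions rather than the patent's actual computation:

```python
def adjust_imaging_params(params: dict, mean_luma: float,
                          noise_level: float) -> dict:
    """Nudge exposure toward mid-gray and trade gain against denoising
    strength when the measured noise is high."""
    out = dict(params)
    # Proportional exposure correction, clamped to plausible limits.
    ratio = 128.0 / max(mean_luma, 1.0)
    out["exposure_ms"] = min(40.0, max(0.1, params["exposure_ms"] * ratio))
    if noise_level > 0.05:  # too noisy: lower gain, denoise harder
        out["gain_db"] = max(0.0, params["gain_db"] - 1.0)
        out["denoise_strength"] = min(1.0, params["denoise_strength"] + 0.1)
    return out

print(adjust_imaging_params(
    {"exposure_ms": 8.0, "gain_db": 12.0, "denoise_strength": 0.3},
    mean_luma=70.0, noise_level=0.08))
```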
  • as shown in FIG. 8, the monitoring system's regions of interest are the monitoring areas of the four sub-cameras A, B, C, and D (also called the ABCD areas), and all four lie within the main camera's monitoring range (the region enclosed by k1, k2, and k3);
  • the area outside ABCD can be understood as not of interest during monitoring, and the dashed line represents the actual trajectory of the target object.
  • from the time the target object enters the monitoring area until it leaves, it passes through areas A, C, and D; assume the main camera captured 50 frames of the target during this period, denoted z1, z2, ..., z50.
  • the main-camera or sub-camera detection thread in the multi-camera target detection and tracking unit 1041 detects the target at A1, A2, C1, D1, D2, and D3, and notifies the multi-channel ISP processing unit 103 to save two images collected by sub-camera A (denoted a1, a2), one image collected by sub-camera C (denoted c1), and three images collected by sub-camera D (denoted d1, d2, d3).
  • it should be understood that a1, a2, c1, d1, d2, and d3 present the target at a larger size than the main camera's images of the corresponding areas, and may further have higher light sensitivity or higher pixel counts.
  • the multi-camera snapshot optimization unit 1042 determines, from the position correspondence between the main camera and the sub-camera over area A, the timestamps of the captured images, and the positions of the detected targets, that the targets a1 and a2 detected in area A and the targets z1-z50 detected by the main camera are the same target;
  • by the same reasoning, the target c1 detected in area C, and the targets d1, d2, and d3 detected in area D, are determined to be the same target as z1-z50.
  • therefore, for this target object, 50+2+1+3=56 images are obtained through the main camera and the sub-cameras in total. The multi-camera snapshot optimization unit 1042 crops these 56 images according to the size or shape of the target object, yielding 56 small images that prominently show it. The 56 small images are quality-evaluated against a uniform preset standard, and the topX images that meet the standard are selected as the preferred small images.
  • the typical value of X is 1; if there are other requirements, X can be any other positive integer, freely defined by the user according to specific needs and not limited in the present invention.
  • the X small images are encoded, transmitted to the display terminal, decoded there, and presented to the user. For example, if d1 has the best quality evaluation result, d1 is displayed on the display end for the user's reference in monitoring and analyzing the target object. Meanwhile, the display terminal can show the video monitored by the main camera in real time, so the user can see the target object's movement track while only the image d1 is shown in the other display areas.
  • in addition, other targets also appear in the monitoring system, and when they are detected the related images are saved as well; in practice the multi-camera target detection and tracking unit 1041 detects far more images than those above,
  • and the multi-channel ISP processing unit 103 likewise stores far more images than those above.
  • this example uses a single target object only; it should be understood that for any other object the monitoring method and process are the same as in this example, and they are not described again in the present invention.
  • in summary, the embodiments of the present invention provide a monitoring camera module and a monitoring method that use "1 main camera + N sub-cameras" as the basic image-collecting component; laying out multiple sub-cameras compensates for the main camera's drawback of capturing unclear images at longer distances.
  • the "telephoto" and "large aperture" design of the sub-cameras greatly compensates for the main camera's shortfall in imaging quality, so clear images of target objects can be collected in most of the area monitored by the main camera.
  • this enhances the recognition accuracy of the monitoring system, which will undoubtedly provide a stronger user base for its commercial success.
  • based on the monitoring method provided by the above embodiments, an embodiment of the present invention provides a monitoring apparatus applied to a monitoring camera module.
  • the monitoring camera module includes a main camera and N sub-cameras, where the monitoring areas of the N sub-cameras respectively cover N different areas within the main camera's monitoring area; the focal length of any sub-camera is greater than the focal length of the main camera; N is an integer greater than 1. As shown in FIG. 9, the apparatus 200 may include an acquisition module 201, a selection module 202, a cropping module 203, an evaluation module 204, and a display module 205; optionally, the aperture value of any sub-camera is smaller than the aperture value of the main camera.
  • an acquisition module 201, configured to acquire images using the main camera and the N sub-cameras, where the frame rate at which any sub-camera acquires images is lower than the frame rate at which the main camera acquires images;
  • this module may be implemented by the processor invoking program instructions in memory or externally input program instructions; it cooperates with the cameras and performs some computation on the images to acquire them.
  • a selection module 202, configured to select M images containing the target object from the images collected by the main camera and the N sub-cameras, where M is an integer greater than 1; this module may be implemented by the processor invoking program instructions in memory or externally input program instructions, and filters out the M images containing the target object by algorithm.
  • a cropping module 203, configured to crop the M images according to the target object to obtain M small images containing the target object; this module may be implemented by the processor invoking program instructions in memory or externally input program instructions, such as an image-cropping or matting algorithm or program.
  • an evaluation module 204, configured to perform quality evaluation on the M small images; this module may be implemented by the processor invoking program instructions in memory or externally input program instructions.
  • a display module 205, configured to display at least the small image with the best quality evaluation result among the M small images.
  • this module may be implemented by the processor invoking program instructions in memory or externally input program instructions, working together with the display screen.
  • in a specific implementation, the acquisition module 201 is specifically configured to execute the method mentioned in S11 and equivalent replacements; the selection module 202, the method mentioned in S12 and equivalent replacements; the cropping module 203, the method mentioned in S13 and equivalent replacements; the evaluation module 204, the method mentioned in S14 and equivalent replacements; and the display module 205, the method mentioned in S15 and equivalent replacements.
  • further, the apparatus 200 may include a feedback module 206, specifically configured to execute the method mentioned in S16 and equivalent replacements.
  • in a specific implementation, the acquisition module 201 may have some of the functions mentioned for the main camera 101, the sub-cameras 102, and the multi-channel ISP processing unit 103, and equivalent replacements, specifically including collecting images at the respective preset frequencies and performing raw-domain processing of the images;
  • the selection module 202 may have some of the functions mentioned for the multi-camera target detection and tracking unit 1041 and the multi-camera snapshot optimization unit 1042, and equivalent replacements,
  • specifically including target detection, notifying the multi-channel ISP processing unit 103 to save images, target tracking, target ID marking, and filtering for the same target;
  • the cropping module 203 may have some of the functions mentioned for the multi-camera snapshot optimization unit 1042 and equivalent replacements, specifically including cropping images;
  • the evaluation module 204 may have some of the functions mentioned for the multi-camera snapshot optimization unit 1042 and equivalent replacements, specifically including evaluating the quality of the cropped images according to a preset quality evaluation method and ranking the results; the display module 205 may have some of the functions mentioned for the display unit 107 and equivalent replacements, specifically including decoding and presenting the encoded image data.
  • the monitoring camera module to which the apparatus applies may be any of the possible monitoring camera modules in the method embodiments; the number of cameras and hardware parameters are not repeated here.
  • an embodiment of the present application further provides a monitoring device 300, as shown in FIG. 10.
  • the device includes a processor 301, a memory 302, multiple cameras 303, a display screen 304, and a bus 305; the processor 301, the memory 302, the multiple cameras 303,
  • and the display screen 304 are connected through the bus 305. Program instructions and data are stored in the memory 302; the cameras 303 are used to collect images; the display screen 304 is used to display video or images; and the processor 301 is used to invoke the data and program instructions in the memory and, in cooperation with the multiple cameras 303 and the display screen 304, complete any of the methods and possible designs provided in the above embodiments.
  • the multiple cameras 303 may be configured as the monitoring camera module obtained in any of the method embodiments; the number of cameras and hardware parameters are not described again here.
  • those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • the embodiments of the present application are described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions.
  • these computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • these computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.


Abstract

A monitoring camera module, comprising one main camera and N sub-cameras, where N is an integer greater than 1. The main camera and the N sub-cameras are used to collect images, and the frame rate at which any sub-camera collects images is lower than the frame rate at which the main camera collects images. The monitoring areas of the N sub-cameras respectively cover N different areas within the main camera's monitoring area, and the focal length of any sub-camera is greater than the focal length of the main camera. The module can solve problems that arise in monitoring scenarios such as low illumination, distant targets, and backlit/wide-dynamic-range conditions, including blurred snapshots, low image brightness, heavy noise, and small captured-target size, thereby improving the monitoring capability of the monitoring system.


Claims (16)

  1. A monitoring camera module, characterized in that the monitoring camera module comprises: one main camera and N sub-cameras, N being an integer greater than 1; the main camera and the N sub-cameras are configured to collect images, and the frame rate at which any sub-camera collects images is lower than the frame rate at which the main camera collects images;
    wherein the monitoring areas of the N sub-cameras respectively cover N different areas within the monitoring area of the main camera;
    and the focal length of any sub-camera is greater than the focal length of the main camera.
  2. The module according to claim 1, characterized in that the FOV of the main camera is greater than 60°, the focal length of the main camera is within the range [4, 8] mm, and the aperture value of the main camera is within the range [1.4, 2.0].
  3. The module according to claim 1 or 2, characterized in that the focal length of at least one sub-camera is within the range [8, 15] mm.
  4. The module according to any one of claims 1-3, characterized in that the focal length of at least one sub-camera is within the range [15, 25] mm.
  5. The module according to any one of claims 1-4, characterized in that the aperture value of at least one sub-camera is within the range [0.8, 1.6], and the aperture value of at least one sub-camera is smaller than the aperture value of the main camera.
  6. The module according to any one of claims 1-5, characterized in that N=4, and the focal lengths of the 4 sub-cameras are within the range [18, 21] mm.
  7. The module according to any one of claims 1-5, characterized in that N=7, where three sub-cameras have focal lengths of [12, 18] mm and the other four sub-cameras have focal lengths of [21, 25] mm.
  8. A monitoring method, characterized in that the method is applied to a monitoring camera module comprising: one main camera and N sub-cameras; wherein the monitoring areas of the N sub-cameras respectively cover N different areas within the monitoring area of the main camera; the focal length of any sub-camera is greater than the focal length of the main camera; and N is an integer greater than 1; the method comprising:
    collecting images with the main camera and the N sub-cameras, wherein the frame rate at which any sub-camera collects images is lower than the frame rate at which the main camera collects images;
    selecting, from the images collected by the main camera and the N sub-cameras, M images containing a target object, M being an integer greater than 1;
    cropping the M images according to the target object to obtain M small images containing the target object;
    performing quality evaluation on the M small images;
    and displaying at least the small image with the best quality evaluation result among the M small images.
  9. The method according to claim 8, characterized in that the focal length of the main camera is within the range [4, 8] mm, N=4, and the focal lengths of the 4 sub-cameras are within the range [18, 21] mm.
  10. The method according to claim 8 or 9, characterized in that the aperture value of the main camera is within the range [1.4, 2.0], the aperture value of at least one sub-camera is within the range [0.8, 1.6], and the aperture value of at least one sub-camera is smaller than the aperture value of the main camera.
  11. The method according to any one of claims 8-10, characterized in that selecting the M images containing the target object from the images collected by the main camera and the N sub-cameras comprises:
    detecting, with a main-camera detection thread, that M1 images among the images collected by the main camera contain the target object, and saving the M1 images from a buffer; and detecting, with the main-camera detection thread, that M2 images among the images collected by the N sub-cameras contain the target object, and saving the M2 images from the buffer; or,
    detecting, with a main-camera detection thread, that M1 images among the images collected by the main camera contain the target object, and saving the M1 images from a buffer; and detecting, with a sub-camera detection thread, that M2 images among the images collected by the N sub-cameras contain the target object, and saving the M2 images from the buffer;
    and identifying, according to the image mapping relationship between the main camera and each sub-camera, and according to the timestamps of the images captured by the main camera and each sub-camera and the target positions, the M1 images and the M2 images as images containing the target object; where M=M1+M2.
  12. A monitoring apparatus, characterized in that the apparatus is applied to a monitoring camera module comprising: one main camera and N sub-cameras; wherein the monitoring areas of the N sub-cameras respectively cover N different areas within the monitoring area of the main camera; the focal length of any sub-camera is greater than the focal length of the main camera; and N is an integer greater than 1; the apparatus comprising:
    an acquisition module, configured to collect images with the main camera and the N sub-cameras, wherein the frame rate at which any sub-camera collects images is lower than the frame rate at which the main camera collects images;
    a selection module, configured to select, from the images collected by the main camera and the N sub-cameras, M images containing a target object, M being an integer greater than 1;
    a cropping module, configured to crop the M images according to the target object to obtain M small images containing the target object;
    an evaluation module, configured to perform quality evaluation on the M small images;
    and a display module, configured to display at least the small image with the best quality evaluation result among the M small images.
  13. The apparatus according to claim 12, characterized in that the focal length of the main camera is within the range [4, 8] mm, N=4, and the focal lengths of the 4 sub-cameras are within the range [18, 21] mm.
  14. The apparatus according to claim 12 or 13, characterized in that the aperture value of the main camera is within the range [1.4, 2.0], the aperture value of at least one sub-camera is within the range [0.8, 1.6], and the aperture value of at least one sub-camera is smaller than the aperture value of the main camera.
  15. The apparatus according to any one of claims 12-14, characterized in that the selection module is specifically configured to:
    detect, with a main-camera detection thread, that M1 images among the images collected by the main camera contain the target object, and save the M1 images from a buffer; and detect, with the main-camera detection thread, that M2 images among the images collected by the N sub-cameras contain the target object, and save the M2 images from the buffer; or,
    detect, with a main-camera detection thread, that M1 images among the images collected by the main camera contain the target object, and save the M1 images from a buffer; and detect, with a sub-camera detection thread, that M2 images among the images collected by the N sub-cameras contain the target object, and save the M2 images from the buffer;
    and identify, according to the image mapping relationship between the main camera and each sub-camera, and according to the timestamps of the images captured by the main camera and each sub-camera and the target positions, the M1 images and the M2 images as images containing the target object; where M=M1+M2.
  16. A monitoring device, characterized in that the device comprises multiple cameras, a display screen, a processor, a memory, and a bus; the multiple cameras, the display screen, the processor, and the memory are connected through the bus;
    the multiple cameras are configured to collect images;
    the display screen is configured to display video or images;
    the memory is configured to store data and program instructions;
    and the processor is configured to invoke the data and program instructions and, in cooperation with the cameras and the display screen, perform the method according to claims 8-11.