WO2020057355A1 - A three-dimensional modeling method and device therefor - Google Patents

A three-dimensional modeling method and device therefor

Info

Publication number
WO2020057355A1
WO2020057355A1, PCT/CN2019/103833, CN2019103833W
Authority
WO
WIPO (PCT)
Prior art keywords
target
video data
camera
video
person
Prior art date
Application number
PCT/CN2019/103833
Other languages
English (en)
French (fr)
Inventor
张恩勇
Original Assignee
深圳市九洲电器有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市九洲电器有限公司
Publication of WO2020057355A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N 17/002: Diagnosis, testing or measuring for television cameras
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Definitions

  • The present invention relates to the technical field of video surveillance, and in particular to a video linkage monitoring method, a monitoring server, and a video linkage monitoring system.
  • Video surveillance technology plays an increasingly important role in the security field. Because it is intuitive, convenient, and information-rich, it is widely used in city-wide Skynet surveillance, transportation, civil security, and other fields.
  • The inventors found that the traditional technology has at least the following problem: during video monitoring, for some important video frames the camera often captures only the back of the target person, which causes unnecessary trouble when analyzing those important video frames.
  • An object of the embodiments of the present invention is to provide a video linkage monitoring method, a monitoring server, and a video linkage monitoring system that can photograph the front and back of a target person from all directions.
  • To solve the above technical problem, the embodiments of the present invention provide the following technical solutions:
  • In a first aspect, an embodiment of the present invention provides a video linkage monitoring method applied to a monitoring server, wherein the monitoring server communicates with multiple cameras, each camera is installed at a different position in a preset area, and each camera is used to capture area images at different angles within the preset area. The method includes:
  • detecting whether target video data collected by a target camera matches a preset video detection abnormality model;
  • if the target video data matches the preset video detection abnormality model, detecting a target person from the target video data and determining whether the target video data includes a frontal image of the target person, the frontal image including a face image of the target person, the target person being located in the preset area; and
  • if the target video data does not include a frontal image of the target person, finding an additional camera disposed opposite the target camera and controlling the additional camera to track the person and capture a frontal image of the person.
  • Optionally, the method further includes:
  • if the target video data does not match the preset video detection abnormality model, discarding the target video data and continuing to detect whether the next target video data collected by the target camera matches the preset video detection abnormality model.
  • Optionally, the method further includes:
  • if the target video data includes a frontal image of the target person, controlling the target camera to track the target person.
  • Optionally, finding the additional camera disposed opposite the target camera includes: obtaining the light intensity in the preset area; determining whether the light intensity is greater than a preset intensity threshold; if it is greater, obtaining the minimum illumination values of all additional cameras disposed opposite the target camera and selecting the one with the lowest minimum illumination value as the camera that tracks the person and captures the frontal image of the person; and if it is less, finding an additional camera disposed opposite the target camera.
  • Optionally, the method further includes:
  • obtaining a training video data set, the training video data set including video data of multiple abnormal scenes;
  • preprocessing the video data of the multiple abnormal scenes; and
  • processing the preprocessed video data through a convolution algorithm to establish the video detection abnormality model.
  • In a second aspect, an embodiment of the present invention provides a video linkage monitoring device applied to a monitoring server, wherein the monitoring server communicates with multiple cameras, each camera is installed at a different position in a preset area, and each camera is used to capture area images at different angles within the preset area. The device includes:
  • a first detection module configured to detect whether target video data collected by a target camera matches a preset video detection abnormality model;
  • a second detection module configured to, if the target video data matches the preset video detection abnormality model, detect a target person from the target video data and determine whether the target video data includes a frontal image of the target person, the frontal image including a face image of the target person, the target person being located in the preset area; and
  • a third detection module configured to, if the target video data does not include a frontal image of the target person, detect an additional camera disposed opposite the target camera and control the additional camera to track the person and capture a frontal image of the person.
  • Optionally, the apparatus further includes:
  • a discarding module configured to discard the target video data if the target video data does not match the preset video detection abnormality model, and to continue detecting whether the next target video data collected by the target camera matches the preset video detection abnormality model.
  • an embodiment of the present invention provides a monitoring server, including:
  • at least one processor; and
  • a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the video linkage monitoring method according to any one of the above.
  • An embodiment of the present invention provides a video linkage monitoring system, including multiple cameras and the monitoring server described above, wherein the monitoring server communicates with each of the cameras.
  • An embodiment of the present invention provides a non-transitory computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to cause a monitoring server to execute the video linkage monitoring method according to any one of the above.
  • an embodiment of the present invention provides a computer program product.
  • the computer program product includes a computer program stored on a non-volatile computer-readable storage medium.
  • The computer program includes program instructions which, when executed by the monitoring server, cause the monitoring server to execute any one of the video linkage monitoring methods above.
  • In the video linkage monitoring method, monitoring server, and video linkage monitoring system provided by the embodiments of the present invention: first, it is detected whether the target video data collected by the target camera matches a preset video detection abnormality model; second, if the target video data matches the preset video detection abnormality model, a target person is detected from the target video data and it is determined whether the target video data includes a frontal image of the target person, the frontal image including the face image of the target person, the target person being located in a preset area; and finally, if the target video data does not include a frontal image of the target person, an additional camera disposed opposite the target camera is found and controlled to track the person and capture a frontal image of the person. The method can therefore photograph the front and back of the target person from all directions, which facilitates subsequent analysis of the target person and reduces unnecessary trouble.
  • FIG. 1 is a schematic structural diagram of a video linkage monitoring system according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a video linkage monitoring method according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a video linkage monitoring device according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a video linkage monitoring device according to another embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a monitoring server according to an embodiment of the present invention.
  • the video linkage monitoring method of the embodiment of the present invention can be executed in any suitable type of electronic device with computing capability, such as a monitoring server, a desktop computer, a smart phone, a tablet computer, and other electronic products.
  • the monitoring server here may be a physical server or a logical server virtualized by multiple physical servers.
  • the server may also be a server group composed of multiple servers that can communicate with each other, and each functional module may be separately distributed on each server in the server group.
  • The video linkage monitoring device may be provided as an independent software system in the above-mentioned electronic device, or integrated in the processor as one of its functional modules, to execute the video linkage monitoring method according to the embodiments of the present invention.
  • FIG. 1 is a schematic structural diagram of a video linkage monitoring system according to an embodiment of the present invention.
  • the video surveillance system 100 includes a plurality of cameras 11, a surveillance server 12, and a mobile terminal 13.
  • the camera 11 is installed in a preset area for collecting video data. It can be understood that the camera 11 is fixedly installed in a preset area according to a preset rule, so as to cover the preset area as much as possible.
  • the camera is arranged on a wall surface, a ground, a roof, or an object surface of the preset area in combination with the specific structure and occlusion of the preset area.
  • The cameras form a camera group, which is used to monitor a specific surveillance area.
  • Each camera is installed at a different position in a preset area.
  • Each camera is used to capture images of areas at different angles within a preset area.
  • The camera group can capture objects in the preset area from all 360 degrees.
  • each camera in the camera group uploads the collected video data to the same monitoring server.
  • Different monitoring areas correspond to different monitoring servers.
  • Monitoring servers of different monitoring areas do not share surveillance video with each other.
  • A combination of the camera 11 and a multi-dimensional rotating motor can be used to capture high-definition video frame images in the preset area in real time.
  • a high-definition camera with waterproof function, small size, high resolution, long life, and universal communication interface is selected.
  • the camera 11 includes a network camera, an infrared high-definition camera, a high-speed dome camera, a low-light camera, and the like.
  • the camera 11 has a built-in network coding module.
  • The camera includes a lens, an image sensor, a sound sensor, an A/D converter, a controller, a control interface, a network interface, and so on.
  • The camera may be used to collect video data signals, where the video data signals are analog video signals.
  • The image sensor is mainly composed of a CMOS photosensitive component and peripheral circuits, and converts the optical signal input from the lens into an electrical signal.
  • The network coding module has a built-in embedded chip, which converts the analog video data signals collected by the camera into digital signals; the embedded chip may also compress the digital signals.
  • the embedded chip may be a Hi3516 high-efficiency compression chip.
  • the camera 11 sends the compressed digital signal to the monitoring server 12 through the WIFI network.
  • the monitoring server 12 may send the compressed digital signal to the mobile terminal 13.
  • The camera 11 further includes an infrared sensor, giving the camera 11 a night vision function. Users on the network can view the camera image directly on the web server with a browser, or access it through the mobile terminal APP.
  • The camera 11 makes monitoring, especially remote monitoring, easier to implement, with simple construction and maintenance, better audio support, better alarm linkage support, more flexible recording storage, richer product selection, higher-definition video, and more complete monitoring and management functions. The camera can be connected directly to the local area network; it is the data collection and photoelectric signal conversion end, i.e. the data supply end of the entire network.
  • the monitoring server 12 is a device that provides computing services.
  • The monitoring server is composed of a processor, a hard disk, memory, a system bus, and so on, similar to a general computer architecture. It provides functions such as mobile terminal APP registration, user management, and device management, and is also responsible for storing the video data of the cameras. The monitoring server records the IP addresses and ports of the mobile terminals and cameras and transmits each side's IP address and port to the other, so that a camera and a mobile terminal can learn each other's IP address and port and establish a connection and communicate through them.
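The IP-and-port brokering described above can be sketched as a small registry held by the monitoring server. The function names and data shapes below are illustrative only, not from the patent:

```python
def register(registry, name, ip, port):
    """The monitoring server remembers each endpoint's IP address and port."""
    registry[name] = (ip, port)

def introduce(registry, a, b):
    """Hand each side the other's address so the camera and the mobile
    terminal can connect to each other directly."""
    return {a: registry[b], b: registry[a]}
```

A session would register the camera and the mobile terminal first, then exchange their addresses so the two endpoints can open a direct connection.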
  • After the monitoring server obtains the video data of a camera, it analyzes the video data with the artificial intelligence module; when abnormal video data is detected, it sends an alarm message to notify the mobile terminal.
  • the monitoring server 12 includes a processor, and the processor includes an artificial intelligence module.
  • The artificial intelligence module is responsible for real-time analysis of video data; it detects abnormal events and notifies the mobile terminal.
  • the specific implementation of the artificial intelligence module is divided into two parts, the establishment of a video anomaly detection model and the application of a video anomaly detection model.
  • The first is the establishment of the video anomaly detection model, which has three parts.
  • The first part is the training video data set for the video anomaly detection model, used for the subsequent machine training and learning. It includes video data of various abnormal scenes, such as vehicles frequently cutting in and out, robbery, tailing theft, fights, group brawls, screams, crying, smoke, noisy scenes, and other abnormal scenes that need to be detected.
  • the training video dataset covers most application scenarios.
  • The second part is the preprocessing of the video data set.
  • Ten frames per second are extracted from the video data, and each frame is converted into a picture 255 pixels long and 255 pixels wide.
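The preprocessing step above (ten frames per second, each frame resized to 255 × 255) can be sketched in Python. The helpers below are dependency-free stand-ins; a real pipeline would decode and resize frames with a video library such as OpenCV, and the function names are illustrative, not from the patent:

```python
def sample_frame_indices(fps, duration_s, rate=10):
    """Indices of frames to keep when sampling `rate` frames per second
    from a video recorded at `fps` frames per second."""
    step = fps / rate                     # keep every step-th frame
    total = int(fps * duration_s)         # total frames in the clip
    return [int(i * step) for i in range(int(duration_s * rate))
            if int(i * step) < total]

def resize_nearest(img, size=255):
    """Nearest-neighbour resize of a grayscale frame (list of rows) to
    size x size, standing in for the 255 x 255 conversion described above."""
    h, w = len(img), len(img[0])
    return [[img[r * h // size][c * w // size] for c in range(size)]
            for r in range(size)]
```

For a 30 fps clip this keeps every third frame; each kept frame is then squared off to the fixed input size the model expects.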
  • The third part is the establishment of the training model, using an artificial intelligence convolution algorithm; the training model is built with Python code.
  • the model includes an input layer, a hidden layer, and an output layer.
  • the input layer is an input pre-processed picture.
  • the hidden layer is used to calculate the features of the input picture.
  • The output layer outputs whether the video contains abnormal scenes based on the features calculated by the hidden layer.
  • the training process is: normal video is marked as 0, abnormal video is marked as 1, and then the abnormal video and normal video are input into the training system at the same time.
  • After training, the model is deployed to the monitoring server; the data set is replaced with the camera's video, and the model is run to detect whether the camera's video is abnormal.
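The three-layer structure described above (an input layer, a hidden layer that computes features, and an output layer that decides abnormal vs. normal) can be illustrated with a minimal NumPy forward pass. The patent does not give the actual architecture or weights, so every shape and parameter here is an assumption; a real model would be trained with a deep learning framework:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(img[r:r + kh, c:c + kw] * kernel)
    return out

def predict_abnormal(frame, kernel, w_out, b_out):
    """Input layer -> one convolutional hidden layer with ReLU ->
    sigmoid output giving P(abnormal); label 1 = abnormal, 0 = normal."""
    hidden = np.maximum(conv2d(frame, kernel), 0.0)   # hidden-layer features
    score = float(np.mean(hidden) * w_out + b_out)    # crude pooling + dense
    return 1.0 / (1.0 + np.exp(-score))               # sigmoid probability
```

Training would adjust `kernel`, `w_out`, and `b_out` so that clips labeled 1 (abnormal) score near 1 and clips labeled 0 score near 0, matching the labeling scheme in the text.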
  • FIG. 2 is a schematic flowchart of a video linkage monitoring method according to an embodiment of the present invention.
  • the video linkage monitoring method S200 includes:
  • The target camera is any camera in the camera group. It can be understood that "target" in "target camera" is used to distinguish it from the other cameras.
  • The monitoring server selects the video data of a specific camera from the camera group for detection and analysis. At this time, that specific camera is the target camera, and the video data it collects is the target video data.
  • the "target” in the target camera is not used to limit the protection scope of the present invention, but is only used for differentiation.
  • the video detection abnormal model is pre-built by the administrator and stored in the monitoring server.
  • the video detection abnormal model is used to evaluate whether the target video data needs to be processed in a targeted manner.
  • When constructing the video detection anomaly model, the monitoring server first obtains a training video data set.
  • the training video data set includes video data of multiple abnormal scenes.
  • The video data set for training the video anomaly detection model is used for subsequent machine training and learning. It includes video data of various abnormal scenes, such as vehicles frequently cutting in and out, robbery, tailing theft, fights, group brawls, screams, crying, smoke, noisy scenes, and other abnormal scenes that need to be detected.
  • the training video dataset covers most application scenarios.
  • the monitoring server preprocesses the video data of various abnormal scenes.
  • Ten frames per second are extracted from the video data, and each frame is converted into a picture 255 pixels long and 255 pixels wide.
  • The monitoring server then processes the preprocessed video data through a convolution algorithm to establish the video detection abnormality model; for example, the training model is built with an artificial intelligence convolution algorithm and Python code.
  • the model includes an input layer, a hidden layer, and an output layer.
  • the input layer is an input pre-processed picture.
  • the hidden layer is used to calculate the features of the input picture.
  • The output layer outputs whether the video contains abnormal scenes based on the features calculated by the hidden layer.
  • the training process is: normal video is marked as 0, abnormal video is marked as 1, and then the abnormal video and normal video are input into the training system at the same time. Through the data set preprocessing and the calculation of the training model, it is distinguished whether the video is abnormal or not.
  • If the target video data matches the preset video detection abnormality model, the target person is detected from the target video data, and it is determined whether the target video data includes a frontal image of the target person.
  • The frontal image includes the face image of the target person.
  • The target person is located in the preset area.
  • The monitoring server detects the target person from the target video data according to an image analysis algorithm. For example, A follows B and pickpockets B; the camera captures A's tailing behavior and sends the video data containing that behavior to the monitoring server.
  • The monitoring server detects A's tailing behavior, uses that video data as the target video data, and identifies A as the target person from the data using the image analysis algorithm.
  • The monitoring server determines whether the target video data contains facial feature points associated with the target person. If such points exist, the target video data is considered to include a frontal image of the target person; if they do not, the target video data is considered not to include a frontal image of the target person but only a back image. For example, following the example above, if the monitoring server detects A's face image in the target video data, the target camera is considered to have captured A's frontal image; if it does not detect A's face image, the target camera is considered to have captured only A's back image.
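The frontal/back decision described above reduces to checking whether facial feature points of the target person are present in the frame. A minimal sketch, where `find_face_landmarks` is a hypothetical stand-in for a real face landmark detector (not an API named in the patent):

```python
def find_face_landmarks(frame):
    """Hypothetical detector: returns a list of facial feature points,
    or an empty list when no face is visible. A real system would call
    a face landmark model here."""
    return frame.get("landmarks", [])

def classify_view(frame):
    """'front' when facial feature points of the target person are present,
    'back' otherwise, mirroring the decision rule in the description."""
    return "front" if find_face_landmarks(frame) else "back"
```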
  • If the target video data does not include a frontal image of the target person, an additional camera disposed opposite the target camera is found, and the additional camera is controlled to track the person and capture a frontal image of the person.
  • When the monitoring server detects that the target video data does not include a frontal image of the target person, it determines the current geographic position of the target person.
  • According to the current geographic position of the target person, the monitoring server finds all additional cameras that cover that position, determines the installation positions of those cameras, and from those installation positions determines the additional cameras installed opposite the target camera.
  • The monitoring server then controls an additional camera installed opposite the target camera to track the person and capture a frontal image of the person.
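One way to realize "installed opposite the target camera" is to pick, among the cameras covering the person's current position, the one whose direction from the person is closest to 180 degrees away from the target camera's direction. This geometric interpretation is an assumption, not spelled out in the patent; positions are simple 2-D coordinates here:

```python
import math

def angle_between(p, a, b):
    """Angle (degrees) at point p between the rays p->a and p->b."""
    v1 = (a[0] - p[0], a[1] - p[1])
    v2 = (b[0] - p[0], b[1] - p[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

def opposite_camera(person, target_cam, candidates):
    """Among cameras covering the person, pick the one installed most
    nearly opposite the target camera (angle at the person closest to
    180 degrees)."""
    return max(candidates, key=lambda cam: angle_between(person, target_cam, cam))
```

A camera directly across from the target camera scores 180 degrees and wins; a camera beside the target camera scores near 0 and is never chosen.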
  • If the target video data does not match the preset video detection abnormality model, the target video data is discarded, and the server continues to detect whether the next target video data collected by the target camera matches the preset video detection abnormality model.
  • If the target video data includes a frontal image of the target person, the target camera is controlled to track the target person.
  • The method provided in the embodiment of the present invention can photograph the front and back of the target person from all directions, which facilitates subsequent analysis of the target person and reduces unnecessary trouble.
  • In some embodiments, when the monitoring server finds additional cameras disposed opposite the target camera, it first obtains the light intensity in the preset area; for example, a light sensor set in the preset area collects the light intensity and transmits it to the monitoring server.
  • The monitoring server then judges whether the light intensity is greater than a preset intensity threshold. If it is greater, the server obtains the minimum illumination values of all additional cameras disposed opposite the target camera and selects the one with the lowest minimum illumination value as the camera that tracks the person and captures the frontal image, so that the frontal image of the person is obtained in as high a definition as possible. If it is less, the server simply finds an additional camera disposed opposite the target camera.
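The illumination-based selection described above can be sketched as follows. Camera identifiers and lux values are illustrative; "minimum illumination" is a camera's rated lowest light level at which it still produces a usable image, so a lower value means better low-light performance:

```python
def pick_tracking_camera(light_intensity, threshold, opposite_cams):
    """opposite_cams maps camera id -> minimum illumination rating (lux).
    Above the threshold, choose the opposite camera with the lowest
    minimum-illumination rating for the sharpest frontal image;
    otherwise any opposite camera will do, so take the first one found."""
    if light_intensity > threshold:
        return min(opposite_cams, key=opposite_cams.get)
    return next(iter(opposite_cams))
```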
  • an embodiment of the present invention provides a video linkage monitoring device applied to a monitoring server.
  • The monitoring server communicates with multiple cameras; each camera is installed at a different position in a preset area, and each camera is used to capture area images at different angles within the preset area.
  • the video linkage monitoring device according to the embodiment of the present invention can be used as one of the software functional units.
  • The video linkage monitoring device includes several instructions stored in a memory; the processor can access the memory and call the instructions for execution to complete the video linkage monitoring method.
  • the video linkage monitoring device 300 includes a first detection module 31, a second detection module 32, and a third detection module 33.
  • the first detection module 31 is configured to detect whether the target video data collected by the target camera matches a preset video detection abnormal model
  • the second detection module 32 is configured to detect a target person from the target video data if the target video data matches the preset video detection abnormality model, and determine whether the target video data includes a frontal image of the target person, the frontal image including a face image of the target person, the target person being located in the preset area;
  • the third detection module 33 is configured to, if the target video data does not include a frontal image of the target person, detect an additional camera disposed opposite the target camera and control the additional camera to track the person and capture a frontal image of the person.
  • The device provided in the embodiment of the present invention can photograph the front and back of the target person from all directions, which facilitates subsequent analysis of the target person and reduces unnecessary trouble.
  • the video linkage monitoring device 300 further includes a discarding module 34.
  • the discarding module 34 is configured to discard the target video data if the target video data does not match the preset video detection abnormal model, and continue to detect whether the next target video data collected by the target camera matches the preset video detection abnormal model.
  • The above video linkage monitoring device can execute the video linkage monitoring method provided by the embodiments of the present invention, and has the corresponding functional modules and beneficial effects for executing the method. For technical details not described in detail in this device embodiment, refer to the video linkage monitoring method provided in the embodiments of the present invention.
  • an embodiment of the present invention provides a monitoring server.
  • the monitoring server 500 includes: one or more processors 51 and a memory 52. Among them, one processor 51 is taken as an example in FIG. 5.
  • The processor 51 and the memory 52 may be connected through a bus or in other manners; in FIG. 5, connection through a bus is taken as an example.
  • The memory 52, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the video linkage monitoring method in the embodiments of the present invention.
  • The processor 51 executes the various functional applications and data processing of the video linkage monitoring device by running the non-volatile software programs, instructions, and modules stored in the memory 52, that is, it implements the video linkage monitoring method of the foregoing method embodiment and the functions of the modules of the foregoing device embodiment.
  • the memory 52 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
  • the memory 52 may optionally include memory remotely set with respect to the processor 51, and these remote memories may be connected to the processor 51 through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • The program instructions/modules are stored in the memory 52 and, when executed by the one or more processors 51, perform the video linkage monitoring method in any of the foregoing method embodiments, for example, executing the steps of FIG. 2 described above; the functions of the modules described in FIG. 3 and FIG. 4 can also be realized.
  • An embodiment of the present invention also provides a non-volatile computer storage medium.
  • The computer storage medium stores computer-executable instructions which are executed by one or more processors, such as the processor 51 in FIG. 5, enabling the one or more processors to execute the video linkage monitoring method in any of the foregoing method embodiments, for example, executing the steps of FIG. 2 described above; the functions of the modules described in FIG. 3 and FIG. 4 may also be implemented.
  • The device embodiments described above are only illustrative. The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they can be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objective of the solution of this embodiment.


Abstract

The present invention relates to the technical field of video surveillance, and in particular to a video linkage monitoring method, a monitoring server, and a video linkage monitoring system. The method includes: detecting whether target video data collected by a target camera matches a preset video-anomaly detection model; if the target video data matches the preset video-anomaly detection model, detecting a target person from the target video data and determining whether the target video data contains a frontal image of the target person, the frontal image including a face image of the target person, the target person being located in a preset area; and, if the target video data does not contain a frontal image of the target person, identifying an additional camera arranged opposite the target camera and controlling the additional camera to track the person and capture a frontal image of the person. The method can therefore capture both the front and the back of the target person from all directions, which facilitates subsequent analysis of the target person and avoids unnecessary trouble.

Description

Method and device for three-dimensional modeling
This application claims priority to Chinese patent application No. 201811109513.0, filed with the Chinese Patent Office on September 21, 2018 and entitled "Video linkage monitoring method, monitoring server, and video linkage monitoring system", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the technical field of video surveillance, and in particular to a video linkage monitoring method, a monitoring server, and a video linkage monitoring system.
Background
With the development of population, the economy, and society, people have become more conscious of security. Video surveillance technology plays an increasingly important role in the security field and, being intuitive, convenient, and rich in information, is widely used in city-wide "Skynet" surveillance, traffic, civil security, and other areas.
In the course of implementing the present invention, the inventor found that the conventional technology has at least the following problem: during video surveillance, for some important video frames the camera often captures only the back of the target person, which causes unnecessary trouble when analyzing those important frames.
Summary
One object of embodiments of the present invention is to provide a video linkage monitoring method, a monitoring server, and a video linkage monitoring system capable of capturing both the front and the back of a target person from all directions.
To solve the above technical problem, embodiments of the present invention provide the following technical solutions:
In a first aspect, an embodiment of the present invention provides a video linkage monitoring method applied to a monitoring server. The monitoring server communicates with a plurality of cameras, each of which is installed at a different position in a preset area and is used to capture area images of the preset area from different angles. The method includes:
detecting whether target video data collected by a target camera matches a preset video-anomaly detection model;
if the target video data matches the preset video-anomaly detection model, detecting a target person from the target video data and determining whether the target video data contains a frontal image of the target person, the frontal image including a face image of the target person, the target person being located in the preset area;
if the target video data does not contain a frontal image of the target person, identifying an additional camera arranged opposite the target camera and controlling the additional camera to track the person and capture a frontal image of the person.
Optionally, the method further includes:
if the target video data does not match the preset video-anomaly detection model, discarding the target video data and continuing to detect whether the next target video data collected by the target camera matches the preset video-anomaly detection model.
Optionally, the method further includes:
if the target video data contains a frontal image of the target person, controlling the target camera to track the target person.
Optionally, identifying the additional camera arranged opposite the target camera includes:
obtaining the illumination intensity in the preset area;
determining whether the illumination intensity is greater than a preset intensity threshold;
if it is greater, obtaining the minimum-illuminance ratings of all additional cameras arranged opposite the target camera;
traversing the minimum-illuminance ratings of all the additional cameras and selecting the additional camera with the lowest rating as the camera that tracks the person and captures the person's frontal image;
if it is less, identifying an additional camera arranged opposite the target camera.
Optionally, the method further includes:
obtaining a training video data set, the training video data set including video data of multiple abnormal scenes;
preprocessing the video data of the multiple abnormal scenes;
processing the preprocessed video data with a convolution algorithm to build the video-anomaly detection model.
In a second aspect, an embodiment of the present invention provides a video linkage monitoring device applied to a monitoring server. The monitoring server communicates with a plurality of cameras, each of which is installed at a different position in a preset area and is used to capture area images of the preset area from different angles. The device includes:
a first detection module configured to detect whether target video data collected by a target camera matches a preset video-anomaly detection model;
a second detection module configured to, if the target video data matches the preset video-anomaly detection model, detect a target person from the target video data and determine whether the target video data contains a frontal image of the target person, the frontal image including a face image of the target person, the target person being located in the preset area;
a third detection module configured to, if the target video data does not contain a frontal image of the target person, identify an additional camera arranged opposite the target camera and control the additional camera to track the person and capture a frontal image of the person.
Optionally, the device further includes:
a discarding module configured to, if the target video data does not match the preset video-anomaly detection model, discard the target video data and continue to detect whether the next target video data collected by the target camera matches the preset video-anomaly detection model.
In a third aspect, an embodiment of the present invention provides a monitoring server, including:
at least one processor; and
a memory communicatively connected to the at least one processor, the memory storing instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform any of the video linkage monitoring methods described above.
In a fourth aspect, an embodiment of the present invention provides a video linkage monitoring system, including:
several cameras; and
the monitoring server described above, the monitoring server communicating with each of the cameras.
In a fifth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium storing computer-executable instructions that cause a monitoring server to perform any of the video linkage monitoring methods described above.
In a sixth aspect, an embodiment of the present invention provides a computer program product including a computer program stored on a non-volatile computer-readable storage medium, the computer program including program instructions that, when executed by a monitoring server, cause the monitoring server to perform any of the video linkage monitoring methods described above.
In the video linkage monitoring method, monitoring server, and video linkage monitoring system provided by the embodiments of the present invention, first, it is detected whether target video data collected by a target camera matches a preset video-anomaly detection model; second, if the target video data matches the preset video-anomaly detection model, a target person is detected from the target video data and it is determined whether the target video data contains a frontal image of the target person, the frontal image including a face image of the target person, the target person being located in a preset area; third, if the target video data does not contain a frontal image of the target person, an additional camera arranged opposite the target camera is identified and controlled to track the person and capture a frontal image of the person. The method can therefore capture both the front and the back of the target person from all directions, which facilitates subsequent analysis of the target person and avoids unnecessary trouble.
Brief Description of the Drawings
One or more embodiments are exemplarily described with reference to the corresponding figures, and these exemplary descriptions do not limit the embodiments. Elements with the same reference numerals in the figures denote similar elements, and unless otherwise stated, the figures are not drawn to scale.
FIG. 1 is a schematic structural diagram of a video linkage monitoring system provided by an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a video linkage monitoring method provided by an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a video linkage monitoring device provided by an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a video linkage monitoring device provided by another embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a monitoring server provided by an embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.
The video linkage monitoring method of the embodiments of the present invention can be executed on any suitable type of electronic device with computing capability, such as a monitoring server, a desktop computer, a smartphone, a tablet, or other electronic products. The monitoring server here may be one physical server or one logical server virtualized from multiple physical servers. The server may also be a server cluster composed of multiple interconnectable servers, with the functional modules distributed across the servers of the cluster.
The video linkage monitoring device of the embodiments of the present invention may be provided independently in the above client as a software system, or may be integrated into the processor as one of its functional modules, to execute the video linkage monitoring method of the embodiments of the present invention.
Please refer to FIG. 1, which is a schematic structural diagram of a video linkage monitoring system provided by an embodiment of the present invention. As shown in FIG. 1, the video surveillance system 100 includes several cameras 11, a monitoring server 12, and a mobile terminal 13.
The cameras 11 are installed in a preset area and are used to collect video data. It can be understood that the cameras 11 are fixedly installed in the preset area according to a preset pattern so as to cover the preset area as completely as possible; for example, the cameras are arranged on the walls, ground, roofs, or object surfaces of the preset area, taking into account the area's specific structure and occlusions.
There are multiple cameras. The cameras form a camera group used to monitor a particular surveillance region, each camera being installed at a different position in the preset area. Each camera captures area images of the preset area from a different angle; for example, in some embodiments the camera group can photograph objects in the preset area through a full 360 degrees.
In general, every camera in a camera group uploads its collected video data to the same monitoring server. Different surveillance regions correspond to different monitoring servers, and the monitoring servers of administrators managing different regions do not share surveillance video with each other.
To enlarge the shooting angle and range of the cameras 11, reduce the number of cameras deployed, and lower system cost, a camera 11 may be combined with a multi-dimensional rotating motor to capture high-definition video frames of the preset area in real time. Alternatively, an integrated camera 11 may replace the combination of a camera 11 and a rotating motor, such as a dome-type integrated camera, a fast ball-type integrated camera, an integrated high-definition camera with a pan-tilt unit, or an integrated camera with a built-in lens; these integrated cameras support auto-focus. Preferably, a high-definition camera that is waterproof, compact, high-resolution, long-lived, and equipped with a universal communication interface is selected.
In some embodiments, the camera 11 may be a network camera, an infrared high-definition camera, a high-speed dome, a low-illuminance camera, or the like. The camera 11 has a built-in network encoding module.
A camera includes a lens, an image sensor, a sound sensor, an A/D converter, a controller, a control interface, a network interface, and so on. The camera may be used to collect a video data signal, the video data signal being an analog video signal. The camera mainly consists of a CMOS photosensitive component and peripheral circuitry, and converts the optical signal passed in by the lens into an electrical signal.
Specifically, the network encoding module has a built-in embedded chip that converts the video data signal (an analog video signal) collected by the camera into a digital signal; the embedded chip can also compress the digital signal. Specifically, the embedded chip may be a Hi3516 high-efficiency compression chip.
The camera 11 sends the compressed digital signal to the monitoring server 12 over a Wi-Fi network, and the monitoring server 12 may forward the compressed digital signal to the mobile terminal 13. The camera 11 also includes an infrared sensor that gives it night-vision capability. Users on the network can watch the camera images on the web server directly with a browser or access them directly through a mobile terminal app. The camera 11 makes surveillance, especially remote surveillance, simpler to realize: it offers simpler construction and maintenance, better audio support, better alarm-linkage support, more flexible recording storage, a richer choice of products, higher-definition video, and more complete surveillance management, and it can be connected directly to the local area network. It is the data-collection and optical-to-electrical conversion end, and the data provider of the whole network.
The monitoring server 12 is a device that provides computing services. Its composition includes a processor, a hard disk, memory, a system bus, and so on, similar to a general-purpose computer architecture. The monitoring server is responsible for registration and login of the mobile terminal app, user management, device management, and similar functions. It is also responsible for storing the cameras' video data, and it records the IP addresses and ports of the mobile terminals and cameras and transmits each party's IP and port to the other, so that the camera side and the mobile side know each other's IP and port and can establish a communication connection through them. The monitoring server obtains the cameras' video data and analyzes it with an artificial intelligence module; when abnormal video data is detected, it sends alarm information to notify the mobile terminal.
Specifically, the monitoring server 12 includes a processor that contains an artificial intelligence module. The artificial intelligence module is responsible for real-time analysis of the video data, detecting abnormal moments and notifying the mobile terminal. Its implementation divides into two parts: building the video-anomaly detection model and applying it. Building the model has three parts. Part one: the video data set for training the video-anomaly detection model, used for the subsequent machine training and learning. It includes video data of the various abnormal scenes that need to be detected, such as vehicles frequently weaving between lanes, robbery, tailing and pickpocketing, fights, brawls, screams, crying, smoke, and noisy scenes; the training video data set covers most application scenarios. Part two: preprocessing of the video data set; ten images are extracted from each second of video, and each image is converted to 255 pixels by 255 pixels. Part three: building the training model, using an artificial intelligence convolution algorithm and Python code. The model includes an input layer, hidden layers, and an output layer: the input layer receives the preprocessed images, the hidden layers compute the features of the input images, and the output layer uses the features computed by the hidden layers to output whether the video contains an abnormal scene. The training process is as follows: label normal videos 0 and abnormal videos 1, feed the abnormal and normal videos into the training system together, and, through data-set preprocessing and the computation of the training model, distinguish abnormal videos from normal ones. Repeat the above steps, stop training when the system's accuracy exceeds 90%, and save the model. After the model is built, transfer it to the server side, replace the data set with camera video, and run the model to detect whether the camera video contains anomalies.
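The training loop just described (label normal clips 0 and abnormal clips 1, repeat until accuracy exceeds 90%, then save the model) can be sketched schematically. The classifier below is a deliberately simplified stand-in that tunes a single threshold on a scalar feature; it is not the patent's convolutional network, and every name and parameter is an illustrative assumption.

```python
def train_until_accurate(samples, labels, target_acc=0.9):
    """Schematic stand-in for the training loop: sweep candidate thresholds
    on a scalar feature (e.g. mean frame intensity) and stop as soon as
    classification accuracy reaches target_acc, mirroring the
    "train until accuracy exceeds 90%, then save" rule in the text.

    samples: list of scalar features, one per clip.
    labels:  0 = normal clip, 1 = abnormal clip.
    Returns (threshold, accuracy) for the best threshold found.
    """
    best_t, best_acc = None, 0.0
    for t in sorted(set(samples)):
        # Predict "abnormal" when the feature is at or above the threshold.
        acc = sum((s >= t) == bool(y) for s, y in zip(samples, labels)) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
        if best_acc >= target_acc:
            break  # accuracy target met: stop training and keep this model
    return best_t, best_acc
```

In the patent's pipeline this role is played by a CNN with input, hidden, and output layers; the sketch only illustrates the label-0/label-1 framing and the accuracy-based stopping rule.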
Please refer to FIG. 2, which is a schematic flowchart of a video linkage monitoring method provided by an embodiment of the present invention. As shown in FIG. 2, the video linkage monitoring method S200 includes:
S21: detecting whether target video data collected by a target camera matches a preset video-anomaly detection model.
In this embodiment, the target camera is any camera in the camera group. It should be understood that the word "target" in "target camera" only distinguishes it from the other cameras: when the monitoring server selects the video data of a particular camera in the camera group for detection and analysis, that camera becomes the target camera, and the video data it collects becomes the target video data. "Target" does not limit the scope of protection of the present invention; it is used only for distinction.
The video-anomaly detection model is built in advance by the administrator and stored on the monitoring server; it is used to evaluate whether the target video data requires targeted processing.
To build the video-anomaly detection model, first, the monitoring server obtains a training video data set containing video data of multiple abnormal scenes. For example, the video data set for training the video-anomaly detection model, used for the subsequent machine training and learning, includes video data of the various abnormal scenes that need to be detected, such as vehicles frequently weaving between lanes, robbery, tailing and pickpocketing, fights, brawls, screams, crying, smoke, and noisy scenes. The training video data set covers most application scenarios.
Second, the monitoring server preprocesses the video data of the multiple abnormal scenes; for example, ten images are extracted from each second of video, and each image is converted to 255 pixels by 255 pixels.
Third, the monitoring server processes the preprocessed video data with a convolution algorithm to build the video-anomaly detection model. For example, the training model is built with an artificial intelligence convolution algorithm and Python code. The model includes an input layer, hidden layers, and an output layer: the input layer receives the preprocessed images, the hidden layers compute the features of the input images, and the output layer uses the features computed by the hidden layers to output whether the video contains an abnormal scene. The training process is as follows: label normal videos 0 and abnormal videos 1, feed the abnormal and normal videos into the training system together, and, through data-set preprocessing and the computation of the training model, distinguish abnormal videos from normal ones. Repeat the above steps, stop training when the system's accuracy exceeds 90%, and save the model. After the model is built, transfer it to the server side, replace the data set with camera video, and run the model to detect whether the camera video contains anomalies.
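The preprocessing step above (sampling ten frames per second and rescaling each to 255 × 255 pixels) can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the function name, the nearest-neighbour resizing, and the normalisation to [0, 1] are all choices made here for the example.

```python
import numpy as np

def preprocess_clip(frames, fps, samples_per_second=10, size=255):
    """Sample roughly `samples_per_second` frames from each second of video
    and resize every frame to `size` x `size` via nearest-neighbour lookup.

    frames: ndarray of shape (n_frames, height, width), grayscale for brevity.
    Returns a float32 array of shape (n_sampled, size, size) in [0, 1].
    """
    step = max(1, int(round(fps / samples_per_second)))
    sampled = frames[::step]                      # e.g. every 3rd frame at 30 fps
    n, h, w = sampled.shape
    rows = np.arange(size) * h // size            # nearest source row per output row
    cols = np.arange(size) * w // size            # nearest source column per output column
    resized = sampled[:, rows][:, :, cols]        # advanced indexing does the resize
    return resized.astype(np.float32) / 255.0     # normalise pixel values to [0, 1]
```

At 30 fps this keeps every third frame, so one second of video yields the ten 255 × 255 images the text describes.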
S22: if the target video data matches the preset video-anomaly detection model, detecting a target person from the target video data and determining whether the target video data contains a frontal image of the target person, the frontal image including a face image of the target person, the target person being located in the preset area.
In this embodiment, when the target video data matches the preset video-anomaly detection model, the monitoring server detects the target person from the target video data using an image analysis algorithm. For example, person A tails person B, waiting for a chance to pickpocket B's handbag; a camera captures A's tailing behavior and sends the video data containing it to the monitoring server. The monitoring server detects the tailing behavior, treats that video data as the target video data, and identifies A as the target person from the target video data using the image analysis algorithm.
Then, once the target person is detected, the monitoring server determines whether face feature points associated with the target person exist in the target video data. If they exist, the target video data is considered to contain a frontal image of the target person; if not, the target video data is considered not to contain a frontal image of the target person and to contain only a back view of the target person. Continuing the example above, if the monitoring server detects A's face image in the target video data, the target camera is considered to have captured A's frontal image; if the monitoring server does not detect A's face image in the target video data, the target camera is considered to have captured A's back view.
S23: if the target video data does not contain a frontal image of the target person, identifying an additional camera arranged opposite the target camera and controlling the additional camera to track the person and capture a frontal image of the person.
In this embodiment, when the monitoring server determines that the target video data does not contain a frontal image of the target person, the monitoring server first determines the target person's current geographic position.
Next, based on the target person's current geographic position, the monitoring server identifies all additional cameras covering that position, determines the installation positions of all those additional cameras, and from them selects the additional camera whose installation position is opposite the target camera's installation position.
Then the monitoring server controls the additional camera arranged opposite the target camera's installation position to track the person and capture the person's frontal image.
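One way to realise "the additional camera arranged opposite the target camera" is to compare installation bearings and pick the candidate whose viewing direction differs from the target camera's by the angle closest to 180 degrees. The patent does not specify the geometry, so the function name, the bearing representation, and the candidate format below are illustrative assumptions.

```python
def find_opposite_camera(target_bearing, candidates):
    """Return the id of the candidate camera installed most nearly opposite
    the target camera, i.e. whose bearing differs from target_bearing by
    the angle closest to 180 degrees.

    candidates: list of (camera_id, bearing_in_degrees) pairs for the
    additional cameras already known to cover the person's position.
    """
    def opposition(bearing):
        # Angular difference folded into [0, 180]; 180 means directly opposite.
        diff = abs((bearing - target_bearing) % 360.0)
        return min(diff, 360.0 - diff)

    best_id, _ = max(candidates, key=lambda c: opposition(c[1]))
    return best_id
```

A production system would also weigh distance and field of view; this sketch isolates only the "opposite arrangement" criterion named in the text.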
In this embodiment, if the target video data does not match the preset video-anomaly detection model, the target video data is discarded and the server continues to detect whether the next target video data collected by the target camera matches the preset video-anomaly detection model.
If the target video data contains a frontal image of the target person, the target camera is controlled to track the target person.
The method provided by this embodiment of the present invention can therefore capture both the front and the back of the target person from all directions, which facilitates subsequent analysis of the target person and avoids unnecessary trouble.
In practice, many serious incidents occur in dark places with weak lighting. To guard against offenders and obtain high-definition face images of them, in some embodiments, when the monitoring server identifies the additional camera arranged opposite the target camera, the monitoring server first obtains the illumination intensity in the preset area; for example, a light sensor installed in the preset area measures the illumination intensity and transmits it to the monitoring server.
Next, the monitoring server determines whether the illumination intensity is greater than a preset intensity threshold. If it is greater, the server obtains the minimum-illuminance ratings of all additional cameras arranged opposite the target camera and traverses those ratings to select the additional camera with the lowest rating as the camera that tracks the person and captures the person's frontal image, so that the monitoring server obtains as high-definition a frontal image of the person as possible. If it is less, the server identifies an additional camera arranged opposite the target camera.
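The selection rule just described can be condensed into a small helper. The dictionary-based interface, the lux units, and the function name are assumptions made for illustration; the text only specifies the comparison against a threshold and the traversal of minimum-illuminance ratings.

```python
def pick_tracking_camera(ambient_lux, threshold_lux, candidates, fallback_id):
    """Mirror the selection logic in the text: if ambient illumination
    exceeds the threshold, traverse the minimum-illuminance ratings of the
    oppositely arranged cameras and pick the one with the lowest rating
    (i.e. the best low-light sensitivity); otherwise simply fall back to
    an oppositely arranged camera.

    candidates: dict mapping camera_id -> minimum-illuminance rating in lux.
    """
    if ambient_lux > threshold_lux:
        return min(candidates, key=candidates.get)  # lowest rating wins
    return fallback_id
```

For instance, among cameras rated 0.5 lx, 0.01 lx, and 1.2 lx, the 0.01 lx camera would be chosen when the threshold test passes.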
In this way, a high-definition frontal image of the person can be obtained as far as possible, achieving effective video surveillance.
It should be noted that, in the above embodiments, there is not necessarily a fixed order among the steps. A person of ordinary skill in the art will understand from the description of the embodiments of the present invention that, in different embodiments, the steps may be executed in different orders, in parallel, or interchanged.
As another aspect of the embodiments of the present invention, an embodiment provides a video linkage monitoring device applied to a monitoring server. The monitoring server communicates with a plurality of cameras, each installed at a different position in a preset area and used to capture area images of the preset area from different angles. The video linkage monitoring device of this embodiment may serve as one of the software functional units: it includes several instructions stored in a memory, and the processor may access the memory and invoke the instructions for execution to complete the video linkage monitoring method described above.
Referring to FIG. 3, the video linkage monitoring device 300 includes a first detection module 31, a second detection module 32, and a third detection module 33.
The first detection module 31 is configured to detect whether target video data collected by a target camera matches a preset video-anomaly detection model.
The second detection module 32 is configured to, if the target video data matches the preset video-anomaly detection model, detect a target person from the target video data and determine whether the target video data contains a frontal image of the target person, the frontal image including a face image of the target person, the target person being located in the preset area.
The third detection module 33 is configured to, if the target video data does not contain a frontal image of the target person, identify an additional camera arranged opposite the target camera and control the additional camera to track the person and capture a frontal image of the person.
The device provided by this embodiment of the present invention can therefore capture both the front and the back of the target person from all directions, which facilitates subsequent analysis of the target person and avoids unnecessary trouble.
In some embodiments, referring to FIG. 4, the video linkage monitoring device 300 further includes a discarding module 34. The discarding module 34 is configured to, if the target video data does not match the preset video-anomaly detection model, discard the target video data and continue to detect whether the next target video data collected by the target camera matches the preset video-anomaly detection model.
It should be noted that the above video linkage monitoring device can execute the video linkage monitoring method provided by the embodiments of the present invention and has the corresponding functional modules and beneficial effects for executing the method. For technical details not described in detail in the device embodiment, refer to the video linkage monitoring method provided by the embodiments of the present invention.
As yet another aspect of the embodiments of the present invention, an embodiment provides a monitoring server. As shown in FIG. 5, the monitoring server 500 includes one or more processors 51 and a memory 52; one processor 51 is taken as an example in FIG. 5.
The processor 51 and the memory 52 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 5.
As a non-volatile computer-readable storage medium, the memory 52 may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the video linkage monitoring method in the embodiments of the present invention. By running the non-volatile software programs, instructions, and modules stored in the memory 52, the processor 51 executes the various functional applications and data processing of the video linkage monitoring device, i.e., realizes the video linkage monitoring method of the above method embodiments and the functions of the modules of the above device embodiments.
The memory 52 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 52 optionally includes memory arranged remotely from the processor 51, and these remote memories may be connected to the processor 51 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions/modules are stored in the memory 52 and, when executed by the one or more processors 51, perform the video linkage monitoring method of any of the above method embodiments, for example, the steps of FIG. 2 described above; the functions of the modules described in FIG. 3 and FIG. 4 may also be realized.
An embodiment of the present invention also provides a non-volatile computer storage medium. The computer storage medium stores computer-executable instructions that are executed by one or more processors, such as the processor 51 in FIG. 5, to enable the one or more processors to perform the video linkage monitoring method of any of the above method embodiments, for example, to execute the steps shown in FIG. 2 described above; the functions of the modules described in FIG. 3 and FIG. 4 may also be implemented.
The device or apparatus embodiments described above are merely illustrative. The unit modules described as separate components may or may not be physically separate, and the components shown as module units may or may not be physical units; they may be located in one place or distributed across multiple network module units. Some or all of the modules may be selected according to actual needs to achieve the objective of the solution of this embodiment.
From the description of the above embodiments, those skilled in the art can clearly understand that the embodiments can be implemented by software plus a general-purpose hardware platform, or, of course, by hardware. Based on this understanding, the above technical solutions, in essence or in the part that contributes to the related art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a ROM/RAM, magnetic disk, or optical disc, and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Under the idea of the present invention, the technical features in the above embodiments or in different embodiments may also be combined, the steps may be implemented in any order, and many other variations of the different aspects of the present invention as described above exist; for brevity, they are not provided in detail. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of the technical features; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

  1. A video linkage monitoring method applied to a monitoring server, the monitoring server communicating with a plurality of cameras, each of the cameras being installed at a different position in a preset area and being used to capture area images of the preset area from different angles, wherein the method comprises:
    detecting whether target video data collected by a target camera matches a preset video-anomaly detection model;
    if the target video data matches the preset video-anomaly detection model, detecting a target person from the target video data and determining whether the target video data contains a frontal image of the target person, the frontal image comprising a face image of the target person, the target person being located in the preset area;
    if the target video data does not contain a frontal image of the target person, identifying an additional camera arranged opposite the target camera and controlling the additional camera to track the person and capture a frontal image of the person.
  2. The method according to claim 1, wherein the method further comprises:
    if the target video data does not match the preset video-anomaly detection model, discarding the target video data and continuing to detect whether the next target video data collected by the target camera matches the preset video-anomaly detection model.
  3. The method according to claim 1, wherein the method further comprises:
    if the target video data contains a frontal image of the target person, controlling the target camera to track the target person.
  4. The method according to any one of claims 1 to 3, wherein identifying the additional camera arranged opposite the target camera comprises:
    obtaining the illumination intensity in the preset area;
    determining whether the illumination intensity is greater than a preset intensity threshold;
    if it is greater, obtaining the minimum-illuminance ratings of all additional cameras arranged opposite the target camera;
    traversing the minimum-illuminance ratings of all the additional cameras and selecting the additional camera with the lowest rating as the camera that tracks the person and captures the person's frontal image;
    if it is less, identifying an additional camera arranged opposite the target camera.
  5. The method according to claim 1, wherein the method further comprises:
    obtaining a training video data set, the training video data set comprising video data of multiple abnormal scenes;
    preprocessing the video data of the multiple abnormal scenes;
    processing the preprocessed video data with a convolution algorithm to build the video-anomaly detection model.
  6. A video linkage monitoring device applied to a monitoring server, the monitoring server communicating with a plurality of cameras, each of the cameras being installed at a different position in a preset area and being used to capture area images of the preset area from different angles, wherein the device comprises:
    a first detection module configured to detect whether target video data collected by a target camera matches a preset video-anomaly detection model;
    a second detection module configured to, if the target video data matches the preset video-anomaly detection model, detect a target person from the target video data and determine whether the target video data contains a frontal image of the target person, the frontal image comprising a face image of the target person, the target person being located in the preset area;
    a third detection module configured to, if the target video data does not contain a frontal image of the target person, identify an additional camera arranged opposite the target camera and control the additional camera to track the person and capture a frontal image of the person.
  7. The device according to claim 6, wherein the device further comprises:
    a discarding module configured to, if the target video data does not match the preset video-anomaly detection model, discard the target video data and continue to detect whether the next target video data collected by the target camera matches the preset video-anomaly detection model.
  8. A monitoring server, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the video linkage monitoring method according to any one of claims 1 to 5.
  9. A video linkage monitoring system, comprising:
    several cameras; and
    the monitoring server according to claim 8, the monitoring server communicating with each of the cameras.
  10. A non-transitory computer-readable storage medium storing computer-executable instructions for causing a monitoring server to perform the video linkage monitoring method according to any one of claims 1 to 5.
PCT/CN2019/103833 2018-09-21 2019-08-30 Method and device for three-dimensional modeling WO2020057355A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811109513.0A CN109241933A (zh) 2018-09-21 2018-09-21 Video linkage monitoring method, monitoring server, and video linkage monitoring system
CN201811109513.0 2018-09-21

Publications (1)

Publication Number Publication Date
WO2020057355A1 true WO2020057355A1 (zh) 2020-03-26

Family

ID=65056678

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103833 WO2020057355A1 (zh) 2018-09-21 2019-08-30 Method and device for three-dimensional modeling

Country Status (2)

Country Link
CN (1) CN109241933A (zh)
WO (1) WO2020057355A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553264A (zh) * 2020-04-27 2020-08-18 中国科学技术大学先进技术研究院 Campus unsafe-behavior detection and early-warning method suitable for primary and secondary school students
CN112329849A (zh) * 2020-11-04 2021-02-05 中冶赛迪重庆信息技术有限公司 Machine-vision-based scrap steel yard unloading-state recognition method, medium, and terminal
CN114640764A (zh) * 2022-03-01 2022-06-17 深圳市安软慧视科技有限公司 Target detection method and system based on a deployment-control platform, and related devices

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109241933A (zh) 2018-09-21 2019-01-18 深圳市九洲电器有限公司 Video linkage monitoring method, monitoring server, and video linkage monitoring system
CN109922311B (zh) * 2019-02-12 2022-01-28 平安科技(深圳)有限公司 Monitoring method, device, terminal, and storage medium based on audio-video linkage
CN111914588A (zh) * 2019-05-07 2020-11-10 杭州海康威视数字技术股份有限公司 Method and system for monitoring
CN110210461B (zh) * 2019-06-27 2021-03-05 北京澎思科技有限公司 Multi-view collaborative abnormal-behavior detection method based on a camera grid
CN112492261A (zh) * 2019-09-12 2021-03-12 华为技术有限公司 Tracking shooting method and device, and monitoring system
CN110866692A (zh) * 2019-11-14 2020-03-06 北京明略软件系统有限公司 Early-warning information generation method, generation device, and readable storage medium
CN110929633A (zh) * 2019-11-19 2020-03-27 公安部第三研究所 Method for detecting anomalies of tobacco-related vehicles based on a small data set
CN111242008B (zh) * 2020-01-10 2024-04-12 河南讯飞智元信息科技有限公司 Fight-event detection method, related device, and readable storage medium
CN113552123A (zh) * 2020-04-17 2021-10-26 华为技术有限公司 Visual detection method and visual detection device
CN111931564A (zh) * 2020-06-29 2020-11-13 北京大学 Target tracking method and device based on face recognition
CN112034758B (zh) * 2020-08-31 2021-11-30 成都市达岸信息技术有限公司 Low-power multifunctional IoT security-control monitoring device and system
CN112437255A (zh) * 2020-11-04 2021-03-02 中广核工程有限公司 Intelligent video surveillance system and method for a nuclear power plant
CN116996665B (zh) * 2023-09-28 2024-01-26 深圳天健电子科技有限公司 IoT-based intelligent monitoring method, device, equipment, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000092368A (ja) * 1998-09-09 2000-03-31 Canon Inc Camera control device and computer-readable storage medium
CN102254169A (zh) * 2011-08-23 2011-11-23 东北大学秦皇岛分校 Multi-camera-based face recognition method and system
CN103237192A (zh) * 2012-08-20 2013-08-07 苏州大学 Intelligent video surveillance system based on multi-camera data fusion
CN103268680A (zh) * 2013-05-29 2013-08-28 北京航空航天大学 Smart home monitoring and anti-theft system
CN109241933A (zh) * 2018-09-21 2019-01-18 深圳市九洲电器有限公司 Video linkage monitoring method, monitoring server, and video linkage monitoring system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101572804B (zh) * 2009-03-30 2012-03-21 浙江大学 Multi-camera intelligent control method and device
CN102480593B (zh) * 2010-11-25 2014-04-16 杭州华三通信技术有限公司 Dual-lens camera switching method and device
CN101999888B (zh) * 2010-12-01 2012-07-25 北京航空航天大学 Epidemic prevention and control system for detecting and searching for persons with abnormal body temperature
CN104079885A (zh) * 2014-07-07 2014-10-01 广州美电贝尔电业科技有限公司 Unattended linkage-tracking network camera method and device
WO2016153479A1 (en) * 2015-03-23 2016-09-29 Longsand Limited Scan face of video feed
CN105913037A (zh) * 2016-04-26 2016-08-31 广东技术师范学院 Monitoring and tracking system based on face recognition and radio-frequency identification
CN107592507A (zh) * 2017-09-29 2018-01-16 深圳市置辰海信科技有限公司 Method for automatically tracking and capturing high-definition frontal face photos
CN108446630B (zh) * 2018-03-20 2019-12-31 平安科技(深圳)有限公司 Airport runway intelligent monitoring method, application server, and computer storage medium
CN108419014B (zh) * 2018-03-20 2020-02-21 北京天睿空间科技股份有限公司 Method for capturing faces through linkage of a panoramic camera and multiple snapshot cameras

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000092368A (ja) * 1998-09-09 2000-03-31 Canon Inc Camera control device and computer-readable storage medium
CN102254169A (zh) * 2011-08-23 2011-11-23 东北大学秦皇岛分校 Multi-camera-based face recognition method and system
CN103237192A (zh) * 2012-08-20 2013-08-07 苏州大学 Intelligent video surveillance system based on multi-camera data fusion
CN103268680A (zh) * 2013-05-29 2013-08-28 北京航空航天大学 Smart home monitoring and anti-theft system
CN109241933A (zh) * 2018-09-21 2019-01-18 深圳市九洲电器有限公司 Video linkage monitoring method, monitoring server, and video linkage monitoring system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553264A (zh) * 2020-04-27 2020-08-18 中国科学技术大学先进技术研究院 Campus unsafe-behavior detection and early-warning method suitable for primary and secondary school students
CN111553264B (zh) * 2020-04-27 2023-04-18 中科永安(安徽)科技有限公司 Campus unsafe-behavior detection and early-warning method suitable for primary and secondary school students
CN112329849A (zh) * 2020-11-04 2021-02-05 中冶赛迪重庆信息技术有限公司 Machine-vision-based scrap steel yard unloading-state recognition method, medium, and terminal
CN114640764A (zh) * 2022-03-01 2022-06-17 深圳市安软慧视科技有限公司 Target detection method and system based on a deployment-control platform, and related devices

Also Published As

Publication number Publication date
CN109241933A (zh) 2019-01-18

Similar Documents

Publication Publication Date Title
WO2020057355A1 (zh) Method and device for three-dimensional modeling
WO2020057346A1 (zh) Video surveillance method and device, monitoring server, and video surveillance system
WO2020057353A1 (zh) High-speed-dome-based object tracking method, monitoring server, and video surveillance system
WO2020094091A1 (zh) Image capture method, monitoring camera, and monitoring system
JP6127152B2 (ja) Security monitoring system and corresponding alarm triggering method
JP5213105B2 (ja) Video network system and video data management method
US8891826B2 (en) Image processing system, image processing method, and computer program
WO2020029921A1 (zh) Monitoring method and device
KR101425505B1 (ko) Monitoring method of an intelligent perimeter system using object recognition technology
JP2016100696A (ja) Image processing apparatus, image processing method, and image processing system
CN101860679A (zh) Image capture method and digital camera
US9521377B2 (en) Motion detection method and device using the same
US10657783B2 (en) Video surveillance method based on object detection and system thereof
CN103929592A (zh) Omnidirectional intelligent monitoring device and method
JP2014128002A (ja) Subject-area tracking device, control method therefor, and program
KR101442669B1 (ko) Method and device for identifying criminal behavior through intelligent object detection
US11509818B2 (en) Intelligent photography with machine learning
CN110191324B (zh) Image processing method, device, server, and storage medium
JP2022189835A (ja) Imaging apparatus
CN115086567A (zh) Time-lapse photography method and device
CN110267011B (zh) Image processing method, device, server, and storage medium
CN111800604A (zh) Method and device for detecting human-figure and face data based on bullet-dome camera linkage
CN114491466B (zh) Intelligent training system based on private cloud technology
CN113660455B (zh) Fall detection method, system, and terminal based on DVS data
JP6632632B2 (ja) Monitoring system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19863520

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.08.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19863520

Country of ref document: EP

Kind code of ref document: A1