CN112601021B - Method and system for processing monitoring video of network camera - Google Patents

Method and system for processing monitoring video of network camera

Info

Publication number
CN112601021B
Authority
CN
China
Prior art keywords
target object
determining
frame
video
coordinate point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011472470.XA
Other languages
Chinese (zh)
Other versions
CN112601021A (en)
Inventor
王丹星
余丹
兰雨晴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongbiao Huian Information Technology Co Ltd
Original Assignee
Zhongbiao Huian Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongbiao Huian Information Technology Co Ltd filed Critical Zhongbiao Huian Information Technology Co Ltd
Priority to CN202011472470.XA priority Critical patent/CN112601021B/en
Publication of CN112601021A publication Critical patent/CN112601021A/en
Application granted granted Critical
Publication of CN112601021B publication Critical patent/CN112601021B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661Transmitting camera control signals through networks, e.g. control via the Internet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/02Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L67/025Protocols based on web technology, e.g. hypertext transfer protocol [HTTP] for remote control or remote monitoring of applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/52Network services specially adapted for the location of the user terminal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Alarm Systems (AREA)

Abstract

The invention provides a processing method and a processing system for a network camera monitoring video. The method comprises the following steps: acquiring a target image of a target object; acquiring monitoring videos shot by a plurality of network cameras respectively; performing image analysis on each monitoring video to obtain a plurality of video frames containing a target image of the target object; and splicing the video frames in sequence according to the sequence of the shooting time of the video frames to form a monitoring video frame set of the target object.

Description

Method and system for processing monitoring video of network camera
Technical Field
The invention relates to the technical field of monitoring, in particular to a method and a system for processing a monitoring video of a network camera.
Background
A network camera is a new generation of camera produced by combining a traditional camera with web technology. It can transmit video over the web to the other side of the world, and a remote viewer can monitor that video with nothing more than a standard web browser (such as Microsoft IE or Netscape), without any specialized software. A network camera is generally composed of a lens, an image sensor, a sound sensor, an A/D converter, a controller, a network server, an external alarm, a control interface, and the like.
In the prior art, a user can perform real-time video monitoring of a particular object, such as a person, a vehicle, or an animal, by using a network camera. However, because the object moves, different network cameras must monitor it as it passes through different areas, so the object appears in the surveillance videos of multiple network cameras. A user who wants to view as much footage containing the object as possible must search the videos manually, which is very inconvenient.
Disclosure of Invention
The embodiment of the invention provides a method and a system for processing a monitoring video of a network camera.
The embodiment of the invention provides a processing method for a network camera monitoring video, which comprises the following steps:
acquiring a target image of a target object;
acquiring monitoring videos shot by a plurality of network cameras respectively;
performing image analysis on each monitoring video to obtain a plurality of video frames containing a target image of the target object;
and splicing the video frames in sequence according to the sequence of the shooting time of the video frames to form a monitoring video frame set of the target object.
In one embodiment, the acquiring the monitoring videos shot by the plurality of webcams includes:
determining a first surveillance video comprising the target image;
determining the moving speed of the target object according to the first monitoring video;
determining a geographical range in which the target object may appear in a preset time period after the current time according to the moving speed of the target object;
and acquiring the monitoring videos shot by the network cameras in the geographic range respectively as the monitoring videos shot by the network cameras respectively.
In one embodiment, after the forming the set of surveillance video frames of the target object, the method further includes:
and marking the target object on each video frame in the monitoring video frame set.
In one embodiment, the target image comprises any one of a left view, a right view, a front view, a rear view, a top view, a bottom view and a perspective view.
The embodiment of the invention provides a processing system for a network camera monitoring video, which comprises:
the first acquisition module is used for acquiring a target image of a target object;
the second acquisition module is used for acquiring the monitoring videos shot by the plurality of network cameras respectively;
the analysis module is used for carrying out image analysis on each monitoring video to obtain a plurality of video frames containing the target image of the target object;
and the splicing module is used for splicing the video frames in sequence according to the sequence of the shooting time of the video frames to form a monitoring video frame set of the target object.
In one embodiment, the acquiring the monitoring videos shot by the plurality of webcams includes:
determining a first surveillance video comprising the target image;
determining the moving speed of the target object according to the first monitoring video;
determining a geographical range in which the target object may appear in a preset time period after the current time according to the moving speed of the target object;
and acquiring the monitoring videos shot by the network cameras in the geographic range respectively as the monitoring videos shot by the network cameras respectively.
In one embodiment, after the forming the set of surveillance video frames of the target object, the method further includes:
and marking the target object on each video frame in the monitoring video frame set.
In one embodiment, the target image comprises any one of a left view, a right view, a front view, a rear view, a top view, a bottom view and a perspective view.
The beneficial effects of the above technical scheme are: with the above technical solution, a monitoring video frame set of the target object can be obtained directly, and every video frame in the set contains the target object, so that the user can quickly obtain all video frames showing the target object, which makes it convenient for the user to monitor the target object.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of a processing method for monitoring videos by a network camera according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below in conjunction with the accompanying drawings; it should be understood that the preferred embodiments described here are intended only to illustrate and explain the invention, not to limit it.
An embodiment of the present invention provides a processing method for a network camera monitoring video, as shown in fig. 1, the method includes steps S1-S4:
step S1, a target image of the target object is acquired. The target image comprises any one of a left view, a right view, a front view, a rear view, a top view, a bottom view and a perspective view.
And step S2, acquiring the monitoring videos shot by the plurality of network cameras respectively.
Step S3, performing image analysis on each of the surveillance videos, and obtaining a plurality of video frames including a target image of a target object.
And step S4, splicing the video frames in sequence according to the sequence of the shooting time of the video frames to form a monitoring video frame set of the target object.
The beneficial effects of the above technical scheme are: with the above technical solution, a monitoring video frame set of the target object can be obtained directly, and every video frame in the set contains the target object, so that the user can quickly obtain all video frames showing the target object, which makes it convenient for the user to monitor the target object.
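Purely as an illustration of steps S1-S4 (not part of the patent), the pipeline can be sketched in Python. The predicate `contains_target` is a hypothetical stand-in for whatever image-analysis method is used in step S3; the byte-substring matcher in the usage example is a toy:

```python
from typing import Callable, Dict, List, Tuple

Frame = bytes  # stand-in for decoded image data

def build_target_frame_set(
    target: Frame,
    videos: Dict[str, List[Tuple[float, Frame]]],
    contains_target: Callable[[Frame, Frame], bool],
) -> List[Tuple[float, str, Frame]]:
    """Steps S2-S4: scan each camera's surveillance video for frames that
    contain the target image, then splice the hits in shooting-time order."""
    hits: List[Tuple[float, str, Frame]] = []
    for camera_id, frames in videos.items():
        for timestamp, frame in frames:            # step S3: image analysis
            if contains_target(frame, target):
                hits.append((timestamp, camera_id, frame))
    hits.sort(key=lambda h: h[0])                  # step S4: order by shooting time
    return hits

# Toy usage: a frame "contains" the target if the target bytes occur in it.
videos = {
    "cam_A": [(0.0, b"..cat.."), (1.0, b"empty")],
    "cam_B": [(0.5, b"cat here"), (2.0, b"cat again")],
}
result = build_target_frame_set(b"cat", videos, lambda frame, t: t in frame)
print([(ts, cam) for ts, cam, _ in result])
# [(0.0, 'cam_A'), (0.5, 'cam_B'), (2.0, 'cam_B')]
```

The returned list is the "monitoring video frame set": frames from all cameras, filtered to the target and merged into a single timeline.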
In one embodiment, acquiring monitoring videos shot by a plurality of network cameras respectively comprises:
determining a first surveillance video comprising a target image;
determining the moving speed of the target object according to the first monitoring video;
determining a geographical range in which the target object may appear in a preset time period after the current time according to the moving speed of the target object; for example, the position of the target object may be obtained from the position of the network camera corresponding to the first monitoring video together with the target object's position within that video; then, given the moving speed v of the target object and the preset time period t, the circular area centered on the target object's position with radius vt is the geographical range in which the target object may appear;
and acquiring the monitoring videos shot by the network cameras in the geographical range as the monitoring videos shot by the network cameras.
With this technical scheme, all the monitoring videos in which the target object may appear can be found quickly, which further improves the efficiency of finally determining the monitoring video frame set.
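A minimal sketch of this camera-selection step, assuming planar coordinates in a common local frame and hypothetical camera positions (a real deployment would use geodesic distances between latitude/longitude pairs):

```python
import math
from typing import Dict, List, Tuple

def cameras_in_range(
    target_pos: Tuple[float, float],
    speed: float,
    t: float,
    cameras: Dict[str, Tuple[float, float]],
) -> List[str]:
    """Select cameras inside the circle of radius speed * t centered on the
    target object's last known position."""
    radius = speed * t
    return [
        cam_id
        for cam_id, (x, y) in cameras.items()
        if math.hypot(x - target_pos[0], y - target_pos[1]) <= radius
    ]

# Hypothetical camera layout: only cameras within radius 2.0 * 10.0 = 20.0 match.
cams = {"cam_1": (10.0, 0.0), "cam_2": (100.0, 0.0), "cam_3": (3.0, 4.0)}
print(cameras_in_range((0.0, 0.0), speed=2.0, t=10.0, cameras=cams))
# ['cam_1', 'cam_3']
```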
In one embodiment, after forming the set of surveillance video frames of the target object, the method further includes:
and marking a target object on each video frame in the monitoring video frame set.
According to the technical scheme, the target object is marked on each video frame in the monitoring video frame set, so that a user can quickly check the target object in the video frame, and the monitoring efficiency of the user is improved.
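Marking might be as simple as burning a rectangle outline into each frame. Below is a dependency-free sketch on a grayscale frame stored as a list of pixel rows; the box coordinates are assumed to come from the image-analysis step:

```python
from typing import List

def mark_target(
    frame: List[List[int]], top: int, left: int, bottom: int, right: int, value: int = 255
) -> List[List[int]]:
    """Draw a one-pixel rectangle outline around the detected target object."""
    for x in range(left, right + 1):       # horizontal edges
        frame[top][x] = value
        frame[bottom][x] = value
    for y in range(top, bottom + 1):       # vertical edges
        frame[y][left] = value
        frame[y][right] = value
    return frame

frame = [[0] * 6 for _ in range(5)]        # a tiny all-black 6x5 frame
mark_target(frame, top=1, left=1, bottom=3, right=4)
print(frame[1])  # [0, 255, 255, 255, 255, 0]
print(frame[2])  # [0, 255, 0, 0, 255, 0]
```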
In one embodiment, the determining the moving speed of the target object according to the first surveillance video includes:
determining the moving distance of each frame of each characteristic coordinate point of a target object in a first monitoring video of the target image;
determining the moving speed of the target object according to the moving distance of each frame of each characteristic coordinate point of the target object;
and determining the geographical range in which the target object may appear in a preset time period after the current time according to the moving speed of the target object.
The determining the moving distance of each frame of each characteristic coordinate point of the target object in the first monitoring video of the target image includes:
Step A1: determining the moving distance of each characteristic coordinate point of the target object between consecutive frames of the first monitoring video of the target image by using formula (1):

S_{i,j} = \sqrt{(x_{i,j} - x_{i,j-1})^2 + (y_{i,j} - y_{i,j-1})^2}    (1)

wherein S_{i,j} represents the moving distance of the i-th characteristic coordinate point from the (j-1)-th frame to the j-th frame in the first monitoring video of the target image; (x_{i,j}, y_{i,j}) represents the coordinates of the i-th characteristic coordinate point in the j-th frame of the first monitoring video of the target image; (x_{i,j-1}, y_{i,j-1}) represents the coordinates of the i-th characteristic coordinate point in the (j-1)-th frame of the first monitoring video of the target image;
the determining the moving speed of the target object according to the moving distance of each frame of each characteristic coordinate point of the target object includes:
step A2: determining a moving speed identification value of the target object according to the moving distance of each frame of each characteristic coordinate point of the target object by using a formula (2):
V = \frac{f}{m(n-1)} \sum_{i=1}^{m} \sum_{j=2}^{n} S_{i,j}    (2)

wherein V represents the moving speed identification value of the target object; f represents the frame rate of the first surveillance video of the target image; m represents the total number of characteristic coordinate points of the target image; n represents the number of frames in the first surveillance video of the target image;
the determining, according to the moving speed of the target object, a geographical range in which the target object may appear within a preset time period after a current time includes:
step A3: determining the geographical range in which the target object may appear within a preset time period after the current time according to the moving speed identification value of the target object by using formula (3)
Figure BDA0002834446380000062
Wherein etakA determination value representing a geographical range in which the target object is likely to appear; l isi,kRepresenting the distance between the ith characteristic coordinate point of the current target object and the kth edge coordinate point of the first monitoring video; t represents a preset time period; u () represents a step function (the function value is 1 when the value in the parentheses is 0 or more, and the function value is 0 when the value in the parentheses is less than 0);
when etakWhen 0, it means that the target object does not appear in the vicinity of the k-th edge coordinate point;
when etakWhen 1, it indicates that the target object may appear in the vicinity of the k-th edge coordinate point.
The beneficial effects of the above technical scheme are: formula (1) in step A1 obtains the per-frame moving distance of the target object in the first monitoring video of the target image, laying a foundation for subsequently deriving the speed from those per-frame distances; formula (2) in step A2 then determines the moving speed of the target object by integrating over the number of frames and all feature points, which ensures the accuracy of the calculation; finally, formula (3) in step A3 determines the geographical range in which the target object may appear within a preset time period after the current time, so that tracking shooting can be performed within that range, which improves the reliability of the system.
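A numeric sketch of steps A1-A3 under stated assumptions: formula (1) is taken as the Euclidean distance of each feature point between consecutive frames, formula (2) as the frame-rate-scaled average of those distances with normalization m(n - 1), and formula (3) as the step function u applied to VT minus the distance from the nearest feature point to each edge coordinate point. These readings are reconstructions from the symbol definitions, not the patent's verbatim formulas:

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def per_frame_distances(tracks: List[List[Point]]) -> List[List[float]]:
    """Formula (1): tracks[i][j] is feature point i in frame j; returns S[i][j]."""
    return [
        [math.hypot(pts[j][0] - pts[j - 1][0], pts[j][1] - pts[j - 1][1])
         for j in range(1, len(pts))]
        for pts in tracks
    ]

def speed_value(tracks: List[List[Point]], frame_rate: float) -> float:
    """Formula (2): V = f / (m * (n - 1)) * sum of all per-frame distances."""
    S = per_frame_distances(tracks)
    m, n = len(tracks), len(tracks[0])
    return frame_rate * sum(map(sum, S)) / (m * (n - 1))

def edge_flags(points: List[Point], edges: List[Point], v: float, t: float) -> List[int]:
    """Formula (3): eta_k = u(v*t - min_i L_{i,k}) for each edge coordinate point k."""
    flags = []
    for ex, ey in edges:
        nearest = min(math.hypot(px - ex, py - ey) for px, py in points)
        flags.append(1 if v * t - nearest >= 0 else 0)
    return flags

# Two feature points tracked over three frames, each moving 1 unit per frame.
tracks = [[(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)], [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]]
v = speed_value(tracks, frame_rate=25.0)
print(v)  # 25.0 distance units per second
print(edge_flags([(2.0, 0.0), (2.0, 1.0)], [(3.0, 0.0), (50.0, 0.0)], v, t=1.0))  # [1, 0]
```

The nearby edge point (3, 0) is reachable within v*t, so its flag is 1; the far edge point (50, 0) is not, so its flag is 0.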
Corresponding to the processing method provided by the embodiment of the present invention, an embodiment of the present invention further provides a processing system for a network camera monitoring video, including:
the first acquisition module is used for acquiring a target image of a target object;
the second acquisition module is used for acquiring the monitoring videos shot by the plurality of network cameras respectively;
the analysis module is used for carrying out image analysis on each monitoring video to obtain a plurality of video frames of a target image containing a target object;
and the splicing module is used for splicing the video frames in sequence according to the sequence of the shooting time of the video frames to form a monitoring video frame set of the target object.
In one embodiment, acquiring monitoring videos shot by a plurality of network cameras respectively comprises:
determining a first surveillance video comprising a target image;
determining the moving speed of the target object according to the first monitoring video;
determining a geographical range in which the target object may appear in a preset time period after the current time according to the moving speed of the target object;
and acquiring the monitoring videos shot by the network cameras in the geographical range as the monitoring videos shot by the network cameras.
In one embodiment, after forming the set of surveillance video frames of the target object, the method further includes:
and marking a target object on each video frame in the monitoring video frame set.
In one embodiment, the target image includes any one of a left view, a right view, a front view, a rear view, a top view, a bottom view, and a perspective view.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (6)

1. A processing method for monitoring videos by a network camera is characterized by comprising the following steps:
acquiring a target image of a target object;
acquiring monitoring videos shot by a plurality of network cameras respectively;
performing image analysis on each monitoring video to obtain a plurality of video frames containing a target image of the target object;
splicing the video frames in sequence according to the sequence of the shooting time of the video frames to form a monitoring video frame set of the target object;
the acquiring of the monitoring videos shot by the plurality of network cameras includes:
determining a first surveillance video comprising the target image;
determining the moving speed of the target object according to the first monitoring video;
determining a geographical range in which the target object may appear in a preset time period after the current time according to the moving speed of the target object;
acquiring monitoring videos shot by the network cameras in the geographic range respectively to serve as the monitoring videos shot by the network cameras respectively;
wherein the determining the moving speed of the target object according to the first monitoring video comprises:
determining the moving distance of each frame of each characteristic coordinate point of a target object in a first monitoring video of the target image;
determining the moving speed of the target object according to the moving distance of each frame of each characteristic coordinate point of the target object;
determining a geographical range in which the target object may appear in a preset time period after the current time according to the moving speed of the target object;
wherein the determining the moving distance of each frame of each characteristic coordinate point of the target object in the first monitoring video of the target image comprises:
step A1: determining the moving distance of each frame of each characteristic coordinate point of the target object in the first monitoring video of the target image by using a formula (1):
S_{i,j} = \sqrt{(x_{i,j} - x_{i,j-1})^2 + (y_{i,j} - y_{i,j-1})^2}    (1)

wherein S_{i,j} represents the moving distance of the i-th characteristic coordinate point from the (j-1)-th frame to the j-th frame in the first monitoring video of the target image; (x_{i,j}, y_{i,j}) represents the coordinates of the i-th characteristic coordinate point in the j-th frame of the first monitoring video of the target image; (x_{i,j-1}, y_{i,j-1}) represents the coordinates of the i-th characteristic coordinate point in the (j-1)-th frame of the first monitoring video of the target image;
the determining the moving speed of the target object according to the moving distance of each frame of each characteristic coordinate point of the target object includes:
step A2: determining a moving speed identification value of the target object according to the moving distance of each frame of each characteristic coordinate point of the target object by using a formula (2):
V = \frac{f}{m(n-1)} \sum_{i=1}^{m} \sum_{j=2}^{n} S_{i,j}    (2)

wherein V represents the moving speed identification value of the target object; f represents the frame rate of the first surveillance video of the target image; m represents the total number of characteristic coordinate points of the target image; n represents the number of frames in the first surveillance video of the target image;
the determining, according to the moving speed of the target object, a geographical range in which the target object may appear within a preset time period after a current time includes:
step A3: determining the geographical range in which the target object may appear within a preset time period after the current time according to the moving speed identification value of the target object by using formula (3)
Figure FDA0003126463080000023
Wherein etakA determination value representing a geographical range in which the target object is likely to appear; l isi,kRepresenting the distance between the ith characteristic coordinate point of the current target object and the kth edge coordinate point of the first monitoring video; t represents a preset time period; u () represents a step function, and the function value is 1 when the value in the parentheses is 0 or more and 0 when the value in the parentheses is less than 0;
when etakWhen 0, it means that the target object does not appear in the vicinity of the k-th edge coordinate point;
when etakWhen 1, it indicates that the target object may appear in the vicinity of the k-th edge coordinate point.
2. The method of claim 1,
after the forming of the monitoring video frame set of the target object, the method further includes:
and marking the target object on each video frame in the monitoring video frame set.
3. The method of claim 1,
the target image comprises any one of a left view, a right view, a front view, a rear view, a top view, a bottom view and a perspective view.
4. A processing system for webcam surveillance video, comprising:
the first acquisition module is used for acquiring a target image of a target object;
the second acquisition module is used for acquiring the monitoring videos shot by the plurality of network cameras respectively;
the analysis module is used for carrying out image analysis on each monitoring video to obtain a plurality of video frames containing the target image of the target object;
the splicing module is used for splicing the video frames in sequence according to the sequence of the shooting time of the video frames to form a monitoring video frame set of the target object;
the acquiring of the monitoring videos shot by the plurality of network cameras includes:
determining a first surveillance video comprising the target image;
determining the moving speed of the target object according to the first monitoring video;
determining a geographical range in which the target object may appear in a preset time period after the current time according to the moving speed of the target object;
acquiring monitoring videos shot by the network cameras in the geographic range respectively to serve as the monitoring videos shot by the network cameras respectively;
wherein the determining the moving speed of the target object according to the first monitoring video comprises:
determining the moving distance of each frame of each characteristic coordinate point of a target object in a first monitoring video of the target image;
determining the moving speed of the target object according to the moving distance of each frame of each characteristic coordinate point of the target object;
determining a geographical range in which the target object may appear in a preset time period after the current time according to the moving speed of the target object;
wherein the determining the moving distance of each frame of each characteristic coordinate point of the target object in the first monitoring video of the target image comprises:
step A1: determining the moving distance of each frame of each characteristic coordinate point of the target object in the first monitoring video of the target image by using a formula (1):
S_{i,j} = \sqrt{(x_{i,j} - x_{i,j-1})^2 + (y_{i,j} - y_{i,j-1})^2}    (1)

wherein S_{i,j} represents the moving distance of the i-th characteristic coordinate point from the (j-1)-th frame to the j-th frame in the first monitoring video of the target image; (x_{i,j}, y_{i,j}) represents the coordinates of the i-th characteristic coordinate point in the j-th frame of the first monitoring video of the target image; (x_{i,j-1}, y_{i,j-1}) represents the coordinates of the i-th characteristic coordinate point in the (j-1)-th frame of the first monitoring video of the target image;
the determining the moving speed of the target object according to the moving distance of each frame of each characteristic coordinate point of the target object includes:
step A2: determining a moving speed identification value of the target object according to the moving distance of each frame of each characteristic coordinate point of the target object by using a formula (2):
V = \frac{f}{m(n-1)} \sum_{i=1}^{m} \sum_{j=2}^{n} S_{i,j}    (2)

wherein V represents the moving speed identification value of the target object; f represents the frame rate of the first surveillance video of the target image; m represents the total number of characteristic coordinate points of the target image; n represents the number of frames in the first surveillance video of the target image;
the determining, according to the moving speed of the target object, a geographical range in which the target object may appear within a preset time period after a current time includes:
step A3: determining the geographical range in which the target object may appear within a preset time period after the current time according to the moving speed identification value of the target object by using formula (3)
Figure FDA0003126463080000051
Wherein etakA determination value representing a geographical range in which the target object is likely to appear; l isi,kRepresenting the distance between the ith characteristic coordinate point of the current target object and the kth edge coordinate point of the first monitoring video; t represents a preset time period; u () represents a step function, and when the value in the parentheses is 0 or more, the function value is 1, and when the value in the parentheses is less than 0The time function value is 0;
when etakWhen 0, it means that the target object does not appear in the vicinity of the k-th edge coordinate point;
when etakWhen 1, it indicates that the target object may appear in the vicinity of the k-th edge coordinate point.
5. The system of claim 4,
after the forming of the monitoring video frame set of the target object, the method further includes:
and marking the target object on each video frame in the monitoring video frame set.
6. The system of claim 4,
the target image comprises any one of a left view, a right view, a front view, a rear view, a top view, a bottom view and a perspective view.
CN202011472470.XA 2020-12-14 2020-12-14 Method and system for processing monitoring video of network camera Active CN112601021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011472470.XA CN112601021B (en) 2020-12-14 2020-12-14 Method and system for processing monitoring video of network camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011472470.XA CN112601021B (en) 2020-12-14 2020-12-14 Method and system for processing monitoring video of network camera

Publications (2)

Publication Number Publication Date
CN112601021A CN112601021A (en) 2021-04-02
CN112601021B true CN112601021B (en) 2021-08-31

Family

ID=75195499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011472470.XA Active CN112601021B (en) 2020-12-14 2020-12-14 Method and system for processing monitoring video of network camera

Country Status (1)

Country Link
CN (1) CN112601021B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113794861A (en) * 2021-09-10 2021-12-14 王平 Monitoring system and monitoring method based on big data network

Citations (4)

Publication number Priority date Publication date Assignee Title
JP2011166305A (en) * 2010-02-05 2011-08-25 Sony Corp Image processing apparatus and imaging apparatus
CN106878666A (en) * 2015-12-10 2017-06-20 杭州海康威视数字技术股份有限公司 The methods, devices and systems of destination object are searched based on CCTV camera
CN107845264A (en) * 2017-12-06 2018-03-27 西安市交通信息中心 A kind of volume of traffic acquisition system and method based on video monitoring
CN110278413A (en) * 2019-06-28 2019-09-24 Oppo广东移动通信有限公司 Image processing method, device, server and storage medium



Similar Documents

Publication Publication Date Title
CN105654512B (en) A kind of method for tracking target and device
CN110278383B (en) Focusing method, focusing device, electronic equipment and storage medium
CN110142785A (en) A kind of crusing robot visual servo method based on target detection
CN105812746B (en) A kind of object detection method and system
CN107357286A (en) Vision positioning guider and its method
CN106529538A (en) Method and device for positioning aircraft
CN111860352B (en) Multi-lens vehicle track full tracking system and method
CN111722186B (en) Shooting method and device based on sound source localization, electronic equipment and storage medium
CN105898107B (en) A kind of target object grasp shoot method and system
CN110910459B (en) Camera device calibration method and device and calibration equipment
CN110287907B (en) Object detection method and device
CN109520500A (en) One kind is based on the matched accurate positioning of terminal shooting image and streetscape library acquisition method
WO2022048582A1 (en) Method and device for optical flow information prediction, electronic device, and storage medium
US20210182571A1 (en) Population density determination from multi-camera sourced imagery
CN110827321B (en) Multi-camera collaborative active target tracking method based on three-dimensional information
CN112207821A (en) Target searching method of visual robot and robot
CN109712188A (en) A kind of method for tracking target and device
CN115808170B (en) Indoor real-time positioning method integrating Bluetooth and video analysis
CN114943773A (en) Camera calibration method, device, equipment and storage medium
CN110602376B (en) Snapshot method and device and camera
CN113382155A (en) Automatic focusing method, device, equipment and storage medium
CN110991306B (en) Self-adaptive wide-field high-resolution intelligent sensing method and system
CN112601021B (en) Method and system for processing monitoring video of network camera
CN109460077B (en) Automatic tracking method, automatic tracking equipment and automatic tracking system
US20190272426A1 (en) Localization system and method and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant