CN111757098B - Debugging method and device after installation of intelligent face monitoring camera, camera and medium - Google Patents

Debugging method and device after installation of intelligent face monitoring camera, camera and medium

Info

Publication number
CN111757098B
CN111757098B (application CN202010619736.2A)
Authority
CN
China
Prior art keywords
camera
shooting area
initial shooting
area
monitoring picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010619736.2A
Other languages
Chinese (zh)
Other versions
CN111757098A (en)
Inventor
陈忠江 (Chen Zhongjiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010619736.2A priority Critical patent/CN111757098B/en
Publication of CN111757098A publication Critical patent/CN111757098A/en
Application granted granted Critical
Publication of CN111757098B publication Critical patent/CN111757098B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00 — Diagnosis, testing or measuring for television systems or their details
    • H04N 17/002 — Diagnosis, testing or measuring for television systems or their details for television cameras
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 — Subject of image; Context of image processing
    • G06T 2207/30232 — Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a debugging method and device for an intelligent face monitoring camera, the camera itself, and a storage medium, relating to the technical fields of camera debugging and image processing. The scheme is as follows: acquire an initial shooting area, which has been marked in advance on the camera's monitoring picture, and determine its coordinates in the camera coordinate system; then determine the position relationship of the initial shooting area within the monitoring picture from those coordinates, and adjust the camera's parameters accordingly. Because the camera debugs its own parameters automatically with the help of the pre-marked initial shooting area, no professional technician has to tune parameters by experience, which improves both debugging efficiency and precision.

Description

Debugging method and device after installation of intelligent face monitoring camera, camera and medium
Technical Field
The application relates to the field of computers, in particular to a camera debugging technology, and specifically relates to a debugging method and device after an intelligent face monitoring camera is installed, a camera and a medium.
Background
When an AI intelligent face monitoring camera is installed and deployed, its debugging requirements are higher than those of a traditional security camera, so professional debugging personnel are needed.
However, such personnel rely mainly on experience, and their experience determines the final debugging effect. The prior art therefore not only debugs inefficiently but also cannot guarantee that the AI intelligent face monitoring camera performs at its best.
Disclosure of Invention
The application provides a debugging method and device after an intelligent face monitoring camera is installed, the camera and a medium, so that debugging efficiency and precision are improved.
In a first aspect, an embodiment of the present application provides a debugging method after an intelligent face monitoring camera is installed, including:
acquiring an initial shooting area and determining its coordinates in a camera coordinate system, wherein the initial shooting area is marked in advance on a monitoring picture of the camera; and
determining, according to the coordinates, the position relationship of the initial shooting area within the monitoring picture of the camera, and adjusting parameters of the camera according to that position relationship.
In a second aspect, an embodiment of the present application further provides a debugging device after installing the intelligent face monitoring camera, including:
an initial shooting area determining module, configured to acquire an initial shooting area and determine its coordinates in a camera coordinate system, wherein the initial shooting area is marked in advance on a monitoring picture of the camera; and
a parameter adjusting module, configured to determine the position relationship of the initial shooting area within the monitoring picture of the camera according to the coordinates, and to adjust parameters of the camera according to that position relationship.
In a third aspect, an embodiment of the present application further provides an intelligent face monitoring camera, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to execute the debugging method after installation of the intelligent face monitoring camera according to any embodiment of the present application.
In a fourth aspect, an embodiment of the present application further provides a non-transitory computer-readable storage medium storing computer instructions, where the computer instructions are configured to enable the computer to execute the method for debugging after installing the intelligent face monitoring camera according to any embodiment of the present application.
According to the technical scheme of the embodiments of the application, the camera automatically debugs its parameters according to the position relationship of the pre-marked initial shooting area within the monitoring picture, so professional technicians no longer need to debug by experience, and debugging efficiency and precision are improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become readily apparent from the following description, and other effects of the above alternatives will be described hereinafter in conjunction with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic flowchart of a debugging method after an intelligent face monitoring camera is installed according to an embodiment of the present application;
fig. 2 is a schematic diagram of an initial shooting area in a monitoring screen according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a debugging method after an intelligent face monitoring camera is installed according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a debugging device after an intelligent face monitoring camera is installed according to an embodiment of the application;
fig. 5 is a block diagram of an electronic device used to implement the debugging method after an intelligent face monitoring camera is installed according to an embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic flowchart of a debugging method after an intelligent face monitoring camera is installed according to an embodiment of the present application. The method is applicable to debugging the camera after installation and relates to the technical fields of camera debugging and image processing. It can be executed by a debugging device, implemented in software and/or hardware and preferably built into the intelligent face monitoring camera itself. As shown in fig. 1, the method specifically includes the following steps:
s101, acquiring an initial shooting area, and determining coordinates of the initial shooting area in a camera coordinate system, wherein the initial shooting area is obtained by marking on a monitoring picture of an intelligent face monitoring camera in advance.
Specifically, after the intelligent face monitoring camera is installed, a technician can open and view its monitoring picture through a web application on a terminal and mark an initial shooting area on that picture, for example by framing a square or rectangular area; the initial shooting area is the area where the target to be monitored is located. Fig. 2 is a schematic diagram of an initial shooting area in a monitoring picture according to an embodiment of the present application. As shown in the figure, area 1 is the monitoring picture of the camera, and area 2 is the marked initial shooting area, indicating where the target to be monitored is located.
After the technician marks the initial shooting area in the web application, the camera can obtain, through that application, the position information of the initial shooting area on the monitoring picture, namely its coordinates in the camera coordinate system.
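The patent does not specify how the marked region's position information is represented. As a hedged illustration only, the sketch below assumes the web application reports the mark in normalized picture coordinates (0 to 1) and converts them to pixel coordinates in the camera coordinate system; the function name and the normalized representation are assumptions, not part of the disclosure.

```python
def to_pixel_coords(normalized_vertices, frame_width, frame_height):
    """Map vertices marked on the monitoring picture (normalized 0..1)
    to pixel coordinates in the camera coordinate system."""
    return [(round(x * frame_width), round(y * frame_height))
            for x, y in normalized_vertices]

# A rectangle framed in the web UI, on a 1920x1080 monitoring picture:
marked = [(0.25, 0.25), (0.75, 0.25), (0.75, 0.75), (0.25, 0.75)]
pixels = to_pixel_coords(marked, 1920, 1080)
```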
S102, determining the position relation of the initial shooting area in the monitoring picture of the camera according to the coordinates, and adjusting the parameters of the camera according to the position relation.
The parameters of the camera typically include focal length and angle. The focal length determines the size of the photographed target, and the angle determines the target's position and completeness within the field of view. After installation but before debugging, the initial monitoring picture generally cannot satisfy the shooting requirements for the monitored target. To achieve automatic debugging, in this embodiment the camera adjusts its own parameters according to the position relationship of the initial shooting area within the monitoring picture, so that the adjusted picture satisfies the monitoring requirements for the target.
Specifically, in the initial monitoring picture before debugging, a technician determines a monitoring target, such as a doorway or a passage, according to the shooting requirements, and then selects it in the on-screen monitoring picture with a mouse or by touch, thereby marking the initial shooting area. Because the camera has not yet been debugged, the position and size of the initial shooting area in the monitoring picture usually do not meet the requirements; the camera therefore determines the position relationship of the initial shooting area within the monitoring picture from its coordinates and adjusts its parameters accordingly. For example, if the initial shooting area occupies too small a portion of the monitoring picture, face recognition of a person appearing in it will be inaccurate or even impossible; in that case the area can be enlarged by increasing the camera's focal length. Similarly, if the camera's angle is inappropriate, this too can be determined from the position relationship of the initial shooting area in the monitoring picture, and debugging proceeds by adjusting the angle.
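The decision in S102 can be sketched roughly as follows. All thresholds and names here are hypothetical illustrations, not values from the patent: the region's share of the frame suggests a focal-length adjustment, and its centroid offset suggests an angle adjustment.

```python
def plan_adjustment(region, frame_size, min_area_ratio=0.2, center_tol=0.05):
    """Decide which camera parameters need adjusting, given an
    axis-aligned initial shooting area (x0, y0, x1, y1) in pixels.
    Thresholds are illustrative, not from the patent."""
    x0, y0, x1, y1 = region
    fw, fh = frame_size
    area_ratio = ((x1 - x0) * (y1 - y0)) / (fw * fh)
    cx = (x0 + x1) / 2 / fw          # centroid, normalized
    cy = (y0 + y1) / 2 / fh
    actions = []
    if area_ratio < min_area_ratio:
        actions.append("increase_focal_length")   # region too small to recognize faces
    if abs(cx - 0.5) > center_tol or abs(cy - 0.5) > center_tol:
        actions.append("adjust_angle")            # region off-centre
    return actions
```

A region tucked in a corner of the frame would trigger both actions; a large, centred region would trigger neither.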
According to the technical scheme of the embodiment of the application, the parameter debugging is automatically realized by the camera according to the position relation of the initial shooting area in the monitoring picture by means of the pre-marked initial shooting area, professional technicians do not need to debug according to experience, and the debugging efficiency and precision are improved.
Fig. 3 is a schematic flowchart of a debugging method after an intelligent face monitoring camera is installed according to an embodiment of the present application, and the present embodiment is further optimized based on the above embodiment. As shown in fig. 3, the method specifically includes the following steps:
s201, obtaining an initial shooting area, and determining coordinates of each vertex on the edge of the initial shooting area in a camera coordinate system, wherein the initial shooting area is obtained by marking on a monitoring picture of the intelligent face monitoring camera in advance.
For example, if the initial shooting area is a square or rectangle, its four sides are the edges of the area and its four corners are the vertices on those edges. The coordinates of each vertex in the camera coordinate system can be determined by image processing. Of course, the initial shooting area may have other shapes, such as a pentagon or hexagon; the present application does not limit this.
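The patent states that the area can be determined from the vertex coordinates without giving a formula. For a simple polygonal region (square, rectangle, pentagon, or hexagon alike), the standard shoelace formula is one way to do it — a sketch, not necessarily the method the patent intends.

```python
def polygon_area(vertices):
    """Area of a simple polygon from its ordered vertices (shoelace formula).
    Vertices may be listed clockwise or counter-clockwise."""
    total = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        total += x0 * y1 - x1 * y0
    return abs(total) / 2.0
```

Because it needs only the ordered vertex list, the same routine covers the non-rectangular shapes mentioned above.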
S202, determining the area proportion relation of the initial shooting area in the monitoring picture according to the coordinates; and adjusting the focal length of the camera according to the area ratio relation, wherein the area ratio of the initial shooting area after the focal length is adjusted in the monitoring picture reaches a preset threshold value.
In the monitoring picture, the shooting area around the monitored target must reach a certain size so that the camera can capture the target clearly and perform face recognition on pedestrians within it; if the shooting area is too small, face recognition becomes impossible. The size of the shooting area can be measured by the area ratio of the initial shooting area in the monitoring picture: if this ratio is large enough to meet a certain threshold, the shooting area is considered sufficient for face recognition.
Further, the area of the initial shooting area is determined from the coordinates of each vertex on its edge in the camera coordinate system; alternatively, the area ratio can be estimated from the distance between each vertex and the corresponding corner of the monitoring picture: the farther a vertex of the initial shooting area is from the corresponding corner of the monitoring picture, the smaller the area and hence the area ratio, and vice versa. During adjustment, if the area ratio is too small, the focal length is increased to enlarge the shooting area; otherwise it is decreased. While adjusting, the camera checks whether the area ratio has reached the preset threshold and stops once it has.
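The adjust-and-recheck behaviour described above amounts to a feedback loop, sketched below. `SimCamera` is a toy stand-in: its premise that the region's area ratio grows with the square of the focal length is a simplification for illustration, and the step size, tolerance, and target ratio are all hypothetical rather than from the patent.

```python
class SimCamera:
    """Toy camera model: the marked region's linear size scales with the
    focal length, so its area ratio scales with f**2 (an assumption)."""
    def __init__(self, focal_length, ratio_at_unit_focal):
        self.focal_length = focal_length
        self._base = ratio_at_unit_focal
    def measure_region_ratio(self):
        return min(1.0, self._base * self.focal_length ** 2)
    def set_focal_length(self, f):
        self.focal_length = f

def adjust_zoom(cam, target_ratio=0.25, step=0.05, tol=0.02, max_steps=200):
    """Nudge the focal length until the initial shooting area's ratio in
    the monitoring picture reaches the preset threshold (as in S202)."""
    for _ in range(max_steps):
        ratio = cam.measure_region_ratio()
        if abs(ratio - target_ratio) <= tol:
            break
        if ratio < target_ratio:
            cam.set_focal_length(cam.focal_length * (1 + step))  # zoom in
        else:
            cam.set_focal_length(cam.focal_length * (1 - step))  # zoom out
    return cam.measure_region_ratio()
```

Starting from a region covering 5% of the frame, the loop zooms in until the ratio lands within tolerance of the target.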
Alternatively, an adjustment target value for the focal length can be computed from the camera's performance parameters based on the current area ratio of the initial shooting area; the focal length is then set directly to that target value, the area ratio is checked against the preset threshold, and if it falls short, fine adjustment continues until the threshold is reached.
S203, determining the offset relation of the initial shooting area in the monitoring picture according to the coordinates; and adjusting the angle of the camera according to the offset relation, wherein the initial shooting area after the angle is adjusted is positioned in the center of the monitoring picture.
Specifically, in the monitoring picture before debugging, the monitored target may be off-centre, or may not even be captured completely; in that case the camera's angle must be adjusted. From the coordinates of each vertex on the edge of the initial shooting area in the camera coordinate system, the offset relationship of the area within the monitoring picture — that is, whether it is centred — can be determined; if it is not centred, the camera's angle is adjusted until it is.
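One way to turn the offset relationship into an angle adjustment is a small-angle approximation against the camera's field of view. The field-of-view parameters and function name are assumptions for the sketch, not values from the patent.

```python
def aim_correction(centroid_px, frame_size, hfov_deg, vfov_deg):
    """Approximate pan/tilt corrections (degrees) that move the initial
    shooting area's centroid to the centre of the monitoring picture.
    Small-angle sketch: offset fraction of the frame times field of view."""
    cx, cy = centroid_px
    fw, fh = frame_size
    pan = (cx - fw / 2) / fw * hfov_deg    # positive: pan right
    tilt = (cy - fh / 2) / fh * vfov_deg   # positive: tilt down
    return pan, tilt
```

A centroid already at frame centre yields a zero correction; the camera would apply the returned angles and re-check, as with the focal-length loop.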
Further, in one embodiment, the method further comprises: performing automatic focusing according to the adjusted parameters, the camera's performance data, its deployment height, and its deployment distance to the initial target area.
Besides adjusting the focal length and angle, face recognition can be performed accurately only when the image is focused to sufficient sharpness. Specifically, automatic focusing can be achieved from the adjusted focal length and angle, the camera's performance data, its deployment height, and its deployment distance to the initial target area. For example, based on the current focal length and angle together with the performance data, deployment height, and deployment distance, the value of the focusing parameter is computed and focusing is performed with that value. The calculation itself may use any method disclosed in the prior art; the present application imposes no limitation.
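The patent leaves the focusing calculation to the prior art. One textbook possibility consistent with the inputs it lists (deployment height and distance plus the adjusted focal length) is the thin-lens equation — purely a sketch with hypothetical names, not the method the patent claims.

```python
import math

def focus_image_distance(focal_len_m, mount_height_m, ground_dist_m):
    """Line-of-sight distance to the target area from the deployment
    geometry, then the image (sensor) distance v from the thin-lens
    equation 1/f = 1/d + 1/v, i.e. v = f*d / (d - f)."""
    d = math.hypot(mount_height_m, ground_dist_m)  # camera-to-target distance
    if d <= focal_len_m:
        raise ValueError("target must lie beyond the focal length")
    return focal_len_m * d / (d - focal_len_m)
```

For a 50 mm lens mounted 3 m up and 4 m back from the target area, the line-of-sight distance is 5 m and the in-focus image distance comes out just over the focal length, as expected for a distant subject.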
According to the technical scheme of this embodiment, the camera automatically debugs its focal length and angle using the pre-marked initial shooting area, according to the area-ratio relationship and the offset relationship of that area within the monitoring picture; sharpness can be debugged as well. Professional technicians no longer need to debug by experience, which improves debugging efficiency and precision.
Fig. 4 is a schematic structural diagram of a debugging apparatus after an intelligent face monitoring camera is installed according to an embodiment of the present application, which is applicable to a case of debugging after the intelligent face monitoring camera is installed, and relates to the technical field of camera debugging and image processing. The device can realize the debugging method after the intelligent face monitoring camera is installed according to any embodiment of the application. As shown in fig. 4, the apparatus 300 specifically includes:
an initial shooting area determining module 301, configured to acquire an initial shooting area and determine coordinates of the initial shooting area in a camera coordinate system, where the initial shooting area is obtained by marking on a monitoring picture of a camera in advance;
a parameter adjusting module 302, configured to determine a position relationship of the initial shooting area in a monitoring picture of the camera according to the coordinates, and adjust a parameter of the camera according to the position relationship.
Optionally, the initial shooting area determining module is specifically configured to:
the method comprises the steps of obtaining an initial shooting area, and determining coordinates of each vertex on the edge of the initial shooting area under a camera coordinate system, wherein the initial shooting area is rectangular or square.
Optionally, the parameter adjusting module includes a first parameter adjusting unit, and is specifically configured to:
determining the area proportion relation of the initial shooting area in the monitoring picture according to the coordinates;
and adjusting the focal length of the camera according to the area ratio relationship, wherein the area ratio of the initial shooting area after the focal length is adjusted in the monitoring picture reaches a preset threshold value.
Optionally, the parameter adjusting module includes a second parameter adjusting unit, and is specifically configured to:
determining the inclination angle relation of the initial shooting area in the monitoring picture according to the coordinates;
and adjusting the angle of the camera according to the inclination angle relationship, wherein the edge of the initial shooting area after the angle adjustment is parallel to the outer edge of the monitoring picture.
Optionally, the apparatus further comprises:
and the focusing module is used for carrying out automatic focusing according to the adjusted parameters, the performance data of the camera, the deployment height and the deployment distance from the camera to the initial target area.
The debugging device 300 provided by this embodiment can execute the debugging method after installation of the intelligent face monitoring camera according to any embodiment of the present application, and has the functional modules and beneficial effects corresponding to that method. For details not described in this embodiment, reference may be made to the description of any method embodiment of the present application.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 5 is a block diagram of an electronic device according to the debugging method after installing an intelligent face monitoring camera according to the embodiment of the application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 5, the electronic apparatus includes: one or more processors 401, memory 402, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, as desired, along with multiple memories. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 5, one processor 401 is taken as an example.
Memory 402 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor, so that the at least one processor executes the debugging method provided by the application after the intelligent face monitoring camera is installed. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to execute the method for debugging after installation of the intelligent face surveillance camera provided by the present application.
The memory 402 may be used as a non-transitory computer readable storage medium for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the debugging method after installing the intelligent face monitoring camera in the embodiment of the present application (for example, the initial shooting area determining module 301 and the parameter adjusting module 302 shown in fig. 4). The processor 401 executes various functional applications and data processing of the server by running the non-transitory software programs, instructions and modules stored in the memory 402, that is, the debugging method after installing the intelligent face monitoring camera in the above method embodiment is realized.
The memory 402 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area can store data and the like created according to the use of the electronic equipment for implementing the debugging method after the intelligent face monitoring camera is installed. Further, the memory 402 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 402 may optionally include a memory remotely disposed with respect to the processor 401, and these remote memories may be connected to an electronic device implementing the debugging method after installing the intelligent face monitoring camera according to the embodiment of the present application through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for implementing the debugging method after the installation of the intelligent face monitoring camera according to the embodiment of the application may further include: an input device 403 and an output device 404. The processor 401, the memory 402, the input device 403 and the output device 404 may be connected by a bus or other means, and fig. 5 illustrates an example of a connection by a bus.
The input device 403 may receive input numeric or character information and generate key signal input related to user settings and function control of an electronic device implementing the debugging method after installing the intelligent face monitoring camera according to the embodiment of the present application, and may be, for example, a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or another input device. The output device 404 may include a display device, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibration motors), among others. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. A server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud storage, and the like. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiments of the present application, the camera automatically debugs its parameters according to the position relationship of the pre-marked initial shooting area within the monitoring picture. Professional technicians therefore no longer need to debug by experience, which improves both the efficiency and the precision of debugging.
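By way of an illustrative sketch only (not the claimed implementation), the focal-length step described above could be modeled as follows: the proportion of the monitoring picture occupied by the marked initial shooting area is measured from its vertex coordinates, and the camera zooms until that proportion reaches a preset threshold. The `camera.zoom` call, the zoom step, and the rescaling of the vertices are all hypothetical stand-ins:

```python
def area_proportion(region, frame_w, frame_h):
    """Fraction of the monitoring picture covered by the marked initial
    shooting area, computed from its vertex coordinates."""
    # Shoelace formula for the polygon area of the marked region.
    n = len(region)
    poly_area = abs(sum(
        region[i][0] * region[(i + 1) % n][1]
        - region[(i + 1) % n][0] * region[i][1]
        for i in range(n)
    )) / 2.0
    return poly_area / (frame_w * frame_h)

def adjust_focal_length(camera, region, frame_w, frame_h,
                        threshold=0.5, step=1.05, max_iters=50):
    """Zoom in until the marked area reaches the preset proportion threshold.
    `camera.zoom(factor)` is a hypothetical API; the vertex rescaling below
    simulates the region growing in the picture as the focal length increases."""
    for _ in range(max_iters):
        if area_proportion(region, frame_w, frame_h) >= threshold:
            break
        camera.zoom(step)
        # Zooming scales the region's apparent size about the picture centre.
        cx, cy = frame_w / 2.0, frame_h / 2.0
        region = [(cx + (x - cx) * step, cy + (y - cy) * step)
                  for x, y in region]
    return region
```

In a real camera the proportion would be re-measured from a fresh frame after each zoom step rather than simulated geometrically.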
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; this is not limited herein, as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
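The angle-adjustment step recited in the claims below, in which the camera is turned until the marked initial shooting area sits at the centre of the monitoring picture, could be sketched roughly as follows. The `camera.pan`/`camera.tilt` calls, the gain, and the simulated image shift are assumptions for illustration only:

```python
def region_offset(region, frame_w, frame_h):
    """Offset of the marked region's centroid from the picture centre,
    normalised to [-1, 1] on each axis."""
    cx = sum(x for x, _ in region) / len(region)
    cy = sum(y for _, y in region) / len(region)
    return ((cx - frame_w / 2.0) / (frame_w / 2.0),
            (cy - frame_h / 2.0) / (frame_h / 2.0))

def center_region(camera, region, frame_w, frame_h,
                  gain_deg=10.0, tol=0.02, max_iters=100):
    """Pan/tilt the camera until the marked initial shooting area is
    centred in the monitoring picture (hypothetical pan/tilt API)."""
    for _ in range(max_iters):
        dx, dy = region_offset(region, frame_w, frame_h)
        if abs(dx) <= tol and abs(dy) <= tol:
            break
        camera.pan(-dx * gain_deg)
        camera.tilt(-dy * gain_deg)
        # Panning toward the region shifts its image toward the centre;
        # here the shift is simulated instead of re-measured from a frame.
        shift_x, shift_y = -dx * frame_w * 0.5, -dy * frame_h * 0.5
        region = [(x + shift_x, y + shift_y) for x, y in region]
    return region
```

A deployed version would re-detect the marked area in each new frame after moving, since the mapping from pan/tilt angle to pixel shift depends on the lens.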

Claims (8)

1. A debugging method after installation of an intelligent face monitoring camera, comprising:
acquiring an initial shooting area, and determining coordinates of each vertex on an edge of the initial shooting area in a camera coordinate system, wherein the initial shooting area is obtained by marking on a monitoring picture of the camera in advance, and the initial shooting area is an area where a target to be monitored is located; and
determining a position relationship of the initial shooting area in the monitoring picture of the camera according to the coordinates, and adjusting parameters of the camera according to the position relationship, which comprises:
determining an area proportion relationship of the initial shooting area in the monitoring picture according to the coordinates; and
adjusting a focal length of the camera according to the area proportion relationship, wherein an area proportion of the initial shooting area in the monitoring picture after the focal length is adjusted reaches a preset threshold value;
wherein the determining the area proportion relationship of the initial shooting area in the monitoring picture according to the coordinates comprises:
measuring the area proportion relationship according to a distance between each vertex and a corresponding vertex on the periphery of the monitoring picture.
2. The method according to claim 1, wherein the determining a position relationship of the initial shooting area in a monitoring picture of the camera according to the coordinates and adjusting parameters of the camera according to the position relationship comprises:
determining an offset relationship of the initial shooting area in the monitoring picture according to the coordinates; and
adjusting an angle of the camera according to the offset relationship, wherein the initial shooting area after the angle is adjusted is located at the center of the monitoring picture.
3. The method of claim 1, further comprising:
performing automatic focusing according to the adjusted parameters, performance data of the camera, a deployment height, and a deployment distance from the camera to the initial target area.
4. A debugging device after installation of an intelligent face monitoring camera, comprising:
an initial shooting area determining module, configured to acquire an initial shooting area and determine coordinates of each vertex on an edge of the initial shooting area in a camera coordinate system, wherein the initial shooting area is obtained by marking on a monitoring picture of the camera in advance, and the initial shooting area is an area where a target to be monitored is located; and
a parameter adjusting module, configured to determine a position relationship of the initial shooting area in the monitoring picture of the camera according to the coordinates, and to adjust parameters of the camera according to the position relationship, wherein the adjusting comprises:
determining an area proportion relationship of the initial shooting area in the monitoring picture according to the coordinates; and
adjusting a focal length of the camera according to the area proportion relationship, wherein an area proportion of the initial shooting area in the monitoring picture after the focal length is adjusted reaches a preset threshold value;
wherein the determining the area proportion relationship of the initial shooting area in the monitoring picture according to the coordinates comprises:
measuring the area proportion relationship according to a distance between each vertex and a corresponding vertex on the periphery of the monitoring picture.
5. The apparatus according to claim 4, wherein the parameter adjusting module comprises a second parameter adjusting unit, specifically configured to:
determine an offset relationship of the initial shooting area in the monitoring picture according to the coordinates; and
adjust an angle of the camera according to the offset relationship, wherein the initial shooting area after the angle is adjusted is located at the center of the monitoring picture.
6. The apparatus of claim 4, further comprising:
a focusing module, configured to perform automatic focusing according to the adjusted parameters, performance data of the camera, a deployment height, and a deployment distance from the camera to the initial target area.
7. An intelligent face monitoring camera, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the debugging method after installation of an intelligent face monitoring camera according to any one of claims 1-3.
8. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the debugging method after installation of an intelligent face monitoring camera according to any one of claims 1-3.
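Purely as an illustrative sketch of the automatic-focusing step in claim 3 (the right-triangle geometry and the focus-table lookup are assumptions, not the patented method): the line-of-sight distance to the initial target area can be derived from the deployment height and the ground distance, and a focus position can then be chosen from per-model calibration data:

```python
import math

def focus_distance(deploy_height_m, ground_distance_m):
    """Line-of-sight distance from the camera to the initial target area,
    assuming the marked area lies on the ground plane below the camera."""
    return math.hypot(deploy_height_m, ground_distance_m)

def autofocus(camera, deploy_height_m, ground_distance_m, focus_table):
    """Pick the focus-motor position whose calibrated distance is closest to
    the computed line-of-sight distance. `focus_table` maps motor positions
    to focus distances (a hypothetical form of the camera's performance
    data); `camera.set_focus` is likewise a hypothetical API."""
    d = focus_distance(deploy_height_m, ground_distance_m)
    pos = min(focus_table, key=lambda p: abs(focus_table[p] - d))
    camera.set_focus(pos)
    return pos
```

For example, a camera mounted 3 m high and 4 m from the target area would focus at a 5 m line-of-sight distance.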
CN202010619736.2A 2020-06-30 2020-06-30 Debugging method and device after installation of intelligent face monitoring camera, camera and medium Active CN111757098B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010619736.2A CN111757098B (en) 2020-06-30 2020-06-30 Debugging method and device after installation of intelligent face monitoring camera, camera and medium


Publications (2)

Publication Number Publication Date
CN111757098A (2020-10-09)
CN111757098B (2022-08-05)

Family

ID=72678628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010619736.2A Active CN111757098B (en) 2020-06-30 2020-06-30 Debugging method and device after installation of intelligent face monitoring camera, camera and medium

Country Status (1)

Country Link
CN (1) CN111757098B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112351208A (en) * 2020-11-03 2021-02-09 中冶赛迪重庆信息技术有限公司 Automatic tracking method, system, equipment and medium for loading and unloading videos of unmanned vehicles
CN112702571B (en) * 2020-12-18 2022-10-25 福建汇川物联网技术科技股份有限公司 Monitoring method and device
CN112995519B (en) * 2021-03-26 2024-03-01 精英数智科技股份有限公司 Camera self-adaptive adjustment method and device for water detection monitoring
CN113422901B (en) * 2021-05-29 2023-03-03 华为技术有限公司 Camera focusing method and related equipment
CN113645378B (en) * 2021-06-21 2022-12-27 福建睿思特科技股份有限公司 Safe management and control portable video distribution and control terminal based on edge calculation
CN114333030A (en) * 2021-12-31 2022-04-12 科大讯飞股份有限公司 Image processing method, device, equipment and storage medium
CN116471477A (en) * 2022-01-11 2023-07-21 华为技术有限公司 Method for debugging camera and related equipment
CN114979473A (en) * 2022-05-16 2022-08-30 遥相科技发展(北京)有限公司 Industrial robot control method

Citations (3)

Publication number Priority date Publication date Assignee Title
CN104486543A (en) * 2014-12-09 2015-04-01 北京时代沃林科技发展有限公司 Equipment and method for controlling cloud deck camera by intelligent terminal in touch manner
CN109982029A (en) * 2017-12-27 2019-07-05 浙江宇视科技有限公司 A kind of camera supervised scene Automatic adjustment method and device
CN111147749A (en) * 2019-12-31 2020-05-12 宇龙计算机通信科技(深圳)有限公司 Photographing method, photographing device, terminal and storage medium

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US10547790B2 (en) * 2018-06-14 2020-01-28 Google Llc Camera area locking
CN111131697B (en) * 2019-12-23 2022-01-04 北京中广上洋科技股份有限公司 Multi-camera intelligent tracking shooting method, system, equipment and storage medium


Also Published As

Publication number Publication date
CN111757098A (en) 2020-10-09

Similar Documents

Publication Publication Date Title
CN111757098B (en) Debugging method and device after installation of intelligent face monitoring camera, camera and medium
CN111523468B (en) Human body key point identification method and device
CN108668086B (en) Automatic focusing method and device, storage medium and terminal
CN111738072A (en) Training method and device of target detection model and electronic equipment
CN112132113A (en) Vehicle re-identification method and device, training method and electronic equipment
CN110659600B (en) Object detection method, device and equipment
CN111693147A (en) Method and device for temperature compensation, electronic equipment and computer readable storage medium
EP3910533A1 (en) Method, apparatus, electronic device, and storage medium for monitoring an image acquisition device
CN110555838A (en) Image-based part fault detection method and device
CN111998959B (en) Temperature calibration method and device based on real-time temperature measurement system and storage medium
CN110675635A (en) Method and device for acquiring external parameters of camera, electronic equipment and storage medium
CN111784757A (en) Training method of depth estimation model, depth estimation method, device and equipment
CN112509058A (en) Method and device for calculating external parameters, electronic equipment and storage medium
EP3879439A1 (en) Method and device for detecting body temperature, electronic apparatus and storage medium
CN111191619B (en) Method, device and equipment for detecting virtual line segment of lane line and readable storage medium
CN113325954A (en) Method, apparatus, device, medium and product for processing virtual objects
CN112102417A (en) Method and device for determining world coordinates and external reference calibration method for vehicle-road cooperative roadside camera
CN111601013A (en) Method and apparatus for processing video frames
CN110798681B (en) Monitoring method and device of imaging equipment and computer equipment
CN110995687B (en) Cat pool equipment identification method, device, equipment and storage medium
CN110030467B (en) Method, device and equipment for installing camera shooting assembly
CN116208853A (en) Focusing angle determining method, device, equipment and storage medium
CN114596362B (en) High-point camera coordinate calculation method and device, electronic equipment and medium
CN115575931A (en) Calibration method, calibration device, electronic equipment and storage medium
CN114727077A (en) Projection method, apparatus, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant