CN112804481B - Method and device for determining position of monitoring point and computer storage medium - Google Patents


Info

Publication number
CN112804481B
CN112804481B (application CN202011589233.1A)
Authority
CN
China
Prior art keywords
image
position information
monitoring point
target monitoring
determining
Prior art date
Legal status
Active
Application number
CN202011589233.1A
Other languages
Chinese (zh)
Other versions
CN112804481A (en)
Inventor
杨海舟 (Yang Haizhou)
Current Assignee
Hangzhou Hikvision System Technology Co Ltd
Original Assignee
Hangzhou Hikvision System Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision System Technology Co Ltd
Priority to CN202011589233.1A
Publication of CN112804481A
Application granted
Publication of CN112804481B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Alarm Systems (AREA)
  • Testing And Monitoring For Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a method and a device for determining the position of a monitoring point and a computer storage medium, belonging to the field of computer technology. The method includes determining the position information of a target monitoring point according to a first image included in the monitoring point data and a second image included in external data. Because the position information of the target monitoring point can be determined from the first image and the second image, labor is saved and the speed of determining the position information of the target monitoring point is increased.

Description

Method and device for determining position of monitoring point and computer storage medium
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for determining a position of a monitoring point and a computer storage medium.
Background
In modern cities, monitoring equipment is installed at nearly every street corner, and the monitoring pictures obtained by this equipment can protect people's safety and provide real-time information. For example, certain events occurring in a public area may be monitored, or road conditions may be monitored in real time. To describe the scene in a monitoring picture accurately, it is usually necessary to determine the position of the monitoring point where the monitoring device is located.
In the related art, the position of a monitoring point is determined by manual acquisition: when a monitoring device is installed at any position in a city, the position of the monitoring point where the device is located is recorded. Because each monitoring point position is determined manually, and a city contains a large amount of monitoring equipment, a computer obtains monitoring point positions slowly and a large amount of manpower is consumed.
Disclosure of Invention
The embodiment of the application provides a method and a device for determining the position of a monitoring point and a computer storage medium, which effectively accelerate determination of the position information of a target monitoring point. The technical scheme is as follows:
in a first aspect, a method for determining a location of a monitoring point is provided, where the method includes:
obtaining monitoring point data and external data in response to a selection operation on a point location calibration control on a display interface, where the monitoring point data includes a first image, the external data includes a second image, the first image is an image collected by an image collection device at a target monitoring point, and the point location calibration control is used for calibrating position information of the target monitoring point;
determining the position information of the target monitoring point based on the first image and the second image;
and displaying the position information of the target monitoring point.
Optionally, the external data further includes first location information, where the first location information is location information of an image acquisition device that acquires the second image, and the monitoring point data further includes shooting parameters of the image acquisition device at the target monitoring point;
the determining the position information of the target monitoring point based on the first image and the second image comprises:
under the condition that the matching degree between the first image and the second image meets a first preset condition, determining the position information of the target monitoring point based on the first position information;
and under the condition that the matching degree between the first image and the second image meets a second preset condition, determining the position information of the target monitoring point based on the first image and the shooting parameters.
Optionally, when the matching degree of the first image and the second image satisfies a first preset condition, determining the location information of the target monitoring point based on the first location information includes:
and under the condition that the content of the first image is the same as that of the second image, the shooting angle of the first image is the same as that of the second image, and the size of the first image is the same as that of the second image, taking the first position information as the position information of the target monitoring point.
Optionally, the determining, based on the first location information, the location information of the target monitoring point when the matching degree of the first image and the second image satisfies a first preset condition includes:
determining a conversion relationship between a three-dimensional space corresponding to the first position information and a three-dimensional space corresponding to the target monitoring point when the content of the first image and the content of the second image are the same, but one or more of the size of the first image and the size of the second image, the shooting angle of the first image and the shooting angle of the second image are different, or when the content of the first image and the content of the second image are partially the same; and performing three-dimensional space conversion on the first position information based on the conversion relation to obtain the position information of the target monitoring point.
Optionally, when the matching degree between the first image and the second image satisfies a second preset condition, determining the position information of the target monitoring point based on the first image and the shooting parameter includes:
determining a reference object from objects appearing in the first image in a case where both the content of the first image and the content of the second image are different;
and determining the position information of the target monitoring point based on the position information of the reference object and the shooting parameters.
Optionally, the method further comprises:
if the position information of the reference object is not obtained, obtaining the marking information of the reference object;
and taking the marking information of the reference object as the position information of the monitoring point.
In a second aspect, there is provided an apparatus for determining a location of a monitoring point, the apparatus comprising:
an acquisition module, configured to acquire monitoring point data and external data in response to a selection operation on a point location calibration control on a display interface, where the monitoring point data includes a first image, the external data includes a second image, the first image is an image acquired by an image acquisition device at a target monitoring point, and the point location calibration control is used for calibrating position information of the target monitoring point;
the determining module is used for determining the position information of the target monitoring point based on the first image and the second image;
and the display module is used for displaying the position information of the target monitoring point.
Optionally, the external data further includes first location information, where the first location information is location information of an image acquisition device that acquires the second image, and the monitoring point data further includes shooting parameters of the image acquisition device at the target monitoring point;
the determining module includes:
a first determining unit, configured to determine, based on the first location information, location information of the target monitoring point when a matching degree between the first image and the second image satisfies a first preset condition;
and the second determining unit is used for determining the position information of the target monitoring point based on the first image and the shooting parameters under the condition that the matching degree between the first image and the second image meets a second preset condition.
Optionally, the first determining unit is configured to use the first position information as the position information of the target monitoring point when the content, the shooting angle, and the size of the first image are respectively the same as the content, the shooting angle, and the size of the second image.
Optionally, the first determining unit is configured to determine a conversion relationship between a three-dimensional space corresponding to the first location information and a three-dimensional space corresponding to the target monitoring point, if the content of the first image and the content of the second image are the same, but one or more of the size of the first image and the size of the second image, the shooting angle of the first image, and the shooting angle of the second image are different, or if the content of the first image and the content of the second image are partially the same; and performing three-dimensional space conversion on the first position information based on the conversion relation to obtain the position information of the target monitoring point.
Optionally, the second determining unit is configured to determine a reference object from an object appearing in the first image when the content of the first image and the content of the second image are both different;
and determining the position information of the target monitoring point based on the position information of the reference object and the shooting parameters.
Optionally, the second determining unit is further configured to:
if the position information of the reference object is not obtained, obtaining the marking information of the reference object;
and taking the marking information of the reference object as the position information of the monitoring point.
In a third aspect, a computer-readable storage medium is provided, the computer-readable storage medium having instructions stored thereon, where the instructions, when executed by a processor, implement the method for determining a location of a monitoring point according to the first aspect.
In a fourth aspect, there is provided a computer apparatus, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform a method of determining a location of a monitoring point according to the first aspect.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform a method of determining a location of a monitoring point as described in the first aspect above.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
and determining the position information of the target monitoring point through a first image included in the monitoring point data and a second image included in the external data. Thus, the position information of the target monitoring point is determined according to the first image and the second image, and can be determined directly according to the first image and the second image, namely, the position information of the monitoring point is determined based on the content recognition of the images. The method and the device not only save manpower, but also accelerate the speed of determining the position information of the target monitoring point.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present application, and those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a schematic structural diagram of a system for determining a location of a monitoring point according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of a method for determining a location of a monitoring point according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of an image three-dimensional coordinate system according to an embodiment of the present application.
Fig. 4 is a schematic image diagram provided in an embodiment of the present application.
Fig. 5 is a specific flowchart of a method for determining a location of a monitoring point according to an embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of a device for determining a monitor point position according to an embodiment of the present application.
Fig. 7 is a block diagram of a terminal according to an embodiment of the present disclosure.
Fig. 8 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application more clear, the embodiments of the present application will be further described in detail with reference to the accompanying drawings.
For convenience of description, an application scenario of the embodiment of the present application is described first.
The position of the monitoring point where the monitoring equipment is located can reflect the position of the scene in the monitoring picture acquired by the monitoring equipment, and further the scene in the monitoring picture can be accurately described. The method provided by the embodiment of the application is applied to the scene of determining the position of the monitoring point.
To implement a method for determining the location of a monitored point, an embodiment of the present application provides a system for determining the location of a monitored point. For convenience of the following description, this system is first explained in detail.
Fig. 1 is a schematic structural diagram of a system for determining a location of a monitoring point according to an embodiment of the present disclosure. As shown in fig. 1, the system 100 for determining the position of a monitoring point includes a data acquisition module 101, a parsing module 102, a matching analysis module 103, a spatial reconstruction and integration module 104, a geographic coordinate calculation module 105, and a tag calibration module 106.
The data acquisition module is used for acquiring monitoring point data and external data. The monitoring point data includes a first image and the external data includes a second image. The first image is an image acquired by image acquisition equipment at a target monitoring point, and the second image is an image acquired by other image acquisition equipment.
The parsing module is used for parsing the monitoring point data and the external data acquired by the data acquisition module to obtain corresponding position information, such as longitude and latitude information.
The matching analysis module is used for analyzing the corresponding information of the first image and the corresponding information of the second image to carry out similarity matching and outputting an image matching result, for example, the matching degree of the first image and the second image.
The space reconstruction and integration module determines, according to the matching result output by the matching analysis module, a three-dimensional space corresponding to the monitoring point and a three-dimensional space corresponding to the first position information, and determines the conversion relationship between the two. Obtaining the conversion relationship between the two three-dimensional spaces amounts to spatially integrating the three-dimensional space corresponding to the monitoring point and the three-dimensional space corresponding to the first position information into the same three-dimensional coordinate system. The first position information is the position information of the image capturing device that captures the second image.
The geographic coordinate calculation module further determines the position information of the monitoring point based on the conversion relation between the two three-dimensional spaces determined by the space reconstruction and integration module.
The label calibration module is used for marking the position information of the monitoring points determined by the geographic coordinate calculation module on the monitoring points, so that the monitoring points can be conveniently classified and managed.
In the system for determining the position of a monitoring point provided in the embodiment of the present application, the determined position of the monitoring point may be a longitude and latitude, a place name, or the like, which is not limited herein.
It should be noted that each module in the system for determining the location of a monitored point shown in fig. 1 may be deployed in a terminal in a centralized manner, in which case the terminal implements the method for determining the location of a monitored point provided in the embodiment of the present application. The modules may also be deployed in a centralized manner in a server, in which case the server implements the method. Optionally, each module in the system may also be deployed in a distributed manner on different devices, which is not limited in this embodiment of the present application.
In addition, each module of the system for determining the position of the monitoring point in fig. 1 is a software module, and the naming of each module is based on the function naming of the software module. When the embodiment of the application is applied, different naming can be performed based on requirements, for example, the data acquisition module can be named as a first module, the analysis module can be named as a second module, and the matching analysis module can be named as a third module. The embodiments of the present application do not limit the naming of the above modules.
Based on the system for determining the location of the monitored point shown in fig. 1, the method provided by the embodiment of the present application is further described below. It should be noted that, in the embodiment of the present application, the steps in fig. 2 may be executed by using a device such as a terminal, a controller, a server, and the like, and the execution subject of the embodiment of the present application is not limited herein. Fig. 2 illustrates a terminal as an execution subject.
Fig. 2 is a flowchart of a method for determining a location of a monitoring point according to an embodiment of the present application, where the method for determining a location of a monitoring point may include the following steps.
Step 201: and the terminal responds to the selection operation of the point location calibration control on the display interface to obtain the monitoring point data and the external data.
To start the point location calibration program of the target monitoring point, the terminal responds to the selection operation on the point location calibration control for the target monitoring point on the display interface, and obtains monitoring point data and external data through the data acquisition module in fig. 1. The point location calibration control is used for calibrating the position information of the target monitoring point.
In a possible implementation manner, the above implementation manner for the selection operation of the point location calibration control on the display interface is: a map comprising a plurality of monitoring points is displayed on a display interface of the terminal. And the user performs trigger operation based on one monitoring point of the plurality of monitoring points displayed on the display interface. And the terminal determines a target monitoring point in response to the triggering operation. After the terminal determines the target monitoring point, the terminal displays a point location calibration control aiming at the target monitoring point. And the user performs selection operation based on the point location calibration control of the target monitoring point, and the terminal responds to the selection operation instruction of the point location calibration control of the target monitoring point so as to start a point location calibration program of the target monitoring point.
In another possible implementation manner, the above implementation manner of the selection operation for the point location calibration control on the display interface is: a map with a plurality of point location calibration controls is displayed on a display interface of the terminal, and any point location calibration control corresponds to one monitoring point. And the user selects and operates on the basis of a target point position calibration control in the multiple point position calibration controls displayed on the display interface. And the terminal responds to the selection operation of the target point position calibration control, and takes the monitoring point at the target point position calibration control as a target monitoring point so as to start a point position calibration program of the target monitoring point.
It should be noted that the above two implementation manners in which the terminal responds to the point location calibration instruction for the target monitoring point are only optional implementations, and the embodiment of the present application is not limited thereto.
Optionally, the monitoring point data is acquired by an image acquisition device at the target monitoring point, and the image acquisition device places the acquired monitoring point data in a local cache, or uploads the data to a background server for storage. Therefore, the implementation manner of the terminal acquiring the monitoring point data may be as follows: the terminal obtains the data of the monitoring point based on the local cache of the image acquisition equipment. Specifically, the terminal sends a request for obtaining the monitoring point data to a local cache memory of the image acquisition device, wherein the request for obtaining the monitoring point data carries an identifier of a target monitoring point. After receiving the request, the image acquisition equipment searches the monitoring point data corresponding to the target monitoring point in the local cache, returns the searched monitoring point data to the terminal, and the terminal receives the monitoring point data. Or the terminal acquires the monitoring point data based on the background server. Specifically, the terminal sends a request for obtaining the monitoring point data to the background server, wherein the request for obtaining the monitoring point data carries the identifier of the target monitoring point. After receiving the request, the background server searches the monitoring point data corresponding to the target monitoring point, returns the searched monitoring point data to the terminal, and the terminal receives the monitoring point data.
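For illustration only, the sketch below shows one way the request to the background server described above could look. The endpoint URL, field names, and the use of HTTP/JSON are assumptions made for this sketch; the embodiment only requires that the request carry the identifier of the target monitoring point and that the matching monitoring point data be returned.

```python
import requests

# Hypothetical endpoint; the embodiment does not prescribe a transport or URL scheme.
BACKEND_URL = "http://backend.example.com/api/monitoring-point-data"

def fetch_monitoring_point_data(monitoring_point_id: str) -> dict:
    """Request the monitoring point data (first image, shooting parameters) for one target monitoring point."""
    # The request carries the identifier of the target monitoring point.
    resp = requests.get(BACKEND_URL, params={"point_id": monitoring_point_id}, timeout=10)
    resp.raise_for_status()
    # The server looks up the data corresponding to the identifier and returns it; here it is
    # assumed to arrive as JSON containing the first image reference and the shooting parameters.
    return resp.json()
```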
Optionally, the terminal may acquire the external data as follows: the external data may be obtained from the Internet. Specifically, an external data control is displayed on the display interface of the terminal, the user triggers the external data control through a preset operation, and the terminal responds to the triggering operation for the external data control by acquiring one or more pieces of external data from the Internet. When one piece of external data is acquired, the following step 202 is performed. When multiple pieces of external data are acquired, the second images included in the multiple pieces of external data are displayed on the terminal display interface, the user selects one second image from them, and the external data corresponding to the selected second image is used as the external data; the terminal then performs the following step 202 for that external data. The preset operation may be any one of a click, a slide, a touch, and the like. Optionally, the terminal responds to the trigger operation for the external data control by obtaining one or more pieces of external data from the Internet, filtering the external data by data source or image content, and displaying the filtered second images on the terminal display interface. Optionally, external data from a data source with street view images, such as Baidu Maps, may be selected; external data whose image content belongs to the scene class may also be selected.
The user selects one of the plurality of second images according to the user's own intention. For example, the user wants to select the second image of the same type as the first image, and if the first image is an image of a cat, the second image selected by the user is also an image of a cat. The basis of how to select one second image from the plurality of second images is not limited, and the user can arbitrarily select the second image according to the intention of the user.
The external data can be data such as videos and images uploaded by individuals on social platforms, shop information contained in lifestyle applications, and the like, and will not be described in detail here. For the implementation manner in which the terminal acquires the external data, reference may be made to the implementation manner in which the terminal acquires the monitoring point data, which is not repeated here.
Step 202: and the terminal determines the position information of the target monitoring point based on the first image and the second image.
The terminal determines a first image included in the monitoring point data and a second image included in the external data. And the terminal determines the position information of the target monitoring point based on the first image and the second image. The external data may further include first position information, where the first position information is position information of an image capturing device that captures the second image. The monitoring point data may further include shooting parameters of the image capturing device at the target monitoring point.
In step 202, the degree of association between the first position information and the position information of the target monitoring point can be determined directly based on the first image and the second image, so that the accuracy of determining the position information of the target monitoring point is improved.
Therefore, in a possible implementation manner, the implementation process of the terminal determining the position information of the target monitoring point based on the first image and the second image is as follows: and under the condition that the matching degree between the first image and the second image meets a first preset condition, determining the position information of the target monitoring point based on the first position information. And under the condition that the matching degree between the first image and the second image meets a second preset condition, determining the position information of the target monitoring point based on the first image and the shooting parameters. Wherein the first preset condition indicates that the first image and the second image are identical or partially identical. The second preset condition indicates that the first image and the second image are both different.
The matching degree between the first image and the second image is determined, among other things, by judging the similarity between the content of the first image and the content of the second image. If the content of the first image and the content of the second image are identical, this indicates that the image capturing device at the monitoring point and the image capturing device capturing the second image may have captured the first image and the second image at the same position in the same scene. If the content of the first image and the content of the second image are partially the same, this indicates that the two devices may have captured the first image and the second image in the same scene but at different positions. In both cases, the position information of the target monitoring point is associated with the first position information.
Wherein the content of the first image comprises a plurality of objects appearing in the first image and the content of the second image comprises a plurality of objects appearing in the second image.
Therefore, the above implementation manner for determining the content of the first image and the content of the second image is as follows: and the terminal respectively carries out edge detection on the first image and the second image to obtain an object appearing in the first image and an object appearing in the second image.
The terminal performs edge detection on the first image and the second image as follows: the terminal obtains the pixel values of the pixels of the first image and of the second image. Points with an obvious brightness change in the first image are determined from the pixel values of its pixels, and continuous runs of such pixels are determined as the edges of objects appearing in the first image. Likewise, points with an obvious brightness change in the second image are determined from the pixel values of its pixels, and continuous runs of such pixels are determined as the edges of objects appearing in the second image. A rough sketch of this step is given below.
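The following snippet is one possible, non-limiting realization of this step using OpenCV: it marks pixels with an obvious brightness change and groups continuous edge pixels into contours, each approximating the edge of an object. The threshold values are illustrative assumptions.

```python
import cv2

def detect_object_edges(image_path: str, low_thresh: int = 80, high_thresh: int = 160):
    """Return contours (runs of continuous edge pixels) approximating the edges of objects in an image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Pixels whose brightness changes sharply relative to their neighbours become edge pixels.
    edges = cv2.Canny(gray, low_thresh, high_thresh)
    # Continuous edge pixels are grouped into contours, one per detected object boundary.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours

# first_image_objects = detect_object_edges("first_image.jpg")
# second_image_objects = detect_object_edges("second_image.jpg")
```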
The above edge detection is only one optional implementation for determining the objects appearing in the first image and in the second image; the objects may also be determined in other manners, such as with an image segmentation network or an image recognition network, which are not described here one by one. When the similarity between the first image and the second image satisfies the second preset condition, for example when the content of the first image is different from the content of the second image, this indicates that the image capturing device at the monitoring point and the image capturing device capturing the second image did not capture the first image and the second image in the same scene. In this case, the position information of the target monitoring point has no correlation with the first position information.
Thus, the similarity between the content of the first image and the content of the second image falls into three cases: first, the content of the first image and the content of the second image are identical; second, the content of the first image is partially identical to the content of the second image; third, the content of the first image is different from the content of the second image. How the position information of the target monitoring point is determined in each of the three cases is described below.
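A minimal sketch of the three-way classification itself follows. It assumes that the content of each image has already been reduced to a set of recognized object labels, which is one possible reading of "content" in this description.

```python
def classify_content_match(first_objects: set, second_objects: set) -> str:
    """Classify how the content of the first image relates to the content of the second image."""
    if first_objects == second_objects:
        return "identical"            # case (1): first preset condition
    if first_objects & second_objects:
        return "partially identical"  # case (2): first preset condition
    return "different"                # case (3): second preset condition
```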
(1) The content of the first image and the content of the second image are identical.
In a possible implementation, even when the content appearing in the first image and the content appearing in the second image are identical and the shooting angle of the first image is identical to the shooting angle of the second image, it still cannot be concluded that the image capturing device at the monitoring point and the image capturing device capturing the second image are located at the same position: they may face the same scene from the same orientation but at different distances from the content in the images.
Therefore, in this case, a further comparison based on the size of the first image and the size of the second image is required. If the size of the first image is the same as that of the second image, this shows that the image acquisition device at the monitoring point and the image acquisition device acquiring the second image captured the first image and the second image at the same position in the same scene. The terminal then takes the first position information as the position information of the target monitoring point. That is, if the first preset condition is that the content of the first image is the same as the content of the second image, the shooting angle of the first image is the same as the shooting angle of the second image, and the size of the first image is the same as the size of the second image, then the first position information is taken as the position information of the target monitoring point.
For example, user A takes an image of a Hangzhou building at a certain location and uploads it to a social platform, and the content matching degree between this image and the first image acquired by the image acquisition device at the target monitoring point is extremely high. The extremely high matching degree specifically means: the content of the captured image of the Hangzhou building is the same as the content of the first image, its shooting angle is the same as the shooting angle of the first image, and its size is the same as the size of the first image. In that case, the geographic position from which user A posted the social update is considered approximately equal to the geographic coordinates of the target monitoring point, that is, the position information of the target monitoring point.
The shooting angle of the first image and the shooting angle of the second image are determined as follows: the terminal establishes a three-dimensional coordinate system with the position of the monitoring point as the origin, and establishes another three-dimensional coordinate system based on the first position information, forming the three-dimensional space corresponding to the target monitoring point and the three-dimensional space corresponding to the first position information. The shooting angle of the first image and the shooting angle of the second image are then determined from these two three-dimensional spaces.
In addition, determining whether the shooting angle of the first image and the shooting angle of the second image are the same may also be described as performing similarity matching on the two shooting angles. Specifically, the terminal may set an angle threshold for deciding whether the shooting angles are the same. When the difference between the shooting angle of the first image and the shooting angle of the second image is below the angle threshold, the two shooting angles are considered the same; when the difference exceeds the angle threshold, the shooting angle of the first image is considered different from that of the second image.
The size of the first image and the size of the second image are determined as follows: the size of an image is the size of its image frame, so the terminal determines the size of the first image from the length and width of the first image, and the size of the second image from the length and width of the second image.
Further, the above-described determination of whether the size of the first image and the size of the second image are the same may also be referred to as performing similarity matching on the size of the first image and the size of the second image. Specifically, the terminal may set a size threshold for dividing whether the size of the first image and the size of the second image are the same. When the difference between the size of the first image and the size of the second image is greater than the size threshold, the size of the first image and the size of the second image are not the same. When the difference between the size of the first image and the size of the second image is less than the size threshold, the size of the first image is the same as the size of the second image.
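The two threshold comparisons above can be sketched as follows; the concrete threshold values are assumptions, since the embodiment only requires that some angle threshold and some size threshold be set.

```python
ANGLE_THRESHOLD_DEG = 5.0   # assumed value for the angle threshold
SIZE_THRESHOLD_PX = 20.0    # assumed value for the size threshold

def same_shooting_angle(angle1_deg: float, angle2_deg: float) -> bool:
    """Shooting angles are treated as the same when their difference is below the angle threshold."""
    return abs(angle1_deg - angle2_deg) < ANGLE_THRESHOLD_DEG

def same_image_size(size1: tuple, size2: tuple) -> bool:
    """Image-frame sizes (width, height) are treated as the same when their difference is below the size threshold."""
    width_diff = abs(size1[0] - size2[0])
    height_diff = abs(size1[1] - size2[1])
    return max(width_diff, height_diff) < SIZE_THRESHOLD_PX
```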
If the first preset condition is that the content of the first image is completely the same as the content of the second image, but one or more of the following differ: the size of the first image and the size of the second image, or the shooting angle of the first image and the shooting angle of the second image, then the conversion relationship between the three-dimensional space corresponding to the first position information and the three-dimensional space corresponding to the target monitoring point is determined, and three-dimensional space conversion is performed on the first position information based on this conversion relationship to obtain the position information of the target monitoring point.
It should be noted that, when the shooting angle of the first image and the shooting angle of the second image are different, in a special case, there may be a case where the size of the first image and the size of the second image are the same. For example, the first image and the second image are both the same regular sphere, and the resulting size is the same regardless of the angle at which the regular sphere is captured. However, in general, since irregularities are present in each scene, when the photographing angle of the first image and the photographing angle of the second image are different, the size of the first image and the size of the second image are substantially different.
For convenience of description later, a three-dimensional space corresponding to image acquisition is explained first.
The three-dimensional space corresponding to the first position information specifically includes: and a three-dimensional coordinate system established based on the first position information. For example, with the image capture device capturing the second image positioned at the origin o1, the depth of field is the z1 axis, where the z1 axis is perpendicular to the plane of the second image. The horizontal axis of the acquired second image plane based on the origin is the x1 axis, and the vertical axis of the acquired second image plane based on the origin is the y1 axis. And establishing a three-dimensional space corresponding to the first position information according to the three-dimensional coordinate system.
The three-dimensional space corresponding to the target monitoring point is specifically a three-dimensional coordinate system established by the terminal based on the target monitoring point. The position of the monitoring device at the target monitoring point is taken as the origin o2, and the depth of field is the z2 axis, where the z2 axis is perpendicular to the plane of the first image. The horizontal axis of the first image plane acquired by the monitoring device of the target monitoring point, based on the origin, is the x2 axis, and the vertical axis of that image plane, based on the origin, is the y2 axis. The three-dimensional space corresponding to the target monitoring point is established according to this coordinate system.
As shown in fig. 3, fig. 3 is a schematic diagram of an image three-dimensional coordinate system according to an embodiment of the present application. In fig. 3, the position of the monitoring device of the target monitoring point is the origin o 2. The depth of the monitored picture is the z2 axis, i.e. the direction perpendicular to the image. The horizontal axis of the image plane acquired by the monitoring device at the target monitoring point based on the origin is the x2 axis, and the vertical axis of the image plane acquired by the monitoring device at the target monitoring point based on the origin is the y2 axis.
Based on the above definition of the three-dimensional spaces, the conversion relationship between the three-dimensional space corresponding to the first position information and the three-dimensional space corresponding to the target monitoring point is determined as follows: the terminal acquires the coordinate points, in their respective three-dimensional coordinate systems, of the objects appearing in the second image and of the objects appearing in the first image. The coordinate points of the objects appearing in the second image are converted to the coordinate points of the corresponding objects appearing in the first image, and the conversion relationship is obtained through repeated conversion training.
Based on the conversion relationship, three-dimensional space conversion is performed on the first position information to obtain the position information of the target monitoring point as follows: using the space reconstruction and integration module, the terminal places the origin o1 of the three-dimensional coordinate system corresponding to the first position information and the origin o2 at the position of the monitoring device of the target monitoring point into the same three-dimensional coordinate system, obtaining a three-dimensional space under a common coordinate system. The position information of the target monitoring point can then be obtained according to the conversion relationship, where the conversion relationship includes a shift and a rotation of coordinates.
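One concrete way to obtain such a conversion relationship (a rotation plus a shift between the two three-dimensional spaces) is the least-squares rigid alignment of matched object coordinates sketched below (the Kabsch algorithm). This is an illustrative formulation under the assumption that matched object coordinates are available in both spaces, not necessarily the exact procedure of the space reconstruction and integration module.

```python
import numpy as np

def estimate_rigid_transform(points_src: np.ndarray, points_dst: np.ndarray):
    """Estimate rotation R and translation t such that R @ p_src + t approximates p_dst.

    points_src: Nx3 coordinates of matched objects in the space of the first position information (origin o1).
    points_dst: Nx3 coordinates of the same objects in the space of the target monitoring point (origin o2).
    """
    c_src = points_src.mean(axis=0)
    c_dst = points_dst.mean(axis=0)
    # Cross-covariance of the centred point sets, then SVD (Kabsch algorithm).
    H = (points_src - c_src).T @ (points_dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# The first position is the origin o1 of its own coordinate system, so after conversion
# its coordinates in the monitoring point's space are simply the translation component:
# R, t = estimate_rigid_transform(second_image_points, first_image_points)
# o1_in_monitoring_space = t
```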
(2) The content appearing in the first image is partially the same as the content appearing in the second image.
When the content appearing in the first image and the content appearing in the second image are only partially the same, the shooting angle of the first image and the shooting angle of the second image, as well as the size of the first image and the size of the second image, may both differ. Therefore, the conversion relationship between the three-dimensional space corresponding to the first position information and the three-dimensional space corresponding to the target monitoring point needs to be determined, and three-dimensional space conversion is performed on the first position information based on this conversion relationship to obtain the position information of the target monitoring point.
The specific implementation manner may refer to the related contents in (1), and will not be described herein again.
(3) The content appearing in the first image and the content appearing in the second image are both different, i.e. a second preset condition.
When the content appearing in the first image and the content appearing in the second image are entirely different, this indicates that the image acquisition device at the monitoring point and the image acquisition device acquiring the second image did not capture the first image and the second image in the same scene. In this case, the content displayed in the second image has no reference value for determining the position information of the target monitoring point, so the terminal needs to determine a reference object from the objects appearing in the first image and determine the position information of the target monitoring point based on the position information of the reference object and the shooting parameters.
In a possible implementation manner, the above-mentioned implementation manner for determining the reference object from the objects appearing in the first image is: the terminal determines all objects appearing in the first image as reference objects.
In another possible implementation, one object among all the objects appearing in the first image is selected as the reference object. The selected reference object is an object whose name can be identified, such as a shop or a road sign.
One object is selected from all the objects appearing in the first image as follows: the terminal randomly selects one object among all the objects appearing in the first image as the reference object. Alternatively, the terminal displays the objects appearing in the first image on the display interface in the form of an object list or object images, the user triggers one object in the object list or one object in the object images as the reference object through a preset operation, and the terminal determines that reference object after detecting the triggering operation.
The above implementation manners of determining the position information of the target monitoring point according to the reference object may be divided into two types, and the first type is that in the case of determining the position information of one or more reference objects, the terminal determines the position information of the target monitoring point based on the position information of one or more reference objects and the shooting parameters. The second is that in case that the location information of one or more reference objects cannot be determined, the terminal determines the location information of the target monitoring point based on the names of one or more reference objects.
In the first implementation manner, the position information of a reference object is determined as follows: the terminal determines the identification information of the one or more reference objects by using the parsing module, and obtains, from third-party software, the position information corresponding to the identification information of the one or more reference objects. The identification information of a reference object is the name of the reference object. For example, the terminal obtains the location information of the Starbucks A-road shop from certain service software according to the two identifiers, namely "Starbucks" and "A-road shop".
The position information of the target monitoring point is determined based on the position information of the reference object and the shooting parameters as follows. Since the reference object is an object appearing in the first image, the plane coordinates of the reference object in the two-dimensional coordinate system corresponding to the first image can be determined from the relative positional relationship between the pixels in the first image; these plane coordinates may also be referred to as the two-dimensional camera coordinates of the reference object. Furthermore, the position information of the reference object is its position coordinates in geodetic coordinates, which may generally be a longitude and latitude. Therefore, based on the two-dimensional camera coordinates of the reference object, the position information of the reference object, and the shooting parameters, the conversion relationship between the two-dimensional coordinate system and the geodetic coordinate system can be determined. Since the coordinates of the target monitoring point in the two-dimensional coordinate system are known (i.e., the origin of the two-dimensional coordinate system), knowing this conversion relationship and those coordinates, the coordinates of the target monitoring point in the geodetic coordinate system, that is, the position information of the target monitoring point, can be determined.
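A simplified sketch of this final conversion is shown below. It assumes the scene is small enough that the mapping from the first image's two-dimensional coordinate system to longitude/latitude can be approximated by a planar affine transform fitted on at least three reference objects; the embodiment itself derives the conversion from the shooting parameters as well, so this is only an illustrative substitute.

```python
import numpy as np

def fit_affine_image_to_geodetic(img_pts: np.ndarray, geo_pts: np.ndarray) -> np.ndarray:
    """Fit a 2D affine transform mapping image coordinates (u, v) to geodetic coordinates (lon, lat).

    img_pts: Nx2 two-dimensional camera coordinates of the reference objects (N >= 3).
    geo_pts: Nx2 longitude/latitude of the same reference objects.
    """
    n = img_pts.shape[0]
    A = np.hstack([img_pts, np.ones((n, 1))])        # rows of [u, v, 1]
    # Least-squares solve A @ params = geo_pts for the 3x2 affine parameters.
    params, _, _, _ = np.linalg.lstsq(A, geo_pts, rcond=None)
    return params

def monitoring_point_geodetic(params: np.ndarray) -> np.ndarray:
    """The target monitoring point sits at the origin (0, 0) of the two-dimensional coordinate system."""
    origin = np.array([0.0, 0.0, 1.0])
    return origin @ params                           # approximate (lon, lat) of the monitoring point
```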
In the second implementation manner, the terminal determines the identified names of the one or more reference objects by using the parsing module. If the position information of the reference objects is not obtained from their identification information, the tag calibration module of the terminal obtains the labeling information of the reference objects and uses the labeling information of the one or more reference objects as the position information of the monitoring point. The labeling information of a reference object may be input by the user through a preset operation.
Further, the tag calibration module of the terminal may also obtain the labeling information of the reference objects together with the positional relationships between different reference objects, and use the labeling information of the reference objects and the positional relationships between different reference objects as the position information of the monitoring point. The labeling information of the reference objects and the positional relationships between different reference objects may be input by the user through a preset operation.
The range of the position of the target monitoring point can be reduced through the position relation among the plurality of reference objects, so that a user can accurately determine the position information of the target monitoring point.
For example, as shown in fig. 4, which is a schematic image diagram provided in the embodiment of the present application, there are two reference objects whose labeling information is Starbucks and Huarun Wanjia, respectively. The positional relationship between the Starbucks and the Huarun Wanjia is that they are separated by a pedestrian path.
Step 203: and the terminal displays the position information of the target monitoring point.
The terminal marks the determined position information of the target monitoring point by using the label calibration module and displays the position information on a display interface of the terminal so as to be convenient for a user to check.
In summary, in the embodiment of the present application, the position information of the target monitoring point is determined through the first image included in the monitoring point data and the second image included in the external data. Therefore, the position information of the target monitoring point does not need to be manually collected and recorded on site of the target monitoring point, and can be directly determined according to the first image and the second image, namely, the position information of the monitoring point is determined based on the content identification of the images. Therefore, the manpower is saved, and the speed of determining the position information of the target monitoring point is increased.
The method provided by the embodiments of the present application is further explained below by taking fig. 5 as an example. Fig. 5 is a specific flowchart of a method for determining a location of a monitoring point according to an embodiment of the present disclosure. It should be noted that the embodiment shown in fig. 5 is only a partial optional technical solution in the embodiment shown in fig. 2, and does not constitute a limitation on the determination method of the monitoring point position provided in the embodiment of the present application.
The method comprises the following steps. 1. The terminal responds to a point location calibration instruction for the target monitoring point and starts to obtain monitoring point data and external data. The monitoring point data includes the monitoring point image data in fig. 5 and the internal parameters of the monitoring point, i.e., the shooting parameters. The external data is external data with longitude and latitude information.
2. And respectively carrying out similarity matching on the content of the first image and the content of the second image, the shooting angle of the first image and the shooting angle of the second image, and the size of the first image and the size of the second image by using an image analysis technology. That is, the image analysis technology is applied to primarily match the shot picture of the monitoring point with external data.
3. And calculating geographic information such as geographic coordinates of the monitoring points in different modes based on the initial matching result. That is, the content of the first image and the content of the second image, the shooting angle of the first image and the shooting angle of the second image, the size of the first image and the size of the second image are respectively subjected to similarity matching by using an image analysis technology, and the position information of the monitoring point is respectively determined according to different similarity matching degrees.
4. And marking the monitoring point based on the determined position information of the target monitoring point, and finishing marking.
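As a rough illustration of the dispatch in steps 2 to 4, the following Python sketch branches on a precomputed matching result and selects one of the three calculation paths described above. The thresholds, function names, and the way the similarity score is obtained are assumptions; an actual implementation would plug in a concrete image-analysis technique (for example feature matching) and the position calculations described earlier.

from typing import Optional, Tuple

Position = Tuple[float, float]  # (longitude, latitude)

def choose_branch(content_similarity: float, angle_same: bool, size_same: bool) -> str:
    # Hypothetical thresholds for "completely the same" and "partially the same".
    FULL, PARTIAL = 0.95, 0.50
    if content_similarity >= FULL and angle_same and size_same:
        return "reuse_first_position"          # images match completely
    if content_similarity >= PARTIAL:
        return "three_dimensional_conversion"  # contents match but viewpoint/size differ
    return "reference_object_and_parameters"   # contents do not match

def locate(content_similarity: float, angle_same: bool, size_same: bool,
           first_position: Optional[Position]) -> Optional[Position]:
    branch = choose_branch(content_similarity, angle_same, size_same)
    if branch == "reuse_first_position":
        return first_position
    # The two remaining branches (3-D conversion, back-projection from a reference
    # object) are sketched separately further below; here they are left unresolved.
    return None

print(choose_branch(0.98, True, True))    # -> reuse_first_position
print(choose_branch(0.70, False, True))   # -> three_dimensional_conversion
print(choose_branch(0.10, False, False))  # -> reference_object_and_parameters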
In summary, in the embodiment of the present application, the position information of the target monitoring point is determined through the first image included in the monitoring point data and the second image included in the external data. The position information can thus be determined directly from the first image and the second image, that is, based on content recognition of the images, which saves manpower and speeds up the determination of the position information of the target monitoring point.
Fig. 6 is a schematic structural diagram of a device for determining the location of a monitoring point according to an embodiment of the present application; the device may be implemented by software, hardware, or a combination of the two. The apparatus 600 for determining the position of the monitoring point may include: an acquisition module 601, a determination module 602, and a display module 603.
The acquisition module is used for acquiring monitoring point data and external data in response to a selection operation on a point location calibration control on a display interface, where the monitoring point data includes a first image, the external data includes a second image, the first image is an image acquired by an image acquisition device at a target monitoring point, and the point location calibration control is used for calibrating the position information of the target monitoring point;
the determining module is used for determining the position information of the target monitoring point based on the first image and the second image;
and the display module is used for displaying the position information of the target monitoring point.
Optionally, the external data further includes first location information, where the first location information is location information of an image acquisition device that acquires the second image, and the monitoring point data further includes shooting parameters of the image acquisition device at the target monitoring point;
a determination module comprising:
the first determining unit is used for determining the position information of the target monitoring point based on the first position information under the condition that the matching degree between the first image and the second image meets a first preset condition;
and the second determining unit is used for determining the position information of the target monitoring point based on the first image and the shooting parameters under the condition that the matching degree between the first image and the second image meets a second preset condition.
Optionally, the first determining unit is configured to use the first position information as the position information of the target monitoring point when the content of the first image and the content of the second image, the shooting angle of the first image and the shooting angle of the second image, and the size of the first image and the size of the second image are all the same.
Optionally, the first determining unit is configured to determine a conversion relationship between the three-dimensional space corresponding to the first position information and the three-dimensional space corresponding to the target monitoring point when the content of the first image and the content of the second image are the same but one or more of the size of the first image and the size of the second image, and the shooting angle of the first image and the shooting angle of the second image, are different, or when the content of the first image and the content of the second image are partially the same; and to perform three-dimensional space conversion on the first position information based on the conversion relationship to obtain the position information of the target monitoring point.
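A minimal sketch of such a three-dimensional space conversion is given below, assuming the conversion relationship has already been determined as a rigid-body rotation R and translation t; how R and t are derived from the image match is not shown, and the example values are hypothetical.

import numpy as np

def convert_position(first_position_xyz, rotation, translation):
    # p_target = R @ p_external + t: map the external image's camera position into
    # the three-dimensional space of the target monitoring point.
    return rotation @ np.asarray(first_position_xyz, dtype=float) + np.asarray(translation, dtype=float)

# Hypothetical example: the two viewpoints differ by a 30-degree yaw and a 5 m offset.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([5.0, 0.0, 0.0])
print(convert_position([100.0, 20.0, 3.0], R, t))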
Optionally, the second determining unit is configured to determine the reference object from the objects appearing in the first image if the content of the first image and the content of the second image are different;
and to determine the position information of the target monitoring point based on the position information of the reference object and the shooting parameters.
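As an illustration of this branch, the sketch below back-projects from the reference object to the camera under a local flat-earth approximation, assuming the shooting parameters yield an azimuth and an estimated distance to the reference object. The step of estimating that distance from the shooting parameters (e.g. from the focal length and the reference object's pixel size) is not shown, and all numbers are hypothetical.

import math

def estimate_monitoring_point(ref_lon: float, ref_lat: float,
                              azimuth_deg: float, distance_m: float):
    # The camera looks along the azimuth, so the monitoring point lies distance_m
    # metres behind the reference object along that direction.
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(ref_lat))
    az = math.radians(azimuth_deg)
    d_north = -distance_m * math.cos(az)
    d_east = -distance_m * math.sin(az)
    return (ref_lon + d_east / m_per_deg_lon, ref_lat + d_north / m_per_deg_lat)

# Hypothetical example: reference object 40 m in front of the camera, shot facing north-east.
print(estimate_monitoring_point(120.1000, 30.2000, azimuth_deg=45.0, distance_m=40.0))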
Optionally, the second determining unit is further configured to:
if the position information of the reference object is not obtained, obtaining the marking information of the reference object;
and taking the labeling information of the reference object as the position information of the monitoring point.
In summary, in the embodiment of the present application, the position information of the target monitoring point is determined through the first image included in the monitoring point data and the second image included in the external data. The position information can thus be determined directly from the first image and the second image, that is, based on content recognition of the images, which saves manpower and speeds up the determination of the position information of the target monitoring point.
It should be noted that: the device for determining a location of a monitored point provided in the above embodiment is only illustrated by dividing the functional modules when determining the location of the monitored point, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for determining a location of a monitored point and the method for determining a location of a monitored point provided in the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments and are not described herein again.
Fig. 7 is a block diagram of a terminal 700 according to an embodiment of the present application. The terminal 700 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so on.
In general, terminal 700 includes: a processor 701 and a memory 702.
The processor 701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also called a Central Processing Unit (CPU), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to implement the method for determining a location of a monitor point provided by the method embodiments herein.
In some embodiments, the terminal 700 may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 704, a display screen 705, a camera assembly 706, an audio circuit 707, a positioning component 708, and a power source 709.
The peripheral interface 703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 701 and the memory 702. In some embodiments, processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 704 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 704 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, the display screen 705 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 701 as a control signal for processing. At this time, the display screen 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 705, disposed on the front panel of the terminal 700; in other embodiments, there may be at least two display screens 705, respectively disposed on different surfaces of the terminal 700 or in a folded design; in other embodiments, the display screen 705 may be a flexible display disposed on a curved surface or a folded surface of the terminal 700. The display screen 705 may even be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display screen 705 may be made of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 706 is used to capture images or video. Optionally, camera assembly 706 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, the main camera and the wide-angle camera are fused to realize panoramic shooting and a VR (Virtual reality) shooting function or other fusion shooting functions. In some embodiments, camera assembly 706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing or inputting the electric signals to the radio frequency circuit 704 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 700. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 707 may also include a headphone jack.
The positioning component 708 is used to locate the current geographic location of the terminal 700 for navigation or LBS (Location Based Service). The positioning component 708 may be a positioning component based on the United States' GPS (Global Positioning System), China's BeiDou system, Russia's GLONASS system, or the European Union's Galileo system.
Power supply 709 is provided to supply power to various components of terminal 700. The power source 709 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When power supply 709 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 700 also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the terminal 700. For example, the acceleration sensor 711 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 701 may control the display screen 705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 711. The acceleration sensor 711 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 712 may detect a body direction and a rotation angle of the terminal 700, and the gyro sensor 712 may cooperate with the acceleration sensor 711 to acquire a 3D motion of the terminal 700 by the user. From the data collected by the gyro sensor 712, the processor 701 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 713 may be disposed on a side frame of terminal 700 and/or underneath display 705. When the pressure sensor 713 is disposed on a side frame of the terminal 700, a user's grip signal on the terminal 700 may be detected, and the processor 701 performs right-left hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at a lower layer of the display screen 705, the processor 701 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 705. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 714 is used for collecting a fingerprint of a user, and the processor 701 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the identity of the user according to the collected fingerprint. When the user identity is identified as a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 714 may be disposed on the front, back, or side of the terminal 700. When a physical button or a vendor Logo is provided on the terminal 700, the fingerprint sensor 714 may be integrated with the physical button or the vendor Logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the display screen 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the ambient light intensity is high, the display brightness of the display screen 705 is increased; when the ambient light intensity is low, the display brightness of the display screen 705 is adjusted down. In another embodiment, processor 701 may also dynamically adjust the shooting parameters of camera assembly 706 based on the ambient light intensity collected by optical sensor 715.
A proximity sensor 716, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 700. The proximity sensor 716 is used to collect the distance between the user and the front surface of the terminal 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal 700 gradually decreases, the processor 701 controls the display screen 705 to switch from the screen-on state to the screen-off state; when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal 700 gradually increases, the processor 701 controls the display screen 705 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 7 is not intended to be limiting of terminal 700 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Embodiments of the present application further provide a non-transitory computer-readable storage medium, and when instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to execute the method for determining a location of a monitoring point provided in the above embodiments.
The embodiment of the present application further provides a computer program product containing instructions, which when run on a terminal, causes the terminal to execute the method for determining a location of a monitoring point provided in the foregoing embodiment.
Fig. 8 is a schematic structural diagram of a server according to an embodiment of the present application. The server may be a server in a background server cluster. Specifically:
the server 800 includes a Central Processing Unit (CPU)801, a system memory 804 including a Random Access Memory (RAM)802 and a Read Only Memory (ROM)803, and a system bus 805 connecting the system memory 804 and the central processing unit 801. The server 800 also includes a basic input/output system (I/O system) 806, which facilitates transfer of information between devices within the computer, and a mass storage device 807 for storing an operating system 813, application programs 814, and other program modules 815.
The basic input/output system 806 includes a display 808 for displaying information and an input device 809, such as a mouse or keyboard, for the user to input information. The display 808 and the input device 809 are both connected to the central processing unit 801 through an input/output controller 810 connected to the system bus 805. The basic input/output system 806 may also include the input/output controller 810 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input/output controller 810 also provides output to a display screen, a printer, or another type of output device.
The mass storage device 807 is connected to the central processing unit 801 through a mass storage controller (not shown) connected to the system bus 805. The mass storage device 807 and its associated computer-readable media provide non-volatile storage for the server 800. That is, the mass storage device 807 may include a computer-readable medium (not shown) such as a hard disk or CD-ROM drive.
Without loss of generality, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media is not limited to the foregoing. The system memory 804 and mass storage 807 described above may be collectively referred to as memory.
According to various embodiments of the present application, the server 800 may also operate by being connected, through a network such as the Internet, to a remote computer on the network. That is, the server 800 may be connected to the network 812 through the network interface unit 811 coupled to the system bus 805, or may be connected to other types of networks or remote computer systems (not shown) using the network interface unit 811.
The memory further includes one or more programs, and the one or more programs are stored in the memory and configured to be executed by the CPU. The one or more programs include instructions for performing a method for determining a location of a monitoring point as provided by an embodiment of the present application, and the method includes:
the method comprises the steps that in response to selection operation of a point location calibration control on a display interface, monitoring point data and external data are obtained, the monitoring point data comprise first images, the external data comprise second images, the first images are images collected by image collection equipment at target monitoring points, and the point location calibration control is used for calibrating position information of the target monitoring points;
determining position information of a target monitoring point based on the first image and the second image;
and displaying the position information of the target monitoring point.
Embodiments of the present application further provide a non-transitory computer-readable storage medium, and when instructions in the storage medium are executed by a processor of a server, the server is enabled to execute the method for determining a location of a monitoring point provided in the foregoing embodiments.
Embodiments of the present application further provide a computer program product containing instructions, which when run on a server, cause the server to execute the method for determining a location of a monitoring point provided in the foregoing embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only a preferred embodiment of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (8)

1. A method for determining a location of a monitoring point, the method comprising:
the method comprises the steps that monitoring point data and external data are obtained in response to selection operation of a point location calibration control on a display interface, the monitoring point data comprises a first image, the external data comprises a second image, the first image is an image collected by image collection equipment at a target monitoring point, and the point location calibration control is used for calibrating position information of the target monitoring point;
determining the position information of the target monitoring point based on the first image and the second image;
displaying the position information of the target monitoring point;
the determining the position information of the target monitoring point based on the first image and the second image comprises:
determining a reference object from objects appearing in the first image under the condition that the monitoring point data further comprises shooting parameters of image acquisition equipment at the target monitoring point and the matching degree between the first image and the second image meets a second preset condition, wherein the second preset condition indicates that the contents of the first image and the second image are different, and determining the position information of the target monitoring point based on the position information of the reference object and the shooting parameters, wherein the position information of the reference object is the position information of the reference object under geodetic coordinates;
and/or,
and under the condition that the external data further comprises first position information and the matching degree between the first image and the second image meets a first preset condition, determining the position information of the target monitoring point based on the first position information, wherein the first position information is the position information of image acquisition equipment for acquiring the second image, and the first preset condition indicates that the contents of the first image and the second image are completely the same or partially the same.
2. The method of claim 1, wherein in a case that the matching degree of the first image and the second image satisfies a first preset condition, determining the position information of the target monitoring point based on the first position information comprises:
and under the condition that the content of the first image is the same as that of the second image, the shooting angle of the first image is the same as that of the second image, and the size of the first image is the same as that of the second image, taking the first position information as the position information of the target monitoring point.
3. The method according to claim 1, wherein the determining the position information of the target monitoring point based on the first position information in the case that the matching degree of the first image and the second image satisfies a first preset condition comprises:
in the case where the contents of the first image and the second image are the same but one or more of the size of the first image and the size of the second image, the photographing angle of the first image, and the photographing angle of the second image are different, or,
in case the content of the first image and the content of the second image are partly identical,
determining a conversion relation between a three-dimensional space corresponding to the first position information and a three-dimensional space corresponding to the target monitoring point; and performing three-dimensional space conversion on the first position information based on the conversion relation to obtain the position information of the target monitoring point.
4. The method of claim 1, wherein the method further comprises:
if the position information of the reference object is not obtained, obtaining the marking information of the reference object;
and taking the marking information of the reference object as the position information of the monitoring point.
5. An apparatus for determining a location of a monitoring point, the apparatus comprising:
the system comprises an acquisition module, a display module and a point location calibration control, wherein the acquisition module is used for responding to selection operation of the point location calibration control on a display interface, acquiring monitoring point data and external data, the monitoring point data comprises a first image, the external data comprises a second image, the first image is an image acquired by image acquisition equipment at a target monitoring point, and the point location calibration control is used for calibrating position information of the target monitoring point;
the determining module is used for determining the position information of the target monitoring point based on the first image and the second image;
the display module is used for displaying the position information of the target monitoring point;
the determining module comprises:
a second determining unit, configured to determine a reference object from an object appearing in the first image when the monitoring point data further includes shooting parameters of an image capturing device at the target monitoring point, and a matching degree between the first image and the second image satisfies a second preset condition, where the second preset condition indicates that contents of the first image and the second image are different, and determine position information of the target monitoring point based on position information of the reference object and the shooting parameters, where the position information of the reference object is position information of the reference object in geodetic coordinates;
and/or,
a first determining unit, configured to determine, based on first position information when the external data further includes the first position information and a matching degree between the first image and the second image satisfies a first preset condition, position information of the target monitoring point, where the first position information is position information of an image capturing device that captures the second image, and the first preset condition indicates that contents of the first image and the second image are completely the same or partially the same.
6. The apparatus of claim 5,
the first determining unit is used for taking the first position information as the position information of the target monitoring point when the content of the first image and the content of the second image, the shooting angle of the first image and the shooting angle of the second image, and the size of the first image and the size of the second image are the same;
the first determining unit is configured to determine a conversion relationship between a three-dimensional space corresponding to the first position information and a three-dimensional space corresponding to the target monitoring point when the content of the first image and the content of the second image are the same, but one or more of the size of the first image and the size of the second image, and the shooting angle of the first image and the shooting angle of the second image are different, or when the content of the first image and the content of the second image are partially the same; performing three-dimensional space conversion on the first position information based on the conversion relation to obtain position information of the target monitoring point;
the second determination unit is further configured to:
if the position information of the reference object is not obtained, obtaining the marking information of the reference object;
and taking the marking information of the reference object as the position information of the monitoring point.
7. A computer apparatus, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of any of the above claims 1 to 4.
8. A computer-readable storage medium having stored thereon instructions which, when executed by a processor, carry out the steps of the method of any of claims 1 to 4.
CN202011589233.1A 2020-12-29 2020-12-29 Method and device for determining position of monitoring point and computer storage medium Active CN112804481B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011589233.1A CN112804481B (en) 2020-12-29 2020-12-29 Method and device for determining position of monitoring point and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011589233.1A CN112804481B (en) 2020-12-29 2020-12-29 Method and device for determining position of monitoring point and computer storage medium

Publications (2)

Publication Number Publication Date
CN112804481A CN112804481A (en) 2021-05-14
CN112804481B true CN112804481B (en) 2022-08-16

Family

ID=75805479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011589233.1A Active CN112804481B (en) 2020-12-29 2020-12-29 Method and device for determining position of monitoring point and computer storage medium

Country Status (1)

Country Link
CN (1) CN112804481B (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7681796B2 (en) * 2006-01-05 2010-03-23 International Business Machines Corporation Mobile device tracking
DE102007056835A1 (en) * 2007-11-26 2009-05-28 Robert Bosch Gmbh Image processing module for estimating an object position of a surveillance object, method for determining an object position of a surveillance object and computer program
JP5389879B2 (en) * 2011-09-20 2014-01-15 株式会社日立製作所 Imaging apparatus, surveillance camera, and camera screen masking method
DE102012106860A1 (en) * 2012-07-27 2014-02-13 Jenoptik Robot Gmbh Device and method for identifying and documenting at least one object passing through a radiation field
JP6482856B2 (en) * 2014-12-22 2019-03-13 セコム株式会社 Monitoring system
CN109886078B (en) * 2018-12-29 2022-02-18 华为技术有限公司 Retrieval positioning method and device for target object
CN111754581A (en) * 2019-03-28 2020-10-09 阿里巴巴集团控股有限公司 Camera calibration method, roadside sensing equipment and intelligent traffic system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant