CN111354046A - Indoor camera positioning method and positioning system - Google Patents

Indoor camera positioning method and positioning system

Info

Publication number
CN111354046A
CN111354046A (application CN202010240121.9A)
Authority
CN
China
Prior art keywords
camera
target object
distance
projection point
linear distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010240121.9A
Other languages
Chinese (zh)
Inventor
王婧思
毛龙飞
韩增云
张亮
张清勇
叶姗
苏陆
王清正
孙守富
毛德春
毛允德
甘吉平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xinlongde Big Data Technology Co ltd
Original Assignee
Beijing Xinlongde Big Data Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xinlongde Big Data Technology Co ltd filed Critical Beijing Xinlongde Big Data Technology Co ltd
Priority to CN202010240121.9A priority Critical patent/CN111354046A/en
Publication of CN111354046A publication Critical patent/CN111354046A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 - Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/16 - Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/56 - Extraction of image or video features relating to colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses an indoor camera positioning method and positioning system. A camera acquires image information; the straight-line distance D between a target object and the camera is calculated; the distance L′ between the ground projection points of the camera and the target object is then calculated from trigonometric relations; the position vector of the target object in the camera coordinate system is obtained from L′ and the rotation angle β; the world coordinates of the target object are obtained by coordinate conversion; the straight-line distance L between the projection point of the target object and the projection point of the camera is compared with the effective distance S of the face recognition technology to select a suitable recognition processing technology, yielding the personnel information corresponding to the target object; and the world coordinates of that personnel information are updated with the current world coordinates of the target object.

Description

Indoor camera positioning method and positioning system
Technical Field
The embodiment of the invention relates to the technical field of camera positioning, in particular to an indoor camera positioning method and an indoor camera positioning system.
Background
With the development of security monitoring technology, construction safety and working efficiency in traditional industries have improved across the board. Coal mine safety matters all the more: the wide construction range, the high degree of danger, and the large number of personnel involved make coal mining especially dependent on security monitoring technology for support.
Existing civil-blasting and mining sites mostly combine large numbers of cameras with manual checkpoints. The cameras only collect regional video images, which are centrally controlled and projected onto a large screen, and control-room personnel are assigned to watch the screen in real time to spot violations. This approach has the following serious problems:
1. The dependence on people, and on their professional level, is too high. Manually watching the screens is a very heavy workload, fatigue sets in easily, and violations consequently cannot be prevented in time, frequently giving rise to accidents and cases.
2. Screen space is limited. As cameras multiply, the screen area grows larger and larger, greatly increasing monitoring cost; and since the range of human vision is limited, all screens cannot be monitored simultaneously. Violations then have to be found later by querying recordings, so they are not discovered in time, which hampers evidence collection and the related law enforcement.
Disclosure of Invention
Therefore, the embodiments of the invention provide an indoor camera positioning method and positioning system, to solve the problem in the prior art that a low degree of automation prevents supervision from meeting current requirements.
In order to achieve the above object, an embodiment of the present invention provides the following:
an indoor camera positioning method comprises the following steps:
determining world coordinates (X, Y, H) of the camera, wherein X is a longitude value of a position where the camera is located, Y is a latitude value of the position where the camera is located, and H is a height value of the camera;
the camera collects video data in the monitored area, and image information is collected from the video data;
identifying the central point of a target object in the image information, controlling the central point of the camera to focus on the central point of the target object, extracting the pixel width P of the target object in the image, and simultaneously collecting the pitch angle α of the camera and the rotation angle β of the camera;
acquiring a focal length F of the camera when the camera focuses and shoots a target in real time, and calculating a linear distance D between the target object and the camera according to the F and the P;
calculating the straight-line distance D′ from the projection point of the target object to the camera according to the trigonometric relation using the height H of the camera and the pitch angle α of the camera, to obtain D/D′ = Q, where Q is the ratio of the straight-line distance D to the straight-line distance D′;
calculating a linear distance L between a projection point of the target object and a projection point of the camera according to a trigonometric function relation by using the height H of the camera, and calculating a distance L' between the projection point of the camera and the projection point of the target object in the vertical direction by using the distance L and a proportion Q;
respectively calculating, from the distance L′ and the rotation angle β according to the trigonometric relations, the relative offsets XT and YT on the longitude axis and the latitude axis between the projection point of the camera and the vertical projection point of the target object, and finally obtaining the current world coordinates (X′, Y′) of the target object from the longitude and latitude (X, Y) of the camera and the relative offsets (XT, YT);
if the distance L′ is smaller than the effective distance S, performing face recognition processing on the image information;
if the distance L′ is not smaller than the effective distance S, performing specific object recognition processing on the image information;
and after the face recognition processing or the specific object recognition processing, obtaining personnel information corresponding to the target object, and updating the world coordinates of the personnel information by using the current world coordinates of the target object.
Further, identifying the center point of the target object in the image information comprises:
and performing color space conversion and external contour feature extraction on the image information, comparing the image information with an established object feature database to identify a target object, and then extracting the central point of the target object and the image speed width P of the target object.
Further, the straight-line distance D between the target object and the camera is calculated as:
D=(W·F)/P;
in the formula:
P is the pixel width of the target object in the image;
W is the physical width of the target object;
F is the camera focal length.
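As an illustration, the following is a minimal Python sketch of this similar-triangles (pinhole) relation; treating the focal length F as expressed in pixels is an assumption, since the units are not stated here.

    def linear_distance(W, F, P):
        """Straight-line distance D between the target object and the camera."""
        # W: physical width of the target (e.g. the helmet width), in metres
        # F: camera focal length, assumed here to be expressed in pixels
        # P: pixel width of the target object in the image
        return (W * F) / P

    # Example: a 0.30 m wide helmet imaged 60 px wide with F = 1000 px is about 5 m away.
    print(linear_distance(0.30, 1000, 60))  # 5.0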
Further, the straight-line distance D′ from the projection point of the target object to the camera is calculated as:
D’=H/cosα;
in the formula:
H is the height value of the camera;
α is the pitch angle of the camera.
Further, the distance L′ between the vertical projection point of the target object and the projection point of the camera is calculated as:
L=H·tanα;
L’=L·Q;
in the formula:
H is the height value of the camera;
α is the pitch angle of the camera;
L is the straight-line distance between the projection point of the target object and the projection point of the camera;
Q is the ratio of the straight-line distance D between the target object and the camera to the straight-line distance D′ from the projection point of the target object to the camera.
Further, the relative offset XT on the longitude axis and the relative offset YT on the latitude axis are calculated by:
XT=L’sinβ;
YT=L’cosβ;
in the formula:
L′ is the distance between the vertical projection point of the target object and the projection point of the camera;
β is the rotation angle of the camera.
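Putting the above formulas together, a minimal Python sketch of the D′ → Q → L → L′ → (XT, YT) chain might look as follows; taking the angles in degrees is an assumption about how the pan-tilt reports them.

    import math

    def position_vector(D, H, alpha_deg, beta_deg):
        """Offsets (XT, YT) of the target's vertical projection point from the
        camera's projection point, per the formulas above."""
        alpha = math.radians(alpha_deg)  # pitch angle of the camera
        beta = math.radians(beta_deg)    # rotation angle of the camera
        D_prime = H / math.cos(alpha)    # D' = H / cos(alpha)
        Q = D / D_prime                  # Q = D / D'
        L = H * math.tan(alpha)          # L = H * tan(alpha)
        L_prime = L * Q                  # L' = L * Q
        XT = L_prime * math.sin(beta)    # offset on the longitude axis
        YT = L_prime * math.cos(beta)    # offset on the latitude axis
        return XT, YT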
Further, the specific object recognition processing includes:
performing gray-level processing and Gaussian smoothing denoising on the image information, then applying the Canny edge detection algorithm of OpenCV to extract the edges of the specific object and obtain its contour data; and extracting features from the contour data of the specific object and comparing them with the established object feature database to identify the personnel identification information corresponding to the contour data of the specific object.
Further, if two or more world coordinate update instructions for the personnel information exist at the same time, the selection rule is: an update instruction issued after face recognition processing has higher priority than an update instruction issued after specific object recognition processing; between update instructions issued by the same type of processing, the later one prevails.
A positioning system using the indoor camera positioning method, comprising:
a camera provided with an electronic pan-tilt.
The camera position acquisition module is used for determining the current world coordinate information (X, Y, H) of the camera, wherein X is the longitude value of the position where the camera is located, Y is the latitude value of that position, and H is the height value of the camera.
The target object identification module is used for identifying the central point of the target object in the image information and determining the pixel width P of the target object in the image.
The focusing control module controls the electronic pan-tilt so that the camera focuses on the target object, the focal point being the central point of the camera, and simultaneously collects the pitch angle α of the camera and the rotation angle β of the camera.
The optical parameter acquisition module is used for acquiring the focal length F of the camera in real time.
The linear distance acquisition module is used for calculating the straight-line distance D between the target object and the camera from the focal length F and the pixel width P of the target object.
The proportion acquisition module is used for calculating the straight-line distance D′ from the projection point of the target object to the camera from the height H of the camera and the pitch angle α of the camera according to the trigonometric relation, namely D/D′ = Q, where Q is the ratio of the straight-line distance D to the straight-line distance D′.
The projection point distance acquisition module is used for calculating the straight-line distance L from the projection point of the target object to the projection point of the camera from the height H of the camera according to the trigonometric relation, and calculating the distance L′ between the projection point of the camera and the vertical projection point of the target object using the distance L and the proportion Q.
The coordinate conversion module is used for calculating, from the distance L′ and the rotation angle β according to the trigonometric relations, the relative offsets XT and YT on the longitude axis and the latitude axis between the projection point of the camera and the vertical projection point of the target object, and finally obtaining the current world coordinates (X′, Y′) of the target object from the longitude and latitude (X, Y) of the camera and the relative offsets (XT, YT).
The identification selection module is used for comparing the distance L′ with the effective distance S and selecting a recognition processing mode according to the comparison result: if the distance L′ is smaller than the effective distance S, face recognition processing is performed on the image information; if the distance L′ is not smaller than the effective distance S, specific object recognition processing is performed on the image information.
The coordinate updating module is used for obtaining the personnel information corresponding to the target object after the recognition processing, and updating the world coordinates of that personnel information with the current world coordinates of the target object.
Further, the system comprises a priority module for selecting an update instruction according to a priority rule when two or more world coordinate update instructions for the personnel information exist at the same time, wherein the priority rule is as follows: an update instruction issued after face recognition processing has higher priority than an update instruction issued after specific object recognition processing; between update instructions issued by the same type of processing, the later one prevails.
The embodiments of the invention provide the following advantages:
The indoor camera positioning method and positioning system provided by the embodiments of the invention establish a dual coordinate system of the world and the camera, acquire the relative distance between the camera and the target object, derive from it the position vector of the target object in the camera coordinate system, and determine the world coordinates of the target object from the world coordinates of the camera and the position vector according to the conversion relation between the two coordinate systems, thereby positioning the target object.
The invention selects the recognition mode according to the positioning result and, drawing on the characteristics of several recognition modes, combines image processing, face recognition, color recognition, and contour recognition to recognize target objects remotely; the algorithm is simple and the data processing load small, meeting the requirements of low-cost, large-range monitoring.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are merely exemplary, and other drawings can be derived from them by those of ordinary skill in the art without inventive effort.
The structures, proportions, and sizes shown in this specification are intended only to complement the disclosed content for the understanding of those skilled in the art; they do not limit the conditions under which the invention can be implemented and so carry no technical significance in themselves. Any structural modification, change of proportion, or adjustment of size that does not affect the effects achievable by the invention shall still fall within the scope covered by the disclosed technical content.
Fig. 1 is a flowchart of a method for positioning an indoor camera according to an embodiment of the present invention;
fig. 2 is a schematic diagram of two coordinates of an indoor camera positioning method according to an embodiment of the present invention;
fig. 3 is a system structure diagram of a camera positioning system based on image processing according to an embodiment of the present invention.
In the figure:
1. a camera; 2. a target object; 3. a camera position acquisition module; 4. a target identification module; 5. a focus control module; 6. an optical parameter acquisition module; 7. a linear distance acquisition module; 8. a proportion obtaining module; 9. a projection point distance obtaining module; 10. a coordinate conversion module; 11. identifying a selection module; 12. a coordinate updating module; 13. a face recognition module; 14. and a specific object identification module.
Detailed Description
The present invention is described below through particular embodiments, and other advantages and effects of the invention will be readily apparent to those skilled in the art from the disclosure of this specification. The described embodiments are merely some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
In this specification, terms such as "upper", "lower", "left", "right", and "middle" are used for clarity of description only and are not intended to limit the implementable scope of the invention; changes or adjustments of such relative relationships, without substantial change to the technical content, are likewise within the implementable scope.
The software part of this technique is implemented with OpenCV, a cross-platform computer vision library released under the BSD (open source) license that runs on Linux, Windows, Android, and Mac OS. OpenCV is lightweight and efficient: it consists of a series of C functions and a small number of C++ classes, provides interfaces for languages such as Python, Ruby, and MATLAB, and implements many general-purpose algorithms in image processing and computer vision. OpenCV is written in C++ and its main interface is C++, but a large number of C interfaces are retained. The library also has many Python, Java, and MATLAB/Octave (version 2.5) interfaces, whose API functions are documented online. Support for C#, Ch, Ruby, and Go is also provided today.
As shown in fig. 1-2, an indoor camera positioning method includes the following steps:
A world coordinate system XW-OW-YW is acquired, with OW as the origin; OW-XW is the abscissa of the world coordinate system, i.e. the longitude value, and OW-YW is the ordinate, i.e. the latitude value.
A coordinate system X-C′-Y of the camera 1 is established, as shown in fig. 2, where C′ is the origin, X is the abscissa of the coordinate system of the camera 1, and Y is the ordinate.
Determining world coordinates (XW, YW) of the camera 1, wherein the coordinate XW is a longitude value of the position where the camera 1 is located, and the coordinate YW is a latitude value of the position where the camera 1 is located.
The camera 1 is arranged in monitored areas such as roadways and coal-face supports, and shoots video data within the monitored area. Image information containing the target object 2 is collected from the video data taken by the camera 1. In this embodiment the target object 2 is preferably a safety helmet: helmets share the same external shape (a semicircle) and size, so a uniform feature-recognition algorithm needs less feature data. An adjustable headband inside fixes the helmet on the head while leaving some buffer space for shock absorption. The helmet color is preferably bright yellow or bright red, which stands out against the mostly dark background colors of mines and tunnels and makes the helmet contour easy for the system to recognize.
In the image processing, color space conversion and external contour feature extraction are first performed on the image information, the result is compared with the established object feature database to identify the target object 2, and the central point of the target object 2 and the pixel width P of the target object 2 in the image are then extracted.
(1) Color space conversion
Common color spaces include RGB, CMY, HSV, and HSI, and OpenCV offers more than 150 color space conversion methods; the color spaces adopted in the present invention are RGB and HSV.
In the RGB color space, R denotes red, G denotes green, and B denotes blue; taking R, G, and B as the axes of a three-dimensional coordinate system, each coordinate point represents one color.
In the HSV color space, H denotes hue, usually referred to as an angle around a circle. S denotes saturation: at the center of the circle its value is 0 and the color is very light, and the color deepens as the radius increases. V denotes the brightness of the color: the bottom apex of the cone is black and the top is white. In practice, RGB colors are susceptible to environmental conditions such as glare, low light, and shadows. In contrast, the HSV color space is a simplified version of the Munsell color space, a perception-based color model. It separates a color signal into three attributes: hue (H), saturation (S), and value (brightness, V). Hue corresponds to the wavelength of light reflected from or transmitted through an object, i.e. colors distinguished by name, such as red, yellow, or blue; value is the lightness or darkness of a color; saturation is the depth of a color, e.g. deep red versus light red. The HSV color space reflects the way people perceive colors and is better suited to object recognition.
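By way of example, a short OpenCV sketch of the conversion and of isolating a bright-yellow helmet in HSV space follows; the HSV threshold values are illustrative assumptions, not values given in this disclosure.

    import cv2
    import numpy as np

    frame = cv2.imread("frame.jpg")                      # one image from the video data
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)         # OpenCV stores RGB as BGR

    # Assumed bounds for a bright-yellow helmet; bright red would need two hue ranges.
    lower_yellow = np.array([20, 100, 100])
    upper_yellow = np.array([35, 255, 255])
    mask = cv2.inRange(hsv, lower_yellow, upper_yellow)  # 255 where the color matches
    helmet_region = cv2.bitwise_and(frame, frame, mask=mask)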
(2) External contour feature extraction
Gray-level processing is first performed on the image information to obtain a gray-scale image and reduce the data processing load. A Gaussian filter is then applied, convolving the image for denoising so as to smooth it. Next, the Canny dual-threshold algorithm of OpenCV is applied to obtain the image edges; the pixel selection and rejection rules of the algorithm are the following a, b, and c:
a. if the gradient amplitude at a pixel position exceeds the high threshold, the pixel is retained as an edge pixel;
b. if the amplitude at a pixel position is below the low threshold, the pixel is excluded;
c. if the amplitude at a pixel position lies between the high and low thresholds, the pixel is retained only if it is connected to a pixel above the high threshold.
After the contour map is obtained, erosion and dilation are applied to the image to obtain clear contour edges. The contours must then be screened: the numerous contours in a picture are found with the cv2.findContours function, and the contour of the target object 2 is screened out. There are various screening methods, such as specific rules (selecting semicircular contours whose radius lies in a certain range), contour approximation (excluding non-semicircular contours), keypoint detection, local invariant descriptors, and keypoint matching. In this embodiment the size and shape of the safety helmet are uniform, so the processing can be simplified: recognition is done by color recognition plus specific rules, or color recognition plus contour approximation, which simplifies processing without reducing precision and greatly lowers the computational load. Since a contour is stored as pixel coordinates, the central pixel coordinate and the edge pixel coordinates, i.e. the central point of the target object 2 and the pixel width P of the target object 2 in the image, can be determined.
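A condensed Python sketch of this pipeline is given below, assuming OpenCV 4's two-value findContours return; the Canny thresholds and the semicircle screening rule (a contour roughly twice as wide as it is high) are illustrative assumptions.

    import cv2

    def helmet_center_and_width(frame, low=50, high=150):
        """Return the centre pixel and pixel width P of the helmet contour, or None."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)        # gray-level processing
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)           # Gaussian smoothing
        edges = cv2.Canny(blurred, low, high)                 # dual-threshold edge map
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        edges = cv2.dilate(cv2.erode(edges, kernel), kernel)  # erosion, then dilation
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in sorted(contours, key=cv2.contourArea, reverse=True):
            x, y, w, h = cv2.boundingRect(c)
            if w > 20 and 1.5 < w / max(h, 1) < 2.5:          # semicircle-like screen
                return (x + w // 2, y + h // 2), w            # central point and P
        return None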
The focal length F of the camera 1 when focusing on and shooting the target is acquired in real time.
As shown in fig. 2, the straight-line distance D between the target object 2 (point T) and the camera 1 (point C), i.e. segment CT, is calculated from the focal length F and the pixel width P of the target object 2 as follows:
D=(W·F)/P;
in the formula:
D is the straight-line distance between the target object 2 and the camera 1;
P is the pixel width of the target object 2 in the image;
W is the physical width of the target object 2, in this embodiment the width of the helmet;
F is the camera focal length.
Then, using the height H of the camera 1 and the pitch angle α of the camera 1, the straight-line distance D′ from the camera 1 (point C) to the projection point P of the target object 2, i.e. CP in fig. 2, is calculated according to the trigonometric relation:
D’=H/cosα;
in the formula:
D′ is the straight-line distance from the projection point of the target object 2 to the camera 1;
H is the height value of the camera 1;
α is the pitch angle of the camera 1.
The proportional relation between the straight-line distance D (CT) and the straight-line distance D′ (CP) is then calculated:
D/D’=Q;
in the formula:
D is the straight-line distance between the target object 2 and the camera 1;
D′ is the straight-line distance from the projection point of the target object 2 to the camera 1;
Q is the ratio of the straight-line distance D to the straight-line distance D′.
Using the height H of the camera 1, the straight-line distance L between the projection point C′ of the camera 1 and the projection point P of the target object 2, i.e. C′P in fig. 2, is calculated according to the trigonometric relation:
L=H·tanα;
in the formula:
H is the height value of the camera 1;
α is the pitch angle of the camera 1;
L is the straight-line distance between the projection point of the target object 2 and the projection point of the camera 1.
The distance L′ between the projection point C′ of the camera 1 and the vertical projection point P′ of the target object 2, i.e. C′P′ in fig. 2, is then calculated using the distance L and the proportion Q:
L’=L·Q;
L′ is the distance between the projection point of the camera 1 and the vertical projection point of the target object 2;
L is the straight-line distance between the projection point of the target object 2 and the projection point of the camera 1;
Q is the ratio of the straight-line distance D between the target object 2 and the camera 1 to the straight-line distance D′ from the projection point of the target object 2 to the camera 1.
Using the distance L′ and the rotation angle β, the relative distance XT in the longitude direction and the relative distance YT in the latitude direction between the projection point of the camera 1 and the vertical projection point of the target object 2 are calculated according to the trigonometric relations:
XT=L’sinβ;
YT=L’cosβ;
in the formula:
XT is the relative distance in the longitude direction between the projection point of the camera 1 and the vertical projection point of the target object 2;
YT is the relative distance in the latitude direction between the projection point of the camera 1 and the vertical projection point of the target object 2;
L′ is the distance between the vertical projection point of the target object 2 and the projection point of the camera;
β is the rotation angle of the camera 1.
In the camera coordinate system with the camera as the origin, the relative distance XT equals the position-vector component of the target object 2 on the X axis, and the relative distance YT equals the component on the Y axis. The rotation angle β of the camera 1 is collected at the same time; the quadrant of the target object 2 in the coordinate system of the camera 1 is determined from β, which fixes the direction of the position vector of the target object 2 in the camera coordinate system and yields the position vector (XT, YT). Finally, the world coordinates (XW, YW) of the camera are combined with the position vector (XT, YT) to obtain the world coordinates (XW + XT, YW + YT) of the target object 2, thereby positioning the target object 2.
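A sketch of this final composition step is shown below; adding the offsets directly to the camera's world coordinates follows the description above, while a real deployment would first convert metre offsets into degrees of longitude and latitude.

    def world_coordinates(XW, YW, XT, YT):
        """World coordinates of the target object from the camera's world
        coordinates and the position vector (XT, YT); the sin/cos of the
        rotation angle already sign the offsets, so the quadrant is implicit."""
        return XW + XT, YW + YT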
Under normal conditions the effective distance of existing face recognition technology does not exceed 6 meters, and in a mine or tunnel, where working conditions are complex and visibility is relatively low, the effective distance shrinks further to a degree that depends on the conditions. As the construction range keeps growing, face recognition alone cannot meet the demands of large-range monitoring at mining or civil-blasting sites, so an auxiliary recognition technology must be added on top of the face recognition technology to extend the effective recognition range.
The obtained straight-line distance L between the projection point of the target object 2 and the projection point of the camera 1 is compared with the effective distance S of the face recognition technology, and a recognition processing mode is selected according to the comparison result, specifically:
If the distance L is smaller than the effective distance S, face recognition processing is performed on the image information. Face recognition prior art is mature and is not the focus of this embodiment, so it is not detailed here. The face feature information in this embodiment is stored in a face feature database and is associated with the personnel information for subsequent person identification.
If the distance L is greater than the effective distance S, specific object recognition processing is performed on the image information. Specific object recognition identifies an easily recognized pattern carrying unique information that is placed on the worker's body surface: a barcode on the worker's clothing or safety helmet, or a specific coded mark constructed according to a coding rule, preferably a reflective coded mark. The coded mark has a simple structure: one character is 100 mm, and one or two characters suffice for uniqueness, so with 60 available symbols two-character codes can cover 3600 people; the symbols include Arabic numerals, upper- and lower-case English letters, mathematical symbols, Chinese radicals, and the like. The contour feature information of the barcode is stored in the object feature database and associated with the corresponding personnel information for subsequent person identification.
The specific object recognition processing includes: performing gray-level processing and Gaussian smoothing denoising on the image information, then applying the Canny edge detection algorithm of OpenCV to extract the edges of the specific object and obtain its contour data; and extracting features from the contour data of the specific object, comparing them with the object feature database, and identifying the personnel identification information corresponding to the contour data of the specific object.
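The comparison against the object feature database is not pinned down above; one hedged possibility, using OpenCV's Hu-moment contour matching, is sketched below (the feature_db layout and the score cut-off are assumptions for illustration).

    import cv2

    def identify_marker(marker_contour, feature_db, max_score=0.1):
        """Match an extracted marker contour against stored reference contours;
        feature_db is assumed to map personnel IDs to reference contours."""
        best_id, best_score = None, max_score
        for person_id, ref_contour in feature_db.items():
            score = cv2.matchShapes(marker_contour, ref_contour,
                                    cv2.CONTOURS_MATCH_I1, 0.0)
            if score < best_score:             # lower score = closer shapes
                best_id, best_score = person_id, score
        return best_id                         # None when nothing matches well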
After the face recognition processing or the specific object recognition processing, the personnel information corresponding to the target object 2 is obtained, and the world coordinates of that personnel information are updated with the current world coordinates of the target object 2. If two or more world coordinate update instructions for the personnel information exist at the same time, the selection rule is: an update instruction issued after face recognition processing has higher priority than an update instruction issued after specific object recognition processing; between update instructions issued by the same type of processing, the later one prevails.
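The selection rule can be captured in a few lines of Python; the dictionary layout of an update instruction below is an assumption for illustration.

    def select_update(instructions):
        """Pick one world-coordinate update instruction: face recognition updates
        outrank specific object updates; within one source, the later one wins."""
        face = [i for i in instructions if i["source"] == "face"]
        pool = face or instructions
        return max(pool, key=lambda i: i["timestamp"])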
As shown in fig. 3, a positioning system using an indoor camera positioning method includes:
the camera 1 is provided with an electronic pan-tilt.
The camera position acquisition module 3 is configured to determine current world coordinate information (X, Y, H) of the camera 1, where X is a longitude value of a position where the camera 1 is located, Y is a latitude value of the position where the camera 1 is located, and H is a height value of the camera 1.
The target object identification module 4 is used for identifying the central point of the target object 2 in the image information and determining the pixel width P of the target object 2 in the image.
The focusing control module 5 controls the electronic pan-tilt so that the camera 1 focuses on the target object 2, the focal point being the central point of the camera 1, and simultaneously collects the pitch angle α of the camera 1 and the rotation angle β of the camera 1.
The optical parameter acquisition module 6 is used for acquiring the focal length F of the camera 1 in real time.
The linear distance acquisition module 7 is used for calculating the straight-line distance D between the target object 2 and the camera 1 from the focal length F and the pixel width P of the target object 2.
The proportion acquisition module 8 is used for calculating the straight-line distance D′ from the projection point of the target object 2 to the camera 1 from the height H of the camera 1 and the pitch angle α of the camera 1 according to the trigonometric relation, that is, D/D′ = Q, where Q is the ratio of the straight-line distance D to the straight-line distance D′.
The projection point distance acquisition module 9 is used for calculating the straight-line distance L between the projection point of the target object 2 and the projection point of the camera 1 from the height H of the camera 1 according to the trigonometric relation, and calculating the distance L′ between the projection point of the camera 1 and the vertical projection point of the target object 2 using the distance L and the proportion Q.
The coordinate conversion module 10 is used for calculating, from the distance L′ and the rotation angle β according to the trigonometric relations, the relative offsets XT and YT on the longitude axis and the latitude axis between the projection point of the camera 1 and the vertical projection point of the target object 2, and finally obtaining the current world coordinates (X′, Y′) of the target object 2 from the longitude and latitude (X, Y) of the camera 1 and the relative offsets (XT, YT).
The identification selection module 11 is used for comparing the distance L′ with the effective distance S and selecting a recognition processing mode according to the comparison result: if the distance L′ is smaller than the effective distance S, the image information is sent to the face recognition module 13 for face recognition processing; if the distance L′ is not smaller than the effective distance S, the image information is sent to the specific object recognition module 14 for specific object recognition processing.
The coordinate updating module 12 is used for obtaining the personnel information corresponding to the target object 2 after the recognition processing, and updating the world coordinates of that personnel information with the current world coordinates of the target object 2.
The system also comprises a priority module used for selecting an update instruction according to a priority rule when two or more world coordinate update instructions for the personnel information exist at the same time, the priority rule being: an update instruction issued after face recognition processing has higher priority than an update instruction issued after specific object recognition processing; between update instructions issued by the same type of processing, the later one prevails.
Although the invention has been described in detail above with reference to a general description and specific examples, it will be apparent to one skilled in the art that modifications or improvements may be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.

Claims (10)

1. An indoor camera positioning method, characterized by comprising:
determining world coordinates (X, Y, H) of the camera, wherein X is a longitude value of a position where the camera is located, Y is a latitude value of the position where the camera is located, and H is a height value of the camera;
the camera collects video data in the monitored area, and image information is collected from the video data;
identifying the central point of a target object in the image information, controlling the central point of the camera to focus on the central point of the target object, extracting the pixel width P of the target object in the image, and simultaneously collecting the pitch angle α of the camera and the rotation angle β of the camera;
acquiring a focal length F of the camera when the camera focuses and shoots a target in real time, and calculating a linear distance D between the target object and the camera according to the F and the P;
calculating the straight-line distance D′ from the projection point of the target object to the camera according to the trigonometric relation using the height H of the camera and the pitch angle α of the camera, to obtain D/D′ = Q, where Q is the ratio of the straight-line distance D to the straight-line distance D′;
calculating a linear distance L between a projection point of the target object and a projection point of the camera according to a trigonometric function relation by using the height H of the camera, and calculating a distance L' between the projection point of the camera and the projection point of the target object in the vertical direction by using the distance L and a proportion Q;
respectively calculating, from the distance L′ and the rotation angle β according to the trigonometric relations, the relative distance XT in the longitude direction and the relative distance YT in the latitude direction between the projection point of the camera and the vertical projection point of the target object, to obtain the position vector (XT, YT) of the target object in the camera coordinate system;
converting the world coordinates (XW, YW) of the camera with the position vector (XT, YT) to obtain the world coordinates (XW + XT, YW + YT) of the target object;
comparing the straight-line distance L between the projection point of the target object and the projection point of the camera with the effective distance S of the face recognition technology; if the distance L is smaller than the effective distance S, performing face recognition processing on the image information; if the distance L is not smaller than the effective distance S, performing specific object recognition processing on the image information;
and after the face recognition processing or the specific object recognition processing, obtaining personnel information corresponding to the target object, and updating the world coordinates of the personnel information by using the current world coordinates of the target object.
2. The indoor camera positioning method according to claim 1, wherein identifying a center point of the target object in the image information comprises:
and performing color space conversion and external contour feature extraction on the image information, comparing the image information with an established object feature database to identify a target object, and then extracting the central point of the target object and the image speed width P of the target object.
3. The indoor camera positioning method according to claim 1, wherein the linear distance D between the target object and the camera is obtained by:
D=(W·F)/P;
in the formula:
P is the pixel width of the target object in the image;
W is the physical width of the target object;
F is the camera focal length.
4. The indoor camera positioning method according to claim 1, wherein the method of the linear distance D' from the projection point of the target object to the camera is as follows:
D’=H/cosα;
in the formula:
H is the height value of the camera;
α is the pitch angle of the camera.
5. The indoor camera positioning method according to claim 1, wherein the distance L' between the projection point of the target object in the vertical direction and the projection point of the camera is as follows:
L=H·tanα;
L’=L·Q;
in the formula:
H is the height value of the camera;
α is the pitch angle of the camera;
L is the straight-line distance between the projection point of the target object and the projection point of the camera;
Q is the ratio of the straight-line distance D between the target object and the camera to the straight-line distance D′ from the projection point of the target object to the camera.
6. The indoor camera positioning method according to claim 1, wherein the relative distance XT in the longitude direction and the relative distance YT in the latitude direction are calculated by:
XT=L’sinβ;
YT=L’cosβ;
in the formula:
L′ is the distance between the vertical projection point of the target object and the projection point of the camera;
β is the rotation angle of the camera.
7. The indoor camera positioning method according to claim 1, wherein the specific object recognition processing includes:
performing gray-level processing and Gaussian smoothing denoising on the image information, then applying the Canny edge detection algorithm of OpenCV to extract the edges of the specific object and obtain its contour data;
and extracting features from the contour data of the specific object, comparing them with the established object feature database, and identifying the personnel identification information corresponding to the contour data of the specific object.
8. The indoor camera positioning method according to claim 1, wherein, if two or more world coordinate update instructions for the personnel information exist at the same time, the selection rule is:
an update instruction issued after face recognition processing has higher priority than an update instruction issued after specific object recognition processing;
between update instructions issued by the same type of processing, the later one prevails.
9. A positioning system using the indoor camera positioning method according to any one of claims 1 to 8, characterized by comprising:
a camera provided with an electronic pan-tilt;
the camera position acquisition module is used for determining current world coordinate information (X, Y, H) of the camera, wherein X is a longitude value of a position where the camera is located, Y is a latitude value of the position where the camera is located, and H is a height value of the camera;
the target object identification module is used for identifying the central point of the target object in the image information and determining the pixel width P of the target object in the image;
the focusing control module controls the electronic pan-tilt so that the camera focuses on the target object, the focal point being the central point of the camera, and simultaneously collects the pitch angle α of the camera and the rotation angle β of the camera;
the optical parameter acquisition module is used for acquiring the focal length F of the camera in real time;
the linear distance acquisition module is used for calculating the straight-line distance D between the target object and the camera from the focal length F and the pixel width P of the target object;
the proportion acquisition module is used for calculating the straight-line distance D′ from the projection point of the target object to the camera from the height H of the camera and the pitch angle α of the camera according to the trigonometric relation, namely D/D′ = Q, where Q is the ratio of the straight-line distance D to the straight-line distance D′;
the projection point distance acquisition module is used for calculating the straight-line distance L between the projection point of the target object and the projection point of the camera from the height H of the camera according to the trigonometric relation, and calculating the distance L′ between the projection point of the camera and the vertical projection point of the target object using the distance L and the proportion Q;
the coordinate conversion module is used for calculating, from the distance L′ and the rotation angle β according to the trigonometric relations, the relative offsets XT and YT on the longitude axis and the latitude axis between the projection point of the camera and the vertical projection point of the target object, and finally obtaining the current world coordinates (X′, Y′) of the target object from the longitude and latitude (X, Y) of the camera and the relative offsets (XT, YT);
the identification selection module is used for comparing the distance L′ with the effective distance S and selecting a recognition processing mode according to the comparison result, wherein if the distance L′ is smaller than the effective distance S, face recognition processing is performed on the image information, and if the distance L′ is not smaller than the effective distance S, specific object recognition processing is performed on the image information;
and the coordinate updating module is used for obtaining the personnel information corresponding to the target object after the identification processing, and updating the world coordinates of the personnel information by using the current world coordinates of the target object.
10. The positioning system according to claim 9, comprising a priority module for selecting an update instruction according to a priority rule when two or more world coordinate update instructions for the personnel information exist at the same time, wherein the priority rule is:
an update instruction issued after face recognition processing has higher priority than an update instruction issued after specific object recognition processing;
between update instructions issued by the same type of processing, the later one prevails.
CN202010240121.9A 2020-03-30 2020-03-30 Indoor camera positioning method and positioning system Pending CN111354046A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010240121.9A CN111354046A (en) 2020-03-30 2020-03-30 Indoor camera positioning method and positioning system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010240121.9A CN111354046A (en) 2020-03-30 2020-03-30 Indoor camera positioning method and positioning system

Publications (1)

Publication Number Publication Date
CN111354046A true CN111354046A (en) 2020-06-30

Family

ID=71197535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010240121.9A Pending CN111354046A (en) 2020-03-30 2020-03-30 Indoor camera positioning method and positioning system

Country Status (1)

Country Link
CN (1) CN111354046A (en)



Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5521843A (en) * 1992-01-30 1996-05-28 Fujitsu Limited System for and method of recognizing and tracking target mark
CN101916437A (en) * 2010-06-18 2010-12-15 中国科学院计算技术研究所 Method and system for positioning target based on multi-visual information
CN103247030A (en) * 2013-04-15 2013-08-14 丹阳科美汽车部件有限公司 Fisheye image correction method of vehicle panoramic display system based on spherical projection model and inverse transformation model
CN104932683A (en) * 2015-05-28 2015-09-23 重庆大学 Game motion sensing control method based on vision information
CN105320943A (en) * 2015-10-22 2016-02-10 北京天诚盛业科技有限公司 Biometric identification apparatus and biometric identification method therefor
CN107463883A (en) * 2017-07-18 2017-12-12 广东欧珀移动通信有限公司 Biometric discrimination method and Related product
CN108280483A (en) * 2018-01-30 2018-07-13 华南农业大学 Trypetid adult image-recognizing method based on neural network
CN109284725A (en) * 2018-09-30 2019-01-29 武汉虹识技术有限公司 The method and device of iris recognition, electronic equipment, readable storage medium storing program for executing
CN109341692A (en) * 2018-10-31 2019-02-15 江苏木盟智能科技有限公司 Air navigation aid and robot along one kind
CN110243339A (en) * 2019-06-25 2019-09-17 重庆紫光华山智安科技有限公司 A kind of monocular cam localization method, device, readable storage medium storing program for executing and electric terminal
CN110288656A (en) * 2019-07-01 2019-09-27 太原科技大学 A kind of object localization method based on monocular cam

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111953937A (en) * 2020-07-31 2020-11-17 云洲(盐城)创新科技有限公司 Drowning person lifesaving system and drowning person lifesaving method
CN112990187A (en) * 2021-04-22 2021-06-18 北京大学 Target position information generation method based on handheld terminal image
CN112990187B (en) * 2021-04-22 2023-10-20 北京大学 Target position information generation method based on handheld terminal image
CN113518179A (en) * 2021-04-25 2021-10-19 何佳林 Method and device for identifying and positioning objects in large range of video
CN113612984A (en) * 2021-07-29 2021-11-05 江苏动泰运动用品有限公司 Indoor acquisition point positioning method and system based on image processing
CN113612984B (en) * 2021-07-29 2022-10-21 江苏动泰运动用品有限公司 Indoor acquisition point positioning method and system based on image processing
CN117291986A (en) * 2023-11-24 2023-12-26 深圳市华意达智能电子技术有限公司 Community security protection discernment positioning system based on multiple fitting of making a video recording
CN117291986B (en) * 2023-11-24 2024-02-09 深圳市华意达智能电子技术有限公司 Community security protection discernment positioning system based on multiple fitting of making a video recording

Similar Documents

Publication Publication Date Title
CN111354046A (en) Indoor camera positioning method and positioning system
CN107506760B (en) Traffic signal detection method and system based on GPS positioning and visual image processing
CN102314602B (en) Shadow removal in image captured by vehicle-based camera using optimized oriented linear axis
CN103208126B (en) Moving object monitoring method under a kind of physical environment
CN111460967B (en) Illegal building identification method, device, equipment and storage medium
KR101589814B1 (en) Apparatus for recognizing of object in coast and method thereof
KR101409340B1 (en) Method for traffic sign recognition and system thereof
CN112819094A (en) Target detection and identification method based on structural similarity measurement
US20180114089A1 (en) Attachable matter detection apparatus and attachable matter detection method
CN102314601A (en) Use nonlinear optical to remove by the shade in the image of catching based on the camera of vehicle according to constant nuclear
CN103761529A (en) Open fire detection method and system based on multicolor models and rectangular features
CN112287838B (en) Cloud and fog automatic identification method and system based on static meteorological satellite image sequence
CN109916415B (en) Road type determination method, device, equipment and storage medium
KR20140133713A (en) Apparatus for recognizing of object and method thereof
CN108274476A (en) A kind of method of anthropomorphic robot crawl sphere
CN107038690A (en) A kind of motion shadow removal method based on multi-feature fusion
CN113506275B (en) Urban image processing method based on panorama
JP2007256280A (en) Object recognition system and displacement measurement method of object using the same
CN107241643A (en) A kind of multimedia volume adjusting method and system
CN114692775A (en) Model training method, target detection method, target rendering method, storage medium, and program product
CN108710843A (en) Type of face detection method and device for attendance
Sereewattana et al. Color marker detection with various imaging conditions and occlusion for UAV automatic landing control
CN112488031A (en) Safety helmet detection method based on color segmentation
Liu et al. Monitoring System Based on VIN Recognition
Rajkumar et al. Vehicle Detection and Tracking System from CCTV Captured Image for Night Vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20200630)