CN113362392A - Visual field generation method and device, computing equipment and storage medium - Google Patents
- Publication number
- CN113362392A (application number CN202010145503.3A)
- Authority
- CN
- China
- Prior art keywords
- camera
- image
- orientation
- determining
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06F16/29—Geographical information databases
- G06T7/292—Multi-camera tracking
- G06T7/70—Determining position or orientation of objects or cameras
- G06T2207/10016—Video; Image sequence
- G06T2207/30232—Surveillance
- G06T2207/30244—Camera pose
Abstract
The application provides a visual field generation method, a visual field generation apparatus, a computing device, and a storage medium. The visual field generation method includes: acquiring the geographic position of a first camera; determining the azimuth angle of the sun according to the geographic position; detecting, in an image acquired by the first camera, the region of a target object and the shadow region the target object casts under sunlight; determining the shadow direction of the shadow region in an image coordinate system according to the region of the target object and the shadow region; and determining a first reference orientation of the first camera according to the azimuth angle of the sun and the shadow direction.
Description
Technical Field
The present application relates to the field of video surveillance technologies, and in particular, to a method and an apparatus for generating a visual field, a computing device, and a storage medium.
Background
In application scenarios such as video surveillance, the visual field of an image acquisition device such as a camera needs to be generated in an electronic map. The visual field is the area covered by the device's field of view. In a two-dimensional electronic map, the visual field of the image acquisition device may be represented as a sector region.
To determine the visual field of an image acquisition device, current visual field analysis schemes need to acquire the device's location information and then retrieve key location information near the device (e.g., shop, bank, or hotel information) from an electronic map. Identifiable information in the picture captured by the device is then matched against this key location information to complete the analysis of the visual field.
However, when key location information cannot be acquired, current schemes are unable to determine the visual field of the image acquisition device.
The technical problem to be solved is therefore how to determine the visual field without key location information.
Disclosure of Invention
The application provides a visual field generation method, a visual field generation apparatus, a computing device, and a storage medium, which can determine the visual field of a camera without relying on key location information.
According to an aspect of the present application, there is provided a visual field generating method, including:
acquiring the geographic position of a first camera;
determining the azimuth angle of the sun according to the geographic position;
detecting a region of a target object and a shadow region of the target object under the sun light in an image acquired by a first camera;
determining the shadow direction of the shadow area in an image coordinate system according to the area of the target object and the shadow area;
determining a first reference orientation of the first camera based on the azimuth and shadow directions of the sun.
In some embodiments, the above method further comprises:
acquiring an electronic map containing the geographic position;
determining, in the electronic map, a target road near the first camera and a first direction of the target road in a geographic coordinate system;
carrying out target tracking on the image frame sequence acquired by the first camera to determine the motion direction of a tracking target, and taking the motion direction as a second direction of the target road in an image coordinate system;
and determining a second reference orientation of the first camera according to the mapping relation between the first direction and the second direction.
In some embodiments, the above method further comprises:
detecting a building area in the image and signboard information corresponding to the building area;
determining an orientation of the building area in the image;
querying, from an electronic map, the landmark building corresponding to the signboard information and azimuth information of the landmark building;
and determining a third reference orientation of the first camera according to the orientation of the building area in the image and the azimuth information.
In some embodiments, the above method further comprises:
detecting a road area in the image;
determining the extending direction of the road area in an image coordinate system;
detecting traffic sign information corresponding to the road area;
determining a third direction of the road area in a geographic coordinate system according to the traffic sign information;
determining a fourth reference orientation of the first camera based on the third direction and the extension direction.
In some embodiments, the above method further comprises:
detecting a static object in the image, such as a building or a second camera, and determining an orientation of the static object in an image coordinate system;
acquiring a fourth direction of the static object in a geographic coordinate system;
determining a fifth reference orientation of the first camera based on the fourth direction and an orientation of a static object in an image coordinate system.
In some embodiments, the above method further comprises:
weighting and summing at least two reference orientations of the first reference orientation, the second reference orientation, the third reference orientation, the fourth reference orientation and the fifth reference orientation to obtain a calibration orientation of the first camera;
and determining the visual field of the first camera in the electronic map according to the calibration orientation.
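The weighted summation above can be sketched as follows. The description says only "weighting and summing"; averaging the reference orientations as weighted unit vectors is one assumed realization, chosen here because it also handles the 359°/1° wrap-around. The function name and weight values are illustrative, not from the patent.

```python
import math

def fuse_orientations(orientations_deg, weights):
    """Fuse several candidate camera orientations (in degrees) into one
    calibration orientation via a weighted average of unit vectors."""
    x = sum(w * math.cos(math.radians(o)) for o, w in zip(orientations_deg, weights))
    y = sum(w * math.sin(math.radians(o)) for o, w in zip(orientations_deg, weights))
    return math.degrees(math.atan2(y, x)) % 360.0
```

For example, fusing reference orientations of 350° and 10° with equal weights yields 0°, whereas a naive arithmetic weighted sum of the angle values would give 180°.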
In some embodiments, the above method further comprises:
determining a target monitoring area in the image;
and determining an adjusting parameter of the pitch angle of the first camera according to the position of the target monitoring area in the image.
According to an aspect of the present application, there is provided a visual field generating apparatus including:
a position acquisition unit which acquires the geographical position of the first camera;
the sun azimuth determining unit is used for determining the azimuth angle of the sun according to the geographic position;
the shadow detection unit is used for detecting the area of the target object in the image collected by the first camera and the shadow area of the target object under the sunlight;
a shadow direction determining unit which determines a shadow direction of the shadow region in an image coordinate system according to the region of the target object and the shadow region;
an orientation determination unit determines a first reference orientation of the first camera according to the azimuth angle and the shadow direction of the sun.
According to an aspect of the application, there is provided a computing device comprising: a memory; a processor; a program stored in the memory and configured to be executed by the processor, the program including instructions for performing the above-described visual field generation method.
According to an aspect of the present application, there is provided a storage medium storing a program including instructions that, when executed by a computing device, cause the computing device to execute the above-described visual field generation method.
In summary, according to the visual field generation scheme of the present application, without acquiring key location information in the electronic map (e.g., information about shops, banks, hotels, and other key locations around the first camera), the orientation of the first camera can be determined by analyzing the shadow direction of a target object in the image and combining it with the azimuth angle of the sun. On this basis, once the electronic map is acquired, embodiments of the application can generate the visual field of the first camera in the electronic map according to this orientation.
Drawings
FIG. 1 illustrates a schematic diagram of an application scenario in accordance with some embodiments of the present application;
FIG. 2 illustrates a flow diagram of a method 200 of visual field generation according to some embodiments of the present application;
FIG. 3 shows a schematic view of a utility pole captured by a first camera;
FIG. 4 illustrates a flow diagram of a visual field generation method 400 according to some embodiments of the present application;
FIG. 5A illustrates a schematic view of a target road in an electronic map, in accordance with some embodiments of the present application;
FIG. 5B shows a schematic view of a road region in an image according to some embodiments of the present application;
FIG. 6 illustrates a flow diagram of a visual field generation method 600 according to some embodiments of the present application;
FIG. 7A illustrates a schematic view of an area of a building in an image according to some embodiments of the present application;
FIG. 7B illustrates a schematic diagram of landmark buildings in an electronic map, in accordance with some embodiments of the present application;
FIG. 8 illustrates a flow diagram of a visual field generation method 800 according to some embodiments of the present application;
FIG. 9 illustrates a schematic view of an image captured by a first camera according to some embodiments of the present application;
FIG. 10 illustrates a flow diagram of a visual field generation method 1000 according to some embodiments of the present application;
FIG. 11 illustrates a flow diagram of a visual field generation method 1100 according to some embodiments of the present application;
FIG. 12 illustrates a schematic diagram of a visual field generation apparatus 1200 according to some embodiments of the present application;
FIG. 13 illustrates a schematic diagram of a visual field generation apparatus 1300 according to some embodiments of the present application;
FIG. 14 illustrates a schematic diagram of a computing device according to some embodiments of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below by referring to the accompanying drawings and examples.
FIG. 1 illustrates a schematic diagram of an application scenario in accordance with some embodiments of the present application. As shown in FIG. 1, an application scenario may include at least one camera (e.g., 110-1, 110-2, and 110-N shown in FIG. 1), a computing device 120, and a map server 130. Wherein N is a positive integer.
The cameras may be deployed at various shooting positions.
The map server 130 may store map information.
The computing device 120 is, for example, a server, a laptop, a tablet, or other smart device. The computing device 120 may obtain images taken by the camera that require visual field analysis as well as the geographic location of the camera. The geographic location is, for example, a positioning result such as latitude and longitude.
The computing device 120 may communicate with a camera, for example, to obtain images and positioning information. In addition, the camera's image and geographic location may be stored in a data storage device (not shown in FIG. 1), such as a video recorder. The computing device 120 may also obtain the camera's image and geographic location from the data storage device.
In some embodiments, the computing device 120 may also communicate with the map server 130 to obtain electronic maps. The computing device 120 may analyze the visual field of the camera based on the electronic map, the image of the camera, and the geographic location.
In some embodiments, the cameras of FIG. 1 may also acquire the electronic map themselves. In addition, in FIG. 1, multiple cameras may send their images and positioning information to one camera node, and that camera node performs visual field analysis for all of them.
FIG. 2 illustrates a flow diagram of a method 200 of visual field generation according to some embodiments of the present application. The method 200 may be performed, for example, by the computing device 120 or the first camera, which is not limited in this application. Here, the first camera may be any one of the cameras in fig. 1, but is not limited thereto.
As shown in FIG. 2, in step S201, the geographic position of the first camera is acquired. Here, the geographic position may be, for example, latitude and longitude. In some embodiments, if the latitude and longitude of the first camera itself cannot be obtained, step S201 may use the latitude and longitude of the area where the first camera is located in place of the camera's own latitude and longitude.
In step S202, the azimuth angle of the sun is determined based on the geographic position. The azimuth angle of the sun is the angle between the projection of the sun's rays onto the ground plane and the local meridian; it can be approximated as the angle between due south and the shadow cast in sunlight by a straight line standing vertically on the ground.
In step S203, a region of the target object and a shadow region of the target object under the sun light in the image captured by the first camera are detected. Here, the target object may be, for example, a target object such as a pedestrian, a utility pole, an automobile, or the like in the first camera shooting scene. Taking a utility pole as an example, step S203 may determine an image area of the body of the utility pole and a shadow area of the utility pole (i.e., a projection of the utility pole under solar rays).
In step S204, the shadow direction of the shadow area in the image coordinate system is determined from the area of the target object and the shadow area. Taking the utility pole as an example, the shadow direction is the extending direction of the shadow area of the utility pole. For example, fig. 3 shows a schematic view of a utility pole photographed by a first camera. As shown in fig. 3, step S204 may detect utility pole 301 of fig. 3 and shaded area 302 of utility pole 301. The direction of the shaded area 302 in the image coordinate system O (X, Y) is indicated by arrow 303.
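The geometry of step S204 can be sketched as follows. This is a hedged illustration: it assumes the detection step already yields the target object's ground point and the shadow tip as pixel coordinates, and the function and argument names are illustrative, not from the patent.

```python
import math

def shadow_direction_deg(base_xy, tip_xy):
    """Angle, measured from the image X axis, of the vector pointing from
    the object's ground point (e.g. the utility pole's base) to the tip
    of its shadow region."""
    dx = tip_xy[0] - base_xy[0]
    dy = tip_xy[1] - base_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0
```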
In step S205, a first reference orientation of the first camera is determined based on the azimuth angle of the sun and the shadow direction. Here, the first reference orientation is the orientation of the first camera as determined in step S205, i.e., the direction of the camera's viewing angle in the horizontal plane. The angle between the sun's direction and true north is, for example, 180 − A_s, where A_s is the azimuth angle of the sun. The shadow direction makes, for example, an angle α with the X axis of the image coordinate system. Denoting the first reference orientation of the first camera by θ, it can be calculated according to the following formula:
θ = 360 − α − A_s
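The relation above can be sketched minimally as follows; the reduction into the range [0°, 360°) is an added assumption, and the function name is illustrative.

```python
def first_reference_orientation(alpha_deg, sun_azimuth_deg):
    """First reference orientation theta = 360 - alpha - A_s, from the
    shadow's angle alpha with the image X axis and the sun azimuth A_s."""
    return (360.0 - alpha_deg - sun_azimuth_deg) % 360.0
```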
In summary, according to the visual field generating method 200 of the present application, without acquiring key location information (e.g., information about key locations around the first camera, such as shops, banks, hotels, etc.), the shadow direction of the target object in the image can be analyzed, and then the orientation of the first camera can be determined according to the shadow direction and the azimuth angle of the sun. On this basis, the embodiment of the application can generate the visual field of the first camera in the electronic map according to the orientation of the first camera when the electronic map is acquired. The visible area is, for example, a sector area.
In some embodiments, step S202 above may calculate the azimuth angle of the sun and the altitude angle of the sun according to the latitude and longitude and the sun earth model. The solar altitude is an included angle between the incident direction of the solar ray and the ground plane.
The altitude angle of the sun is calculated as:
sin H_s = sin φ × sin δ + cos φ × cos δ × cos t
The azimuth angle of the sun is calculated as:
cos A_s = (sin H_s × sin φ − sin δ) / (cos H_s × cos φ)
where H_s is the altitude angle of the sun, A_s is the azimuth angle of the sun, φ is the geographic latitude, δ is the declination of the sun, and t is the hour angle.
The declination of the sun is calculated as:
δ = 0.006918 − 0.399912 cos b + 0.070257 sin b − 0.006758 cos 2b + 0.000907 sin 2b − 0.002697 cos 3b + 0.00148 sin 3b
where b = 2 × π × (N − 1) / 365 and N is the day of the year, counted from January 1.
The hour angle is calculated as:
t = (t_0 − 12) × 15
where t_0 is the true solar time: t_0 = current time − (timing longitude − current longitude) × 4 minutes. The timing longitude is the prime meridian, i.e., longitude 0.
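The formulas above can be combined into a short sketch. Angles follow the stated conventions; `day_of_year` counts from January 1 and `true_solar_hour` is the true solar time t_0 in hours. The function names are illustrative, not from the patent, and the clamp on cos A_s guards only against rounding.

```python
import math

def solar_declination(day_of_year):
    # delta in radians, from the Fourier-series formula in the description
    b = 2 * math.pi * (day_of_year - 1) / 365
    return (0.006918 - 0.399912 * math.cos(b) + 0.070257 * math.sin(b)
            - 0.006758 * math.cos(2 * b) + 0.000907 * math.sin(2 * b)
            - 0.002697 * math.cos(3 * b) + 0.00148 * math.sin(3 * b))

def solar_position(latitude_deg, day_of_year, true_solar_hour):
    """Return (altitude angle H_s, azimuth angle A_s) of the sun, in degrees."""
    phi = math.radians(latitude_deg)
    delta = solar_declination(day_of_year)
    t = math.radians((true_solar_hour - 12) * 15)  # hour angle
    sin_h = (math.sin(phi) * math.sin(delta)
             + math.cos(phi) * math.cos(delta) * math.cos(t))
    h = math.asin(sin_h)  # altitude angle
    cos_a = (sin_h * math.sin(phi) - math.sin(delta)) / (math.cos(h) * math.cos(phi))
    a = math.acos(max(-1.0, min(1.0, cos_a)))  # clamp against rounding error
    return math.degrees(h), math.degrees(a)
```

For example, at latitude 40° N around the summer solstice (day 172) at true solar noon, the altitude angle evaluates to roughly 73.5°.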
It should be noted that, in addition to determining the orientation of the first camera by using the shadow direction, the embodiments of the present application may also determine the orientation of the first camera in other manners. This is explained below with reference to fig. 4.
FIG. 4 illustrates a flow diagram of a visual field generation method 400 according to some embodiments of the present application. The method 400 may be performed by the computing device 120 or the first camera, for example.
In step S401, an electronic map containing the geographical position of the first camera is acquired. Here, step S401 may determine to acquire the electronic map of the area around the first camera from the map server 130, for example, according to the latitude and longitude.
In step S402, a target road near the first camera and a first direction of the target road in the geographic coordinate system are determined in the electronic map. In other words, step S402 may determine, from the electronic map, the target road near the first camera and the direction of that road on the map. Here, the target road refers to a one-way road near the first camera.
In step S403, the image frame sequence captured by the first camera is subject to target tracking, and the moving direction of the tracked target is determined and is taken as the second direction of the target road in the image coordinate system. Here, the moving object is, for example, a pedestrian or a vehicle. Step S403 may determine the moving direction of the tracking target based on various target tracking algorithms.
In step S404, a second reference orientation of the first camera is determined according to the mapping relationship between the first direction and the second direction. Here, the second reference orientation is the orientation of the first camera determined in step S404. The first reference orientation and the second reference orientation are orientations of the first camera determined in different ways.
In some scenarios, there is only one unidirectional target road near the first camera. The target road determined in step S402 is shown, for example, in FIG. 5A. Step S402 may determine a target road 501 in the electronic map. The coordinate system X1Y1 in FIG. 5A is a geographic coordinate system: the positive X1 axis points east and the positive Y1 axis points north. FIG. 5B shows the road region (i.e., the target road 501) as it appears in the image. A coordinate system X2Y2 is established with a point on the road region as the origin, with X2 parallel to the image X axis and Y2 parallel to the image Y axis. The motion direction of the target object is indicated by arrow 502. Step S403 may determine that the road in the image makes an angle α with X2; this angle represents the second direction. Step S402 may determine that the target road 501 makes an angle β with X1; this angle represents the first direction. On this basis, step S404 may determine the angle θ between the coordinate axis Y2 (which coincides with the image Y axis) and the X1 axis of the geographic coordinate system. The angle θ represents the second reference orientation and can be calculated, for example, according to the following formula.
θ=90-α+β
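The relation θ = 90 − α + β can be sketched as follows; the reduction into [0°, 360°) is an added assumption, and the names are illustrative.

```python
def second_reference_orientation(alpha_deg, beta_deg):
    """Second reference orientation from the road's angle alpha with image
    axis X2 and its angle beta with geographic axis X1 (east)."""
    return (90.0 - alpha_deg + beta_deg) % 360.0
```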
In summary, the method 400 according to an embodiment of the present application may acquire a first direction of a target road near a first camera. Through target tracking, the method 400 according to an embodiment of the present application may determine a second direction of the target road in the image coordinate system. On this basis, the method 400 may determine the orientation of the first camera by mapping the first direction and the second direction.
FIG. 6 illustrates a flow diagram of a visual field generation method 600 according to some embodiments of the present application. The method 600 may be performed by the computing device 120 or the first camera, for example.
As shown in fig. 6, in step S601, a building area in an image and signboard information corresponding to the building area are detected. Here, the signboard information is, for example, signboard information of an object such as a hotel, a shop, or a bank. Step S601 may perform object detection on the image to determine a building area. The building area may be considered a landmark building to be identified.
In step S602, the orientation of the building area in the image is determined. In some embodiments, step S602 may detect an edge line of the building area parallel to the ground, for example, and regard a direction perpendicular to the edge line in the image as the building area orientation in the image. In addition, in step S602, other image processing algorithms may be used for direction labeling, which is not limited in the present application.
In step S603, the landmark building corresponding to the signboard information and the orientation information of the landmark building are inquired from the electronic map. In some embodiments, step S603 may query landmark buildings near the first camera from the electronic map, and then image-match the signboard information with the landmark buildings, so as to determine the landmark buildings corresponding to the building area.
In step S604, a third reference orientation of the first camera is determined based on the orientation of the building area in the image and the azimuth information. Here, the orientation of the building area in the image is the orientation, in the image coordinate system, of the landmark building corresponding to the building area. The orientation of the first camera may be taken as vertically upward in the image coordinate system. Step S604 may determine the third reference orientation of the first camera according to the mapping relationship between the orientation in the image and the azimuth information (i.e., the orientation of the landmark building in the geographic coordinate system). For example, FIG. 7A shows a schematic diagram of a building area in an image. Step S601 may determine the building area 701 and the signboard information 702 in FIG. 7A. Step S602 may determine that the direction of the building area 701 in the image coordinate system is 703. FIG. 7B shows a schematic diagram of landmark buildings in a map. Landmark building 704 in FIG. 7B corresponds to building area 701 in FIG. 7A. Step S603 may determine that the direction 705 of the landmark building 704 makes an angle a with due east. Direction 706 in FIG. 7A corresponds to direction 705 and coincides with it in the geographic coordinate system. The third reference orientation (i.e., the direction of the y coordinate axis in the image coordinate system) is then b = 180 − a.
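The angle relation in the FIG. 7A/7B example can be sketched as follows, assuming azimuths measured from due east as in the figure; the function name is illustrative.

```python
def third_reference_orientation(a_deg):
    """b = 180 - a: direction 705 of the landmark building makes angle a
    with due east, so the image y axis makes angle b with due east."""
    return (180.0 - a_deg) % 360.0
```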
In summary, the visual field generation method 600 according to embodiments of the present application recognizes the signboard information corresponding to a building area in the image and the orientation of that building area in the image coordinate system, determines from the electronic map the azimuth information of the landmark building corresponding to the signboard information, and thereby determines the orientation of the first camera.
FIG. 8 illustrates a flow diagram of a visual field generation method 800 according to some embodiments of the present application. The method 800 may be performed by the computing device 120 or the first camera, for example.
As shown in fig. 8, in step S801, a road region in an image captured by a first camera is detected.
In step S802, the direction in which the road region extends in the image coordinate system is determined. In some embodiments, step S802 may detect a moving direction of a moving object (e.g., a car or a pedestrian) in the road area based on the image frame sequence captured by the first camera, and use the moving direction of the moving object as an extending direction of the road area.
In step S803, traffic sign information corresponding to the road area is detected. Here, the traffic sign information is, for example, the indication text on a traffic sign, such as "east" or "south", indicating the direction of the road.
In step S804, a third direction of the road area in the geographic coordinate system is determined according to the traffic sign information.
In step S805, a fourth reference orientation of the first camera is determined based on the third direction and the extending direction. For example, FIG. 9 shows a schematic diagram of an image frame according to some embodiments of the present application. Step S801 may identify the road region 901 in FIG. 9. Step S802 may determine that the extending direction of the road is 902; the extending direction 902 makes an angle c with the Y coordinate axis. Step S803 may detect the traffic sign information "xx east road" from the traffic sign board 903. On this basis, step S804 may determine from the traffic sign information that the third direction of the road region 901 is due east. Step S805 may then determine the fourth reference orientation (i.e., the Y coordinate axis direction) as making an angle c with due east, according to the extending direction 902 and the third direction.
In summary, according to the visual field generating method 800 of the present application, by recognizing the traffic sign information and determining the extending direction of the road area, the orientation of the first camera can be determined.
FIG. 10 illustrates a flow diagram of a visual field generation method 1000 according to some embodiments of the present application. The method 1000 may be performed by the computing device 120 or the first camera, for example.
As illustrated in fig. 10, in step S1001, a static object in the image is detected, and the orientation of the static object in the image coordinate system is determined. Here, the static object is, for example, a building or a second camera. The second camera is a camera that the first camera can capture.
In step S1002, a fourth direction of the static object in the geographic coordinate system is acquired.
In step S1003, a fifth reference orientation of the first camera is determined from the fourth orientation and the orientation of the static object in the image coordinate system.
In some embodiments, the static object is a building, and step S1001 may detect a building area in the image and determine an orientation of the building in the building area in the image coordinate system. For example, the building area is a residential community. Step S1001 may identify the orientation of the building area in the image coordinate system. Step S1002 may perform semantic analysis on the building area. For example, step S1002 may determine that a balcony is included in the building area. Here, the balcony is generally arranged facing south. On this basis, step S1002 may determine that the fourth direction of the building area in the geographic coordinate system is toward south. In step S1003, a fifth reference orientation of the first camera is determined according to the fourth direction of the building area and the orientation in the image coordinate system.
In some embodiments, step S1001 detects a second camera in the image and determines the orientation of the second camera in the image coordinate system through image analysis. Step S1002 may acquire a fourth direction of the photographed second camera in the geographic coordinate system. For example, step S1002 may query the identifications of nearby cameras based on the geographic location of the first camera that captured the image, and take the queried identification as the identification of the photographed second camera. On this basis, step S1002 may query the fourth direction of the photographed second camera in the geographic coordinate system according to that identification. In this way, step S1003 may determine a fifth reference orientation of the first camera according to the fourth direction and the orientation of the photographed second camera in the image coordinate system.
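The nearby-camera query described above can be sketched as a nearest-neighbour lookup over a registry of camera positions (an illustrative sketch; the registry structure and all names are assumptions):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (latitude, longitude) points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_camera(lat, lon, registry):
    """Return the identification of the registered camera closest to (lat, lon).
    `registry` maps camera identification -> (latitude, longitude)."""
    return min(registry, key=lambda cid: haversine_m(lat, lon, *registry[cid]))

registry = {"cam_A": (31.2304, 121.4737), "cam_B": (31.2310, 121.4800)}
print(nearest_camera(31.2305, 121.4740, registry))  # cam_A
```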
In summary, according to the visual field generating method 1000 of the present application, the orientation of the first camera can be determined according to the orientation of the static object in the image coordinate system and the orientation of the static object in the geographic coordinate system.
FIG. 11 illustrates a flow diagram of a visual field generation method 1100 according to some embodiments of the present application. Method 1100 may be performed by computing device 120 or a first camera, for example.
As shown in fig. 11, in step S1101, the geographical position of the first camera is acquired. Here, the geographical position may be, for example, latitude and longitude. In some embodiments, if the latitude and longitude of the first camera cannot be obtained, step S1101 may use the latitude and longitude of the region where the first camera is located in place of the latitude and longitude of the first camera.
In step S1102, the azimuth angle of the sun is determined according to the geographic location. The azimuth angle of the sun refers to the angle between the projection of the sun's rays on the ground plane and the local meridian; it can be approximated as the angle between the shadow of a vertical pole standing on the ground in sunlight and the due-south direction.
In step S1103, a region of the target object and a shadow region of the target object under sunlight are detected in the image captured by the first camera. Here, the target object may be, for example, a pedestrian, a utility pole, an automobile, or the like in the scene captured by the first camera.
In step S1104, the shadow direction of the shadow area in the image coordinate system is determined from the area of the target object and the shadow area.
In step S1105, a first reference orientation of the first camera is determined based on the azimuth angle of the sun and the shadow direction.
More specific implementations of steps S1101-S1105 are consistent with method 200 and will not be described herein.
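The angular relation used in steps S1102-S1105 can be sketched as follows (a minimal illustration; the helper name is an assumption, and for simplicity both angles here are measured clockwise from north, whereas the text above defines the sun's azimuth relative to the local meridian):

```python
def first_reference_orientation(sun_azimuth_deg: float,
                                shadow_angle_deg: float) -> float:
    """Geographic bearing of the image Y coordinate axis.  A shadow on the
    ground points away from the sun, so its geographic bearing is the sun
    azimuth plus 180 degrees; subtracting the shadow direction measured
    clockwise from the image Y axis yields the bearing of the Y axis."""
    shadow_bearing = (sun_azimuth_deg + 180.0) % 360.0
    return (shadow_bearing - shadow_angle_deg) % 360.0

# Sun at azimuth 135 deg; the shadow lies 45 deg clockwise of the Y axis.
print(first_reference_orientation(135.0, 45.0))  # 270.0
```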
In step S1106, an electronic map containing the geographical position of the first camera is acquired. Here, step S1106 may, for example, acquire the electronic map of the area around the first camera from the map server 130 according to the latitude and longitude.
In step S1107, in the electronic map, a target road near the first camera and a first direction of the target road in the geographic coordinate system are determined.
In step S1108, target tracking is performed on the image frame sequence captured by the first camera to determine the moving direction of the tracked target, and the moving direction is taken as a second direction of the target road in the image coordinate system.
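As an illustration of step S1108, the moving direction of a tracked target can be estimated from its centroid track (a sketch under stated assumptions; the centroid representation and angle convention are illustrative):

```python
import math

def motion_direction_deg(track):
    """Moving direction of a tracked target: the angle, clockwise from the
    image Y axis ("up"), of the net displacement of its centroid.  `track`
    is a list of (x, y) centroids; y grows downward, as is common for
    image coordinates."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    dx, dy = x1 - x0, y1 - y0
    return math.degrees(math.atan2(dx, -dy)) % 360.0

# A target moving straight right across the frame:
print(motion_direction_deg([(10, 50), (60, 50)]))
```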
In step S1109, a second reference orientation of the first camera is determined according to the mapping relationship between the first direction and the second direction.
More specific implementations of steps S1106-S1109 are consistent with method 400 and will not be described herein.
In step S1110, a building area in the image and signboard information corresponding to the building area are detected. Here, the signboard information is, for example, signboard information of an object such as a hotel, a shop, or a bank.
In step S1111, the orientation of the building area in the image is determined.
In step S1112, the landmark building corresponding to the signboard information and the azimuth information of the landmark building are queried from the electronic map.
In step S1113, a third reference orientation of the first camera is determined based on the orientation of the building area in the image and the azimuth information.
More specific implementations of steps S1110-S1113 are consistent with method 600 and will not be described herein.
In step S1114, a road area in the image captured by the first camera is detected.
In step S1115, the extending direction of the road area in the image coordinate system is determined.
In step S1116, traffic sign information corresponding to the road region is detected.
In step S1117, a third direction of the road area in the geographic coordinate system is determined according to the traffic sign information.
In step S1118, a fourth reference orientation of the first camera is determined based on the third direction and the extending direction.
More specific implementations of steps S1114-S1118 are consistent with method 800 and will not be described further herein.
In step S1119, a static object in the image is detected, and the orientation of the static object in the image coordinate system is determined. Here, the static object is, for example, a building or a second camera.
In step S1120, a fourth direction of the static object in the geographic coordinate system is acquired.
In step S1121, a fifth reference orientation of the first camera is determined from the fourth direction and the orientation of the static object in the image coordinate system.
More specific implementations of steps S1119-S1121 are consistent with method 1000 and will not be described here.
In some embodiments, the method 1100 may further include step S1122 of performing a weighted summation of at least two of the first, second, third, fourth, and fifth reference orientations to obtain a calibrated orientation of the first camera.
In some embodiments, step S1122 may perform a weighted summation of the reference orientations that were successfully acquired. For example, if the method 1100 successfully acquires only the first reference orientation and the second reference orientation, step S1122 may perform a weighted summation of the first and second reference orientations.
In some embodiments, step S1122 may use the confidence level of each reference orientation as its weight value. For example, step S1122 may take the confidence level output by the detection algorithm when detecting the shadow region as the confidence level of the first reference orientation, the confidence level output when detecting the road region as that of the second reference orientation, the confidence level output when detecting the signboard information as that of the third reference orientation, the confidence level output when detecting the traffic sign information as that of the fourth reference orientation, and the confidence level output when detecting the static object as that of the fifth reference orientation.
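One way to realise the weighted summation of step S1122 is a confidence-weighted circular mean (an illustrative sketch; the function name is an assumption). Treating each reference orientation as a unit vector avoids the wrap-around problem of naively averaging bearings:

```python
import math

def fuse_orientations(bearings_deg, weights):
    """Confidence-weighted circular mean of reference orientations.
    Each bearing is converted to a unit vector scaled by its weight;
    the fused orientation is the direction of the vector sum."""
    x = sum(w * math.cos(math.radians(b)) for b, w in zip(bearings_deg, weights))
    y = sum(w * math.sin(math.radians(b)) for b, w in zip(bearings_deg, weights))
    return math.degrees(math.atan2(y, x)) % 360.0

# First and second reference orientations, detector confidences as weights:
print(round(fuse_orientations([359.0, 3.0], [0.9, 0.9]), 1))  # 1.0
```

Note the design choice: a naive arithmetic average of 359 and 3 degrees would give 181 degrees, while the circular mean correctly yields about 1 degree.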
In step S1123, a visual field of the first camera in the electronic map is determined according to the calibration orientation. The visual field is, for example, a sector area.
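The sector-shaped visual field can be generated, for example, as a polygon in map coordinates (a sketch; the equirectangular approximation and all parameter names are assumptions, adequate only for ranges of a few hundred metres):

```python
import math

def sector_polygon(lat, lon, bearing_deg, fov_deg, radius_m, n=16):
    """Approximate the camera's visual field as a sector polygon: the
    camera position followed by n+1 points along the arc, centred on
    the calibration orientation `bearing_deg` with angular width
    `fov_deg` and range `radius_m`."""
    pts = [(lat, lon)]
    m_per_deg_lat = 111320.0
    m_per_deg_lon = 111320.0 * math.cos(math.radians(lat))
    for i in range(n + 1):
        b = math.radians(bearing_deg - fov_deg / 2 + fov_deg * i / n)
        pts.append((lat + radius_m * math.cos(b) / m_per_deg_lat,
                    lon + radius_m * math.sin(b) / m_per_deg_lon))
    return pts

poly = sector_polygon(31.23, 121.47, bearing_deg=60.0, fov_deg=90.0, radius_m=200.0)
print(len(poly))  # 18
```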
In summary, through steps S1122 and S1123, the method 1100 can perform data fusion on the orientations determined in various ways, thereby making the determined orientation of the first camera more accurate and improving the accuracy of the visual field of the first camera.
In some embodiments, the method 1100 may further include a step S1124 of determining a target monitoring region in the image captured by the first camera. Here, depending on the application scenario, embodiments of the present application may determine different monitoring objects, for example vehicles or pedestrians. For the application scenario of monitoring vehicles, step S1124 may detect the position of a vehicle in the image, i.e., determine the region corresponding to the vehicle. For the application scenario of monitoring pedestrians, step S1124 may detect the region in the image corresponding to a pedestrian.
In step S1125, an adjustment parameter for the pitch angle of the first camera is determined according to the position of the target monitoring region in the image. For example, suppose the region corresponding to the vehicle is at the upper edge of the image. Step S1125 may then determine an adjustment parameter that increases the pitch angle of the first camera so as to aim its field of view at the vehicle on the road. If the first camera adjusts its pitch angle according to the adjustment parameter, the vehicle in the image captured by the first camera will move from the upper edge toward the middle of the image.
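The adjustment parameter of step S1125 can be sketched as a proportional correction on the vertical offset of the target monitoring region (illustrative only; the gain and dead-zone values are assumptions):

```python
def pitch_adjustment(target_cy: float, image_h: int,
                     gain_deg_per_frac: float = 20.0,
                     dead_zone: float = 0.1) -> float:
    """Proportional pitch correction in degrees that moves the target
    monitoring region toward the vertical centre of the frame.  A target
    near the upper edge yields a positive value, i.e. increase the pitch
    angle; offsets inside the dead zone need no adjustment."""
    offset = 0.5 - target_cy / image_h   # > 0 when the target is above centre
    if abs(offset) < dead_zone:
        return 0.0
    return gain_deg_per_frac * offset

# Vehicle region centred 54 px from the top of a 1080-px-high frame:
print(pitch_adjustment(54.0, 1080))
```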
In summary, through steps S1124 and S1125, the method 1100 can determine the adjustment parameter for the pitch angle of the first camera by analyzing the position of the monitoring object in the image, so that the angle at which the first camera photographs the monitoring object can be optimized.
Fig. 12 illustrates a schematic diagram of a visual field generation apparatus 1200 according to some embodiments of the present application. The apparatus 1200 may be deployed, for example, in the computing device 120 or the first camera.
As shown in fig. 12, the visual field generation apparatus 1200 includes: a position acquisition unit 1201, a sun azimuth determination unit 1202, a shadow detection unit 1203, a shadow direction determination unit 1204, and an orientation determination unit 1205.
A position acquisition unit 1201 acquires the geographical position of the first camera.
A sun azimuth determination unit 1202, which determines the azimuth angle of the sun according to the geographical position.
The shadow detection unit 1203 detects a region of the target object and a shadow region of the target object under the sun light in the image captured by the first camera.
A shadow direction determining unit 1204, which determines a shadow direction of the shadow region in an image coordinate system according to the region of the target object and the shadow region.
An orientation determination unit 1205 determines a first reference orientation of the first camera based on the azimuth angle of the sun and the shadow direction. More specific embodiments of the apparatus 1200 are consistent with the method 200 and will not be described herein.
In summary, the visual field generation apparatus 1200 according to the embodiments of the present application can determine the orientation of the first camera from the shadow direction of the target object in the image and the azimuth angle of the sun, without acquiring point-of-interest information from the electronic map (for example, information on shops, banks, hotels, and the like around the first camera). On this basis, once the electronic map is acquired, the embodiments of the present application can generate the visual field of the first camera in the electronic map according to the orientation of the first camera.
Fig. 13 illustrates a schematic diagram of a visual field generation apparatus 1300 according to some embodiments of the present application. The apparatus 1300 may be deployed, for example, in the computing device 120 or the first camera.
As shown in fig. 13, apparatus 1300 may include: first reference orientation determining unit 1301, second reference orientation determining unit 1302, third reference orientation determining unit 1303, fourth reference orientation determining unit 1304, fifth reference orientation determining unit 1305, calibrating unit 1306, and region recommending unit 1307.
The first reference orientation determining unit 1301 may, for example, perform operations consistent with the method 200. The second reference orientation determining unit 1302 may perform operations consistent with the method 400. The third reference orientation determining unit 1303 may perform operations consistent with the method 600. The fourth reference orientation determining unit 1304 may perform operations consistent with the method 800. The fifth reference orientation determining unit 1305 may perform operations consistent with the method 1000.
The calibration unit 1306 performs a weighted summation of at least two of the first, second, third, fourth, and fifth reference orientations to obtain a calibration orientation of the first camera. On this basis, the calibration unit 1306 determines the visual field of the first camera in the electronic map according to the calibration orientation. Here, the calibration unit 1306 performs data fusion on the orientations determined in multiple ways, thereby making the determined orientation of the first camera more accurate and improving the accuracy of the visual field of the first camera.
The area recommending unit 1307 determines a target monitoring region in the image acquired by the first camera. Here, depending on the application scenario, embodiments of the present application may determine different monitoring objects, such as vehicles or pedestrians. The area recommending unit 1307 then determines an adjustment parameter for the pitch angle of the first camera according to the position of the target monitoring region in the image.
Here, the region recommending unit 1307 can determine an adjustment parameter for the pitch angle of the first camera by analyzing the position of the monitoring object in the image, so that the shooting angle of the monitoring object by the first camera can be optimized.
FIG. 14 illustrates a schematic diagram of a computing device according to some embodiments of the present application. As shown in fig. 14, the computing device includes one or more processors (CPUs) 1402, a communication module 1404, a memory 1406, a user interface 1410, and a communication bus 1408 for interconnecting these components.
The processor 1402 can receive and transmit data via the communication module 1404 to enable network communication and/or local communication.
User interface 1410 includes one or more output devices 1412 including one or more speakers and/or one or more visual displays. The user interface 1410 also includes one or more input devices 1414. The user interface 1410 may receive, for example, an instruction of a remote controller, but is not limited thereto.
Memory 1406 may be high speed random access memory such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; or non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
Memory 1406 stores sets of instructions executable by processor 1402, including:
an operating system 1416, including programs for handling various basic system services and for performing hardware related tasks;
the application 1418, which includes various programs for implementing the above-described visual field generation method, may include, for example, the visual field generation apparatus 1200 or 1300.
In addition, each of the embodiments of the present application can be realized by a data processing program executed by a data processing apparatus such as a computer. Obviously, such a data processing program constitutes the present application. Further, a data processing program is generally stored in a storage medium and is executed by reading it directly out of the storage medium or by installing or copying it into a storage device (such as a hard disk and/or a memory) of the data processing apparatus. Such a storage medium therefore also constitutes the present application. The storage medium may use any type of recording means, such as a paper storage medium (e.g., paper tape), a magnetic storage medium (e.g., a flexible disk, a hard disk, a flash memory), an optical storage medium (e.g., a CD-ROM), a magneto-optical storage medium (e.g., an MO), and the like.
The present application thus also discloses a non-volatile storage medium in which a program is stored. The program includes instructions that, when executed by a processor, cause a computing device to perform a visual field generation method according to the present application.
In addition, the method steps described in this application may be implemented by hardware, for example, logic gates, switches, Application Specific Integrated Circuits (ASICs), programmable logic controllers, embedded microcontrollers, and the like, in addition to data processing programs. Such hardware capable of implementing the methods described herein may also constitute the present application.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of the present application.
Claims (10)
1. A visual field generation method, comprising:
acquiring the geographic position of a first camera;
determining the azimuth angle of the sun according to the geographic position;
detecting a region of a target object and a shadow region of the target object under the sun light in an image acquired by a first camera;
determining the shadow direction of the shadow area in an image coordinate system according to the area of the target object and the shadow area;
determining a first reference orientation of the first camera based on the azimuth angle of the sun and the shadow direction.
2. The visual field generation method of claim 1, further comprising:
acquiring an electronic map containing the geographic position;
determining, in the electronic map, a target road near the first camera and a first direction of the target road in a geographic coordinate system;
carrying out target tracking on the image frame sequence acquired by the first camera to determine the moving direction of a tracked target, and taking the moving direction as a second direction of the target road in an image coordinate system;
and determining a second reference orientation of the first camera according to the mapping relation between the first direction and the second direction.
3. The visual field generation method of claim 1, further comprising:
detecting a building area in the image and signboard information corresponding to the building area;
determining an orientation of the building area in the image;
querying a landmark building corresponding to the signboard information and azimuth information of the landmark building from an electronic map;
determining a third reference orientation of the first camera based on the orientation of the building area in the image and the azimuth information.
4. The visual field generation method of claim 1, further comprising:
detecting a road area in the image;
determining the extending direction of the road area in an image coordinate system;
detecting traffic sign information corresponding to the road area;
determining a third direction of the road area in a geographic coordinate system according to the traffic sign information;
determining a fourth reference orientation of the first camera based on the third direction and the extension direction.
5. The visual field generation method of claim 1, further comprising:
detecting a static object in the image and determining an orientation of the static object, such as a building or a second camera, in an image coordinate system;
acquiring a fourth direction of the static object in a geographic coordinate system;
determining a fifth reference orientation of the first camera based on the fourth direction and the orientation of the static object in the image coordinate system.
6. The visual field generation method of claim 1, further comprising:
weighting and summing at least two reference orientations of the first reference orientation, the second reference orientation, the third reference orientation, the fourth reference orientation and the fifth reference orientation to obtain a calibration orientation of the first camera;
and determining the visual field of the first camera in the electronic map according to the calibration orientation.
7. The visual field generation method of claim 1, further comprising:
determining a target monitoring area in the image;
and determining an adjustment parameter for the pitch angle of the first camera according to the position of the target monitoring area in the image.
8. A visual field generation apparatus, comprising:
a position acquisition unit which acquires the geographical position of the first camera;
the sun azimuth determining unit is used for determining the azimuth angle of the sun according to the geographic position;
the shadow detection unit is used for detecting the area of the target object in the image collected by the first camera and the shadow area of the target object under the sunlight;
a shadow direction determining unit which determines a shadow direction of the shadow region in an image coordinate system according to the region of the target object and the shadow region;
an orientation determination unit, which determines a first reference orientation of the first camera according to the azimuth angle of the sun and the shadow direction.
9. A computing device, comprising:
a memory;
a processor;
a program stored in the memory and configured to be executed by the processor, the program comprising instructions for performing the visual field generation method of any of claims 1-7.
10. A storage medium storing a program comprising instructions that, when executed by a computing device, cause the computing device to perform the visual field generation method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010145503.3A CN113362392B (en) | 2020-03-05 | 2020-03-05 | Visual field generation method, device, computing equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113362392A true CN113362392A (en) | 2021-09-07 |
CN113362392B CN113362392B (en) | 2024-04-23 |
Family
ID=77523554
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010145503.3A Active CN113362392B (en) | 2020-03-05 | 2020-03-05 | Visual field generation method, device, computing equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113362392B (en) |
Citations (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06167333A (en) * | 1992-11-27 | 1994-06-14 | Mitsubishi Electric Corp | Device for determining absolute azimuth |
JP2005012415A (en) * | 2003-06-18 | 2005-01-13 | Matsushita Electric Ind Co Ltd | System and server for monitored video image monitoring and monitored video image generating method |
JP2007259002A (en) * | 2006-03-23 | 2007-10-04 | Fujifilm Corp | Image reproducing apparatus, its control method, and its control program |
US20110199479A1 (en) * | 2010-02-12 | 2011-08-18 | Apple Inc. | Augmented reality maps |
CN102177719A (en) * | 2009-01-06 | 2011-09-07 | 松下电器产业株式会社 | Apparatus for detecting direction of image pickup device and moving body comprising same |
JP2014185908A (en) * | 2013-03-22 | 2014-10-02 | Pasco Corp | Azimuth estimation device and azimuth estimation program |
CN104281840A (en) * | 2014-09-28 | 2015-01-14 | 无锡清华信息科学与技术国家实验室物联网技术中心 | Method and device for positioning and identifying building based on intelligent terminal |
US20150125035A1 (en) * | 2013-11-05 | 2015-05-07 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and storage medium for position and orientation measurement of a measurement target object |
CN104639824A (en) * | 2013-11-13 | 2015-05-20 | 杭州海康威视***技术有限公司 | Electronic map based camera control method and device |
CN104717462A (en) * | 2014-01-03 | 2015-06-17 | 杭州海康威视***技术有限公司 | Supervision video extraction method and device |
CN105389375A (en) * | 2015-11-18 | 2016-03-09 | 福建师范大学 | Viewshed based image index setting method and system, and retrieving method |
CN106331618A (en) * | 2016-08-22 | 2017-01-11 | 浙江宇视科技有限公司 | Method and device for automatically confirming visible range of camera |
CN108038897A (en) * | 2017-12-06 | 2018-05-15 | 北京像素软件科技股份有限公司 | Shadow map generation method and device |
CN108921900A (en) * | 2018-07-18 | 2018-11-30 | 江苏实景信息科技有限公司 | A kind of method and device in the orientation of monitoring video camera |
CN108965687A (en) * | 2017-05-22 | 2018-12-07 | 阿里巴巴集团控股有限公司 | Shooting direction recognition methods, server and monitoring method, system and picture pick-up device |
US20190164309A1 (en) * | 2017-11-29 | 2019-05-30 | Electronics And Telecommunications Research Institute | Method of detecting shooting direction and apparatuses performing the same |
KR20190063350A (en) * | 2017-11-29 | 2019-06-07 | 한국전자통신연구원 | Method of detecting a shooting direction and apparatuses performing the same |
CN110176030A (en) * | 2019-05-24 | 2019-08-27 | 中国水产科学研究院 | A kind of autoegistration method, device and the electronic equipment of unmanned plane image |
CN110243364A (en) * | 2018-03-07 | 2019-09-17 | 杭州海康机器人技术有限公司 | Unmanned plane course determines method, apparatus, unmanned plane and storage medium |
CN110458895A (en) * | 2019-07-31 | 2019-11-15 | 腾讯科技(深圳)有限公司 | Conversion method, device, equipment and the storage medium of image coordinate system |
CN111526291A (en) * | 2020-04-29 | 2020-08-11 | 济南博观智能科技有限公司 | Method, device and equipment for determining monitoring direction of camera and storage medium |
CN112101339A (en) * | 2020-09-15 | 2020-12-18 | 北京百度网讯科技有限公司 | Map interest point information acquisition method and device, electronic equipment and storage medium |
WO2022217877A1 (en) * | 2021-04-12 | 2022-10-20 | 浙江商汤科技开发有限公司 | Map generation method and apparatus, and electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113362392B (en) | 2024-04-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Manweiler et al. | Satellites in our pockets: an object positioning system using smartphones | |
CN104748728B (en) | Intelligent machine attitude matrix calculation method and its applied to photogrammetric method | |
CN103134489B (en) | The method of target localization is carried out based on mobile terminal | |
US9497581B2 (en) | Incident reporting | |
EP3593324B1 (en) | Target detection and mapping | |
CN104428817A (en) | Sensor-aided wide-area localization on mobile devices | |
CN109596121B (en) | Automatic target detection and space positioning method for mobile station | |
CN101506850A (en) | Modeling and texturing digital surface models in a mapping application | |
US11682103B2 (en) | Selecting exterior images of a structure based on capture positions of indoor images associated with the structure | |
CN103874193A (en) | Method and system for positioning mobile terminal | |
US20090086020A1 (en) | Photogrammetric networks for positional accuracy | |
US20200320732A1 (en) | Vision-Enhanced Pose Estimation | |
CN106537409B (en) | Determining compass fixes for imagery | |
CN101729765B (en) | Image pickup device for providing subject GPS coordinate and method for detecting subject GPS coordinate | |
Masiero et al. | Toward the use of smartphones for mobile mapping | |
KR100679864B1 (en) | Cellular phone capable of displaying geographic information and a method thereof | |
Elias et al. | Photogrammetric water level determination using smartphone technology | |
Zhang et al. | Online ground multitarget geolocation based on 3-D map construction using a UAV platform | |
Debnath et al. | Tagpix: Automatic real-time landscape photo tagging for smartphones | |
US11481920B2 (en) | Information processing apparatus, server, movable object device, and information processing method | |
CN113362392B (en) | Visual field generation method, device, computing equipment and storage medium | |
CN107449432A (en) | One kind utilizes dual camera air navigation aid, device and terminal | |
Jeon et al. | Design of positioning DB automatic update method using Google tango tablet for image based localization system | |
CN111121825B (en) | Method and device for determining initial navigation state in pedestrian inertial navigation system | |
Moun et al. | Localization and building identification in outdoor environment for smartphone using integrated GPS and camera |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||