CN113362392B - Visual field generation method, device, computing equipment and storage medium - Google Patents

Visual field generation method, device, computing equipment and storage medium

Info

Publication number
CN113362392B
Authority
CN
China
Prior art keywords
camera
image
orientation
determining
coordinate system
Prior art date
Legal status
Active
Application number
CN202010145503.3A
Other languages
Chinese (zh)
Other versions
CN113362392A (en)
Inventor
浦世亮
郭阶添
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202010145503.3A priority Critical patent/CN113362392B/en
Publication of CN113362392A publication Critical patent/CN113362392A/en
Application granted granted Critical
Publication of CN113362392B publication Critical patent/CN113362392B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 - Geographical information databases
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/292 - Multi-camera tracking
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30232 - Surveillance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30244 - Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a visual field generation method, a visual field generation device, a computing device and a storage medium. The visual field generation method comprises: acquiring the geographic position of a first camera; determining an azimuth angle of the sun according to the geographic position; detecting, in an image acquired by the first camera, a region of a target object and a shadow region of the target object under the sun's rays; determining a shadow direction of the shadow region in an image coordinate system according to the region of the target object and the shadow region; and determining a first reference orientation of the first camera according to the azimuth angle of the sun and the shadow direction.

Description

Visual field generation method, device, computing equipment and storage medium
Technical Field
The present application relates to the field of video monitoring technologies, and in particular, to a method, an apparatus, a computing device, and a storage medium for generating a visual field.
Background
In application scenarios such as video monitoring, the visual field of an image acquisition device such as a camera needs to be generated in an electronic map. The visual field is the field-of-view coverage of the image acquisition device. In a two-dimensional electronic map, the visual field of the image acquisition device may be represented as a sector area.
In order to determine the visual field of the image acquisition device, the current visual field analysis scheme needs to acquire the location information of the image acquisition device, and further acquire key location information (such as store information, bank information, hotel information, etc.) in the vicinity of the image acquisition device from an electronic map. Then, according to the picture shot by the image acquisition device, the identifiable information in the picture is matched with the key location information to complete the analysis of the visual field.
However, when the key location information cannot be obtained, current visual field analysis schemes cannot determine the visual field of the image acquisition device.
Therefore, how to determine the visual field without key location information is a technical problem that needs to be solved.
Disclosure of Invention
The application provides a visual field generating method, a visual field generating device, a computing device and a storage medium, which can determine the visual field of a camera without key location information.
According to an aspect of the present application, there is provided a visual field generating method including:
acquiring the geographic position of a first camera;
Determining an azimuth angle of the sun according to the geographic position;
Detecting a region of a target object and a shadow region of the target object under the sun ray in an image acquired by a first camera;
Determining a shadow direction of the shadow region in an image coordinate system according to the region of the target object and the shadow region;
a first reference orientation of the first camera is determined based on the azimuth and shadow directions of the sun.
In some embodiments, the above method further comprises:
acquiring an electronic map containing the geographic position;
Determining a target road near the first camera and a first direction of the target road in a geographic coordinate system in the electronic map;
Performing target tracking on the image frame sequence acquired by the first camera to determine the moving direction of a tracked target, and taking the moving direction as a second direction of the target road in an image coordinate system;
and determining a second reference orientation of the first camera according to the mapping relation between the first direction and the second direction.
In some embodiments, the above method further comprises:
detecting a building area in the image and sign information corresponding to the building area;
Determining an orientation of the building area in the image;
Inquiring landmark buildings and azimuth information of landmark buildings corresponding to the signboard information from an electronic map;
A third reference orientation of the first camera is determined based on the orientation of the architectural area in the image and the orientation information.
In some embodiments, the above method further comprises:
detecting a road area in the image;
Determining the extending direction of the road area in an image coordinate system;
Detecting traffic sign information corresponding to the road area;
Determining a third direction of the road area in a geographic coordinate system according to the traffic sign information;
A fourth reference orientation of the first camera is determined based on the third direction and the extension direction.
In some embodiments, the above method further comprises:
detecting a static object, such as a building or a second camera, in the image, and determining the orientation of the static object in an image coordinate system;
Acquiring a fourth direction of the static object in a geographic coordinate system;
And determining a fifth reference orientation of the first camera according to the fourth direction and the orientation of the static object in an image coordinate system.
In some embodiments, the above method further comprises:
performing weighted summation on at least two of the first reference orientation, the second reference orientation, the third reference orientation, the fourth reference orientation, and the fifth reference orientation to obtain a calibration orientation of the first camera;
And determining the visual field of the first camera in the electronic map according to the calibration orientation.
In some embodiments, the above method further comprises:
determining a target monitoring area in the image;
and determining an adjusting parameter for the pitch angle of the first camera according to the position of the target detection area in the image.
According to an aspect of the present application, there is provided a visual field generating apparatus including:
A position acquisition unit that acquires a geographic position of the first camera;
a sun azimuth determining unit for determining an azimuth angle of the sun according to the geographic position;
The shadow detection unit is used for detecting the area of the target object in the image acquired by the first camera and the shadow area of the target object under the sun ray;
A shadow direction determining unit that determines a shadow direction of the shadow region in an image coordinate system according to a region of the target object and the shadow region;
And an orientation determining unit for determining a first reference orientation of the first camera according to the azimuth angle and the shadow direction of the sun.
According to an aspect of the present application, there is provided a computing device comprising: a memory; a processor; a program stored in the memory and configured to be executed by the processor, the program including instructions for executing the above-described visual field generating method.
According to an aspect of the present application, there is provided a storage medium storing a program comprising instructions that, when executed by a computing device, cause the computing device to perform the above-described visual field generation method.
In summary, according to the visual field generating scheme of the application, even when key location information in the electronic map (such as information about shops, banks, or hotels around the first camera) is not acquired, the shadow direction of the target object in the image can be analyzed, and the orientation of the first camera can then be determined from the shadow direction and the azimuth angle of the sun. On this basis, when the electronic map is acquired, the embodiment of the application can generate the visual field of the first camera in the electronic map according to the orientation of the first camera.
Drawings
FIG. 1 illustrates a schematic diagram of an application scenario according to some embodiments of the application;
FIG. 2 illustrates a flow chart of a method 200 of visual field generation according to some embodiments of the application;
FIG. 3 shows a schematic view of a utility pole taken by a first camera;
FIG. 4 illustrates a flow chart of a method 400 of generating a visual field according to some embodiments of the application;
FIG. 5A illustrates a schematic diagram of a target link in an electronic map according to some embodiments of the application;
FIG. 5B illustrates a schematic view of a road region in an image according to some embodiments of the application;
FIG. 6 illustrates a flow chart of a method 600 of generating a visual field according to some embodiments of the application;
FIG. 7A illustrates a schematic view of a building area in an image according to some embodiments of the application;
FIG. 7B illustrates a schematic diagram of landmark buildings in an electronic map according to some embodiments of the application;
FIG. 8 illustrates a flow chart of a method 800 of generating a visual field according to some embodiments of the application;
FIG. 9 illustrates a schematic view of an image captured by a first camera according to some embodiments of the application;
FIG. 10 illustrates a flow chart of a method 1000 of generating a visual field according to some embodiments of the application;
FIG. 11 illustrates a flow chart of a method 1100 of visual field generation according to some embodiments of the application;
FIG. 12 illustrates a schematic diagram of a visual field generating apparatus 1200 according to some embodiments of the application;
FIG. 13 illustrates a schematic diagram of a visual field generating apparatus 1300 according to some embodiments of the application;
FIG. 14 illustrates a schematic diagram of a computing device according to some embodiments of the application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below by referring to the accompanying drawings and examples.
Fig. 1 illustrates a schematic diagram of an application scenario according to some embodiments of the application. As shown in fig. 1, the application scenario may include at least one camera (e.g., 110-1, 110-2, and 110-N shown in fig. 1), a computing device 120, and a map server 130. Wherein N is a positive integer.
Cameras may be deployed at various shooting locations.
The map server 130 may store map information.
The computing device 120 is, for example, a server, a notebook computer, a tablet computer, or the like. The computing device 120 may obtain images captured by the camera that require visual field analysis and the geographic location of the camera. The geographic location is a positioning result such as longitude and latitude.
The computing device 120 may communicate with a camera, for example, to obtain images and positioning information. In addition, the images and geographic location of the camera may be stored in a data storage device (not shown in FIG. 1) such as a video recorder. The computing device 120 may also obtain images and geographic locations of the cameras from a data storage device.
In some embodiments, the computing device 120 may also communicate with a map server 130 to obtain an electronic map. The computing device 120 may analyze the visual field of the camera based on the electronic map, the image of the camera, and the geographic location.
In some embodiments, the camera of FIG. 1 may also acquire an electronic map. In addition, the multiple cameras in FIG. 1 may send images and positioning information to one camera node, and the visual field analysis of the multiple cameras may be performed by that camera node.
Fig. 2 illustrates a flow chart of a method 200 of visual field generation according to some embodiments of the application. The method 200 may be performed, for example, by the computing device 120 or the first camera, as the application is not limited in this regard. Here, the first camera may be any one of the cameras of fig. 1, but is not limited thereto.
As shown in fig. 2, in step S201, the geographic position of the first camera is acquired. Here, the geographic location may be, for example, latitude and longitude. In some embodiments, if the longitude and latitude of the first camera cannot be obtained, step S201 may replace the longitude and latitude of the first camera with the longitude and latitude of the region where the first camera is located.
In step S202, the azimuth angle of the sun is determined from the geographical position. The azimuth angle of the sun is the angle between the projection of the sun's rays on the ground plane and the local meridian; it can be approximated as the angle between the shadow cast on the ground by a vertical line and due south.
In step S203, the area of the target object and the shadow area of the target object under the sun's rays in the image acquired by the first camera are detected. Here, the target object may be, for example, a pedestrian, a utility pole, or an automobile in the scene captured by the first camera. Taking the utility pole as an example, step S203 may determine the image area of the pole body and the shadow area of the pole (i.e., the projection of the pole under the sun's rays).
In step S204, a shadow direction of the shadow region in the image coordinate system is determined from the region of the target object and the shadow region. Taking the utility pole above as an example, the shadow direction is the extending direction of the shadow area of the utility pole. For example, fig. 3 shows a schematic view of a utility pole taken by a first camera. As shown in fig. 3, step S204 may detect the utility pole 301 of fig. 3 and the shaded area 302 of the utility pole 301. The direction of the shaded area 302 in the image coordinate system O (X, Y) is indicated by arrow 303.
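As an illustration of how the shadow direction of step S204 might be measured (a sketch under assumptions, not the application's prescribed procedure), the dominant direction of the shadow region can be obtained by fitting the principal axis of its pixels. The binary shadow mask, the base point of the target object, and the function interface below are assumed inputs produced by some earlier detection step.

```python
import numpy as np

def shadow_direction_deg(shadow_mask: np.ndarray, base_point: np.ndarray) -> float:
    """Estimate the shadow direction as an angle (degrees) from the image X axis.

    shadow_mask: HxW boolean array marking the shadow pixels of the target object.
    base_point:  (x, y) pixel where the object meets the ground, used to resolve
                 the 180-degree ambiguity of the fitted axis.
    """
    ys, xs = np.nonzero(shadow_mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centered = pts - pts.mean(axis=0)
    # The first right-singular vector is the principal axis of the shadow pixels.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    # Point the axis from the object's base toward the shadow's centroid.
    if float(np.dot(pts.mean(axis=0) - base_point, axis)) < 0.0:
        axis = -axis
    # Image Y grows downward, so negate y to obtain a conventional angle.
    return float(np.degrees(np.arctan2(-axis[1], axis[0])) % 360.0)
```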
In step S205, a first reference orientation of the first camera is determined based on the azimuth angle of the sun and the shadow direction. Here, the first reference orientation is the orientation of the first camera determined in step S205. The orientation of the first camera is its orientation in the horizontal plane, i.e. the viewing direction in the horizontal plane. For example, let the azimuth angle of the sun be As; the angle between the sun's direction and due north is then 180 - As. Let the angle between the shadow direction and the X-axis of the image coordinate system be α, and let the first reference orientation of the first camera be θ. Then θ can be calculated according to the following formula:
θ = 360 - α - As
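A minimal sketch of the relation θ = 360 - α - As described above; it assumes α and As are already expressed in degrees and simply wraps the result into [0, 360).

```python
def first_reference_orientation(shadow_angle_deg: float, sun_azimuth_deg: float) -> float:
    """First reference orientation θ = 360 - α - As, wrapped into [0, 360)."""
    return (360.0 - shadow_angle_deg - sun_azimuth_deg) % 360.0
```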
In summary, without acquiring key location information in the electronic map (such as information about shops, banks, or hotels around the first camera), the visual field generation method 200 of the present application can analyze the shadow direction of the target object in the image and then determine the orientation of the first camera from the shadow direction and the azimuth angle of the sun. On this basis, when the electronic map is acquired, the embodiment of the application can generate the visual field of the first camera in the electronic map according to the orientation of the first camera. The visual field is, for example, a sector.
In some embodiments, step S202 above may calculate the azimuth angle of the sun and the altitude angle of the sun from the longitude and latitude and a sun-earth model. The solar altitude is the angle between the incident direction of the solar rays and the ground plane.
The altitude angle of the sun is calculated as follows:
sin Hs = sin φ × sin δ + cos φ × cos δ × cos t
the calculation formula of the azimuth angle of the sun is as follows:
cos As = (sin Hs × sin φ - sin δ) / (cos Hs × cos φ)
Wherein Hs is the altitude angle of the sun, As is the azimuth angle of the sun, φ is the geographic latitude, δ is the solar declination, and t is the hour angle.
The calculation formula of solar declination is as follows:
δ = 0.006918 - 0.399912 cos b + 0.070257 sin b - 0.006758 cos 2b + 0.000907 sin 2b - 0.002697 cos 3b + 0.00148 sin 3b
Where b = 2π × (N - 1) / 365, and N is the day of the year counted from January 1.
The calculation formula of the hour angle is as follows:
t = (t0 - 12) × 15, where t0 is the true solar time, and t0 = current time - (timekeeping longitude - current longitude) × 4 minutes. The timekeeping longitude here is the prime meridian, i.e., 0° longitude.
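The declination, hour-angle, altitude and azimuth formulas above can be chained as in the following sketch. The time-zone handling, the clamping of trigonometric arguments, and the function interface are assumptions of this sketch rather than part of the application; the azimuth follows the document's convention and ignores the morning/afternoon sign ambiguity.

```python
import math
from datetime import datetime

def sun_altitude_azimuth(lat_deg: float, lon_deg: float, when_utc: datetime):
    """Return (Hs, As) in degrees using the formulas given above."""
    phi = math.radians(lat_deg)
    n = when_utc.timetuple().tm_yday                      # day of year, from January 1
    b = 2.0 * math.pi * (n - 1) / 365.0
    delta = (0.006918 - 0.399912 * math.cos(b) + 0.070257 * math.sin(b)
             - 0.006758 * math.cos(2 * b) + 0.000907 * math.sin(2 * b)
             - 0.002697 * math.cos(3 * b) + 0.00148 * math.sin(3 * b))
    # True solar time: clock time (UTC, timekeeping longitude 0) corrected by
    # 4 minutes per degree of longitude difference; equation of time omitted.
    clock_hours = when_utc.hour + when_utc.minute / 60.0 + when_utc.second / 3600.0
    t0 = clock_hours - (0.0 - lon_deg) * 4.0 / 60.0
    t = math.radians((t0 - 12.0) * 15.0)                  # hour angle
    sin_hs = math.sin(phi) * math.sin(delta) + math.cos(phi) * math.cos(delta) * math.cos(t)
    hs = math.asin(max(-1.0, min(1.0, sin_hs)))
    cos_as = ((math.sin(hs) * math.sin(phi) - math.sin(delta))
              / (math.cos(hs) * math.cos(phi)))
    a_s = math.acos(max(-1.0, min(1.0, cos_as)))
    return math.degrees(hs), math.degrees(a_s)
```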
It is further noted that embodiments of the present application may determine the orientation of the first camera in other ways than by using the shadow direction. The following is a description with reference to fig. 4.
Fig. 4 illustrates a flow chart of a method 400 of generating a visual field according to some embodiments of the application. The method 400 may be performed, for example, by the computing device 120 or the first camera.
In step S401, an electronic map containing the geographic position of the first camera is acquired. Here, step S401 may, for example, acquire the electronic map of the area around the first camera from the map server 130 according to the latitude and longitude.
In step S402, in the electronic map, a target road in the vicinity of the first camera and a first direction of the target road in a geographic coordinate system are determined. In other words, step S402 may determine, from the electronic map, the target road near the first camera and the direction of the target road. Here, the target road refers to a one-way road near the first camera.
In step S403, object tracking is performed on the image frame sequence acquired by the first camera, the moving direction of the tracked object is determined, and the moving direction is taken as the second direction of the object road in the image coordinate system. Here, the moving object is, for example, a pedestrian or a vehicle. Step S403 may determine the moving direction of the tracked target based on various target tracking algorithms.
In step S404, a second reference orientation of the first camera is determined according to the mapping between the first direction and the second direction. Here, the second reference orientation is the orientation of the first camera determined in step S404. The first reference orientation and the second reference orientation are orientations of the first camera determined differently.
In some scenarios, there is only one one-way target road near the first camera. The target road determined in step S402 is shown, for example, in FIG. 5A. Step S402 may determine the target road 501 in the electronic map. In FIG. 5A, the coordinate system X1Y1 is the geographic coordinate system: the positive direction of the coordinate axis X1 is due east, and the positive direction of the coordinate axis Y1 is due north. FIG. 5B shows the road area in the image (i.e., the image of the target road 501). A coordinate system X2Y2 is established with a point on the road area in the image as the origin; the coordinate axis X2 is parallel to the X-axis of the image and Y2 is parallel to the Y-axis. The moving direction of the target object is the direction indicated by arrow 502. Step S403 may determine that the angle between the road in the image and X2 is α; the angle α may represent the second direction. Step S402 may determine that the angle between the target road 501 and X1 is β; the angle β may represent the first direction. On this basis, step S404 may determine the angle θ between the coordinate axis Y2 (which coincides with the Y-axis direction of the image coordinate system) and the X1 axis of the geographic coordinate system. The angle θ may represent the second reference orientation, and can be calculated, for example, according to the following formula.
θ=90-α+β
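A minimal sketch of the mapping θ = 90 - α + β used above; the angles are assumed to be in degrees and measured as in FIG. 5A and FIG. 5B.

```python
def second_reference_orientation(road_image_angle_deg: float, road_geo_angle_deg: float) -> float:
    """Second reference orientation θ = 90 - α + β, wrapped into [0, 360).

    road_image_angle_deg: α, the angle between the road and the X2 axis in the image.
    road_geo_angle_deg:   β, the angle between the target road and the X1 (east) axis.
    """
    return (90.0 - road_image_angle_deg + road_geo_angle_deg) % 360.0
```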
In summary, the method 400 according to the embodiment of the present application may acquire the first direction of the target road near the first camera. Through target tracking, the method 400 according to an embodiment of the present application may determine a second direction of the target link in the image coordinate system. On this basis, the method 400 may determine the orientation of the first camera by a mapping relationship according to the first direction and the second direction.
Fig. 6 illustrates a flow chart of a method 600 of generating a visual field according to some embodiments of the application. The method 600 may be performed, for example, by the computing device 120 or the first camera.
As shown in fig. 6, in step S601, a building area in an image, and sign information corresponding to the building area, are detected. Here, the sign information is, for example, the signboard of an object such as a hotel, a shop, or a bank. Step S601 may perform object detection on the image to determine the building area. The building area may be regarded as a landmark building to be identified.
In step S602, the orientation of the building area in the image is determined. In some embodiments, step S602 may, for example, detect an edge line of the building area parallel to the ground and take the direction in the image perpendicular to that edge line as the orientation of the building area in the image. Other image processing algorithms may also be used to determine the orientation, which is not limited by the present application.
In step S603, landmark buildings and azimuth information of landmark buildings corresponding to the sign information are queried from the electronic map. In some embodiments, step S603 may query the electronic map for landmark buildings near the first camera and then image match the sign information with the landmark buildings to determine landmark buildings corresponding to the building area.
In step S604, a third reference orientation of the first camera is determined from the orientation of the building area in the image and the azimuth information. Here, the orientation of the building area in the image is the orientation, in the image coordinate system, of the landmark building corresponding to the building area. The orientation of the first camera in the image coordinate system may be regarded as vertically upward. Step S604 may determine the third reference orientation of the first camera based on the mapping between the orientation in the image and the azimuth information (i.e., the orientation of the landmark building in the geographic coordinate system). For example, FIG. 7A shows a schematic view of a building area in an image. Step S601 may determine the building area 701 and the sign information 702 in FIG. 7A. Step S602 may determine the orientation 703 of the building area 701 in the image coordinate system. FIG. 7B shows a schematic diagram of the landmark building in the map. The landmark building 704 in FIG. 7B corresponds to the building area 701 in FIG. 7A. Step S603 may determine that the direction 705 of the landmark building 704 makes an angle a with due east. Direction 706 in FIG. 7A coincides with direction 705 in the geographic coordinate system. The third reference orientation (i.e., the direction of the y coordinate axis of the image coordinate system) is b = 180 - a.
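As one possible illustration of steps S603 and S604 (a sketch under assumptions, not the application's exact matching procedure), the recognized sign text can be matched against landmark names queried from the electronic map, and the relation b = 180 - a applied to the matched landmark. The dictionary format and the substring matching below are assumptions.

```python
from typing import Dict, Optional

def third_reference_orientation(sign_text: str,
                                landmark_facings_deg: Dict[str, float]) -> Optional[float]:
    """Match recognized sign text against landmark names from the electronic map.

    landmark_facings_deg maps a landmark name to the angle a between its facing
    direction and due east (degrees).  Returns b = 180 - a for the first match,
    or None if no landmark name matches the sign text.
    """
    for name, a in landmark_facings_deg.items():
        if name and (name in sign_text or sign_text in name):
            return (180.0 - a) % 360.0
    return None
```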
In summary, the method 600 for generating a visual field according to an embodiment of the present application can determine the orientation of the first camera by identifying sign information corresponding to a building area in an image and the orientation of the building area in an image coordinate system, and determining azimuth information of a landmark building corresponding to the sign information through an electronic map.
Fig. 8 illustrates a flow chart of a method 800 of generating a visual field according to some embodiments of the application. The method 800 may be performed, for example, by the computing device 120 or the first camera.
As shown in fig. 8, in step S801, a road area in an image acquired by a first camera is detected.
In step S802, the direction in which the road area extends in the image coordinate system is determined. In some embodiments, step S802 may detect a moving direction of a moving object (e.g., an automobile or a pedestrian) in a road area based on a sequence of image frames acquired by the first camera, and take the moving direction of the moving object as an extension direction of the road area.
In step S803, traffic sign information corresponding to the road area is detected. Here, the traffic sign information is, for example, the indication information on a traffic sign board. The traffic sign information includes, for example, information indicating the direction of the road, such as "east" or "south".
In step S804, a third direction of the road area in the geographic coordinate system is determined according to the traffic sign information.
In step S805, a fourth reference orientation of the first camera is determined according to the third direction and the extension direction. For example, FIG. 9 shows a schematic diagram of an image frame according to some embodiments of the application. Step S801 may identify the road area 901 in FIG. 9. Step S802 may determine that the extension direction of the road is 902; the angle between the extension direction 902 and the Y coordinate axis is c. Step S803 may detect the traffic sign information "xx East Road" from the traffic sign 903. On this basis, step S804 may determine, according to the traffic sign information, that the third direction of the road area 901 is due east. Step S805 may then determine, according to the extension direction 902 and the third direction, that the fourth reference orientation (i.e., the Y coordinate axis direction) makes an angle c with due east.
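A sketch of how steps S803 to S805 might be combined, assuming the direction word on the traffic sign has already been normalized to an English keyword; the keyword table, the sign convention of the angle c, and the function interface are illustrative assumptions.

```python
from typing import Optional

# Geographic heading implied by a direction word on the sign, in degrees
# counter-clockwise from due east (keywords and values are assumptions).
SIGN_DIRECTION_DEG = {"east": 0.0, "north": 90.0, "west": 180.0, "south": 270.0}

def fourth_reference_orientation(sign_text: str,
                                 extension_angle_from_y_deg: float) -> Optional[float]:
    """Fourth reference orientation: the image Y-axis (viewing) direction is
    offset by the angle c from the road's geographic direction taken from the
    sign.  The sign of the offset is a convention assumed in this sketch."""
    for word, geo_deg in SIGN_DIRECTION_DEG.items():
        if word in sign_text.lower():
            return (geo_deg + extension_angle_from_y_deg) % 360.0
    return None
```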
In summary, according to the visual field generating method 800 of the present application, the orientation of the first camera can be determined by identifying traffic sign information and determining the extending direction of the road area.
Fig. 10 illustrates a flow chart of a method 1000 of generating a visual field according to some embodiments of the application. The method 1000 may be performed, for example, by the computing device 120 or the first camera.
As shown in fig. 10, in step S1001, a static object in the image is detected, and an orientation of the static object in an image coordinate system is determined. Here, the static object is, for example, a building or a second camera. The second camera is another camera that can be captured in the image of the first camera.
In step S1002, a fourth direction of the static object in the geographic coordinate system is acquired.
In step S1003, a fifth reference orientation of the first camera is determined according to the fourth direction and the orientation of the static object in the image coordinate system.
In some embodiments, the static object is a building, and step S1001 may detect a building area in the image and determine an orientation of the building in the image coordinate system in the building area. For example, the building area is a residential cell. Step S1001 may identify the orientation of the architectural area in the image coordinate system. Step S1002 may perform semantic analysis on the building area. For example, step S1002 may determine that a balcony is included in the building area. Here, the balcony is usually arranged to face south. On this basis, step S1002 may determine that the fourth direction of the building area in the geographic coordinate system is south-facing. In step S1003, a fifth reference orientation of the first camera is determined from the fourth direction of the building area and the orientation in the image coordinate system.
In some embodiments, the static object is the second camera. Step S1001 may detect the second camera in the image and determine the orientation of the second camera in the image coordinate system by image analysis. Step S1002 may then acquire the fourth direction of the captured second camera in the geographic coordinate system. For example, step S1002 may query the identities of nearby cameras according to the geographic position of the first camera capturing the image, and take the queried identity as the identity of the captured second camera. On this basis, step S1002 may query the fourth direction of the captured second camera in the geographic coordinate system according to that identity. In this way, step S1003 may determine a fifth reference orientation of the first camera according to the fourth direction and the orientation of the captured second camera in the image coordinate system.
In summary, according to the method 1000 for generating a visual field of the present application, the orientation of the first camera may be determined according to the orientation of the static object in the image coordinate system and the orientation of the static object in the geographic coordinate system.
Fig. 11 illustrates a flow chart of a method 1100 of visual field generation according to some embodiments of the application. The method 1100 may be performed, for example, by the computing device 120 or the first camera.
As shown in fig. 11, in step S1101, the geographical position of the first camera is acquired. Here, the geographic location may be, for example, latitude and longitude. In some embodiments, if the longitude and latitude of the first camera cannot be obtained, step S1101 may replace the longitude and latitude of the first camera with the longitude and latitude of the region where the first camera is located.
In step S1102, the azimuth angle of the sun is determined from the geographical position. The azimuth angle of the sun is the angle between the projection of the sun's rays on the ground plane and the local meridian; it can be approximated as the angle between the shadow cast on the ground by a vertical line and due south.
In step S1103, the region of the target object and the shadow region of the target object under the sun's rays in the image acquired by the first camera are detected. Here, the target object may be, for example, a pedestrian, a utility pole, or an automobile in the scene captured by the first camera.
In step S1104, a shadow direction of the shadow region in the image coordinate system is determined from the region of the target object and the shadow region.
In step S1105, a first reference orientation of the first camera is determined based on the azimuth angle and the shadow direction of the sun.
More specific embodiments of steps S1101-S1105 are consistent with the method 200 and are not described in detail herein.
In step S1106, an electronic map containing the geographic location of the first camera is acquired. Here, step S1106 may, for example, acquire the electronic map of the area around the first camera from the map server 130 according to the latitude and longitude.
In step S1107, in the electronic map, the target road in the vicinity of the first camera and the first direction of the target road in the geographic coordinate system are determined.
In step S1108, the image frame sequence acquired by the first camera is subject to target tracking to determine the moving direction of the tracked target, and the moving direction is taken as the second direction of the target road in the image coordinate system.
In step S1109, a second reference orientation of the first camera is determined according to the mapping relationship between the first direction and the second direction.
More specific embodiments of steps S1106-S1109 are consistent with method 400 and are not described in detail herein.
In step S1110, a building area in the image, and sign information corresponding to the building area are detected. Here, the sign information is sign information of an object such as a hotel, a shop, or a bank, for example.
In step S1111, the orientation of the building area in the image is determined.
In step S1112, landmark buildings and azimuth information of landmark buildings corresponding to the sign information are queried from the electronic map.
In step S1113, a third reference orientation of the first camera is determined from the orientation and azimuth information of the building area in the image.
More specific embodiments of steps S1110-S1113 are consistent with method 600 and are not described in detail herein.
In step S1114, a road area in the image captured by the first camera is detected.
In step S1115, the direction in which the road area extends in the image coordinate system is determined.
In step S1116, traffic sign information corresponding to the road area is detected.
In step S1117, a third direction of the road area in the geographic coordinate system is determined according to the traffic sign information.
In step S1118, a fourth reference orientation of the first camera is determined based on the third direction and the extension direction.
More specific embodiments of steps S1114-S1118 are consistent with method 800 and are not described in detail herein.
In step S1119, a static object in an image is detected, and the orientation of the static object in the image coordinate system is determined. Here, the static object is, for example, a building or a second camera.
In step S1120, a fourth direction of the static object in the geographic coordinate system is acquired.
In step S1121, a fifth reference orientation of the first camera is determined according to the fourth direction and the orientation of the static object in the image coordinate system.
More specific embodiments of steps S1119-S1121 are consistent with method 1000 and are not described in detail herein.
In some embodiments, the method 1100 may further include step S1122 of weighting and summing at least two of the first reference orientation, the second reference orientation, the third reference orientation, the fourth reference orientation, and the fifth reference orientation to obtain a calibration orientation of the first camera.
In some embodiments, step S1122 may weight sum the successfully acquired reference orientations. For example, the method 1100 successfully acquires the first reference orientation and the second reference orientation. Then, step S1122 may weight sum the first and second reference orientations.
In some embodiments, step S1122 may take the confidence of each reference orientation as a weight value. For example, step S1122 may take the confidence of the detection algorithm output when detecting the shadow region as the confidence of the first reference orientation. Step S1122 may take the confidence of the target detection algorithm output when detecting the road region as the confidence of the second reference orientation. Step S1122 may take the confidence level of the detection algorithm output at the time of detecting the sign information as the confidence level of the third reference orientation. Step S1122 may take the confidence level of the detection algorithm output at the time of detecting the traffic sign information as the confidence level of the fourth reference orientation. Step S1122 may take the confidence of the detection algorithm output at the time of detecting the static object as the confidence of the fifth reference orientation.
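One way to realize the weighted summation of step S1122 is sketched below. Because orientations wrap around at 360°, this sketch averages confidence-weighted unit vectors rather than raw angles; that specific choice, and the treatment of missing reference orientations as None, are assumptions of the sketch rather than the application's prescribed formula.

```python
import math
from typing import Optional, Sequence, Tuple

def fuse_orientations(candidates: Sequence[Tuple[Optional[float], float]]) -> Optional[float]:
    """Fuse (orientation_deg, confidence) pairs into a calibration orientation.

    Reference orientations that were not successfully acquired are passed as
    None and skipped; the remaining angles are combined as confidence-weighted
    unit vectors so that values near 0/360 degrees average correctly.
    """
    x = y = 0.0
    for theta, weight in candidates:
        if theta is None:
            continue
        x += weight * math.cos(math.radians(theta))
        y += weight * math.sin(math.radians(theta))
    if x == 0.0 and y == 0.0:
        return None
    return math.degrees(math.atan2(y, x)) % 360.0
```

For example, fuse_orientations([(theta1, 0.9), (theta2, 0.7), (None, 0.0)]) would combine only the first two reference orientations with their detection confidences as weights.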
In step S1123, the first camera's visual field in the electronic map is determined according to the calibration orientation. The visual field is, for example, a sector.
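For illustration, the visual field sector of step S1123 could be generated as a small polygon around the camera's geographic position, as in the sketch below; the horizontal field of view, the radius, the flat-Earth approximation, and the angle convention are assumptions and are not specified by the application.

```python
import math

def visual_field_sector(lat: float, lon: float, orientation_deg: float,
                        fov_deg: float = 90.0, radius_m: float = 50.0,
                        steps: int = 16):
    """Approximate the visual field as a sector polygon of (lat, lon) vertices.

    Assumes a locally flat Earth and an orientation measured counter-clockwise
    from due east; both the parameters and the convention are illustrative.
    """
    m_per_deg_lat = 111320.0
    m_per_deg_lon = 111320.0 * math.cos(math.radians(lat))
    vertices = [(lat, lon)]                       # apex of the sector at the camera
    for i in range(steps + 1):
        ang = math.radians(orientation_deg - fov_deg / 2.0 + fov_deg * i / steps)
        d_east = radius_m * math.cos(ang)
        d_north = radius_m * math.sin(ang)
        vertices.append((lat + d_north / m_per_deg_lat, lon + d_east / m_per_deg_lon))
    return vertices
```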
To sum up, through steps S1122 and S1123, the method 1100 may perform data fusion based on the orientations determined in the various ways above, thereby making the determined orientation of the first camera more accurate and improving the accuracy of the first camera's visual field.
In some embodiments, the method 1100 may further include step S1124 of determining a target monitoring area in the image captured by the first camera. Here, depending on different application scenarios, embodiments of the present application may determine different monitoring objects. The monitoring object is an object such as a vehicle or a pedestrian. For monitoring the application scenario of the vehicle, step S1124 may detect the position of the vehicle in the image, i.e. determine the area to which the vehicle corresponds. For the application scenario of monitoring pedestrians, step S1124 may detect an area corresponding to a pedestrian in the image.
In step S1125, an adjustment parameter for the pitch angle of the first camera is determined according to the position of the target detection area in the image. For example, suppose the region corresponding to the vehicle is at the upper edge of the image. Step S1125 may determine an adjustment parameter that causes the first camera to increase its pitch angle so as to align the field of view of the first camera with the vehicle on the road. If the first camera adjusts its pitch angle according to the adjustment parameter, the vehicle in the image taken by the first camera will move from the upper edge toward the middle of the image.
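A rough sketch of how the adjustment parameter of step S1125 might be derived from the vertical position of the target region, assuming the camera's vertical field of view is known and a simple proportional mapping is acceptable; the sign convention and the interface are assumptions of this sketch.

```python
from typing import Tuple

def pitch_adjustment_deg(target_box: Tuple[float, float, float, float],
                         image_height: int, vertical_fov_deg: float) -> float:
    """Rough pitch adjustment that would move the target region toward the
    image center.  target_box is (x0, y0, x1, y1) in pixels; a positive value
    means 'tilt up' under the convention assumed in this sketch."""
    target_center_y = 0.5 * (target_box[1] + target_box[3])
    # Vertical offset of the target from the image center, as a fraction of
    # the frame, mapped linearly onto the vertical field of view.
    offset_fraction = (image_height / 2.0 - target_center_y) / image_height
    return offset_fraction * vertical_fov_deg
```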
In summary, by analyzing the position of the monitoring object in the image in steps S1124 and S1125, the method 1100 can determine the adjustment parameter for the pitch angle of the first camera, so that the angle at which the first camera shoots the monitoring object can be optimized.
Fig. 12 illustrates a schematic diagram of a visual field generating apparatus 1200 according to some embodiments of the application. The apparatus 1200 may be deployed in the computing device 120 or the first camera, for example.
As shown in fig. 12, the visual field generating apparatus 1200 includes: a position acquisition unit 1201, a sun azimuth determination unit 1202, a shadow detection unit 1203, a shadow direction determination unit 1204, and an orientation determination unit 1205.
The position acquisition unit 1201 acquires the geographic position of the first camera.
A sun azimuth determination unit 1202 determines the azimuth angle of the sun from the geographical position.
The shadow detection unit 1203 detects an area of the target object and a shadow area of the target object under the sun's rays in the image acquired by the first camera.
A shadow direction determining unit 1204 that determines a shadow direction of the shadow region in an image coordinate system, based on the region of the target object and the shadow region.
An orientation determining unit 1205 determines a first reference orientation of the first camera according to the azimuth angle and the shadow direction of the sun. More specific embodiments of apparatus 1200 are consistent with method 200 and are not described in detail herein.
In summary, without acquiring key location information in the electronic map (such as information about shops, banks, or hotels around the first camera), the visual field generating apparatus 1200 according to the embodiment of the present application may analyze the shadow direction of the target object in the image and then determine the orientation of the first camera from the shadow direction and the azimuth angle of the sun. On this basis, when the electronic map is acquired, the embodiment of the application can generate the visual field of the first camera in the electronic map according to the orientation of the first camera.
Fig. 13 illustrates a schematic diagram of a visual field generating apparatus 1300 according to some embodiments of the application. The apparatus 1300 may be deployed in the computing device 120 or the first camera, for example.
As shown in fig. 13, the apparatus 1300 may include: a first reference orientation determination unit 1301, a second reference orientation determination unit 1302, a third reference orientation determination unit 1303, a fourth reference orientation determination unit 1304, a fifth reference orientation determination unit 1305, a calibration unit 1306, and an area recommendation unit 1307.
The first reference orientation determining unit 1301 may, for example, perform operations consistent with the method 200. The second reference orientation determining unit 1302 may perform operations consistent with the method 400. The third reference orientation determining unit 1303 may perform operations consistent with the method 600. The fourth reference orientation determining unit 1304 may perform operations consistent with the method 800. The fifth reference orientation determining unit 1305 may perform operations consistent with the method 1000.
The calibration unit 1306 performs weighted summation on at least two reference orientations among the first reference orientation, the second reference orientation, the third reference orientation, the fourth reference orientation, and the fifth reference orientation, to obtain a calibration orientation of the first camera. On this basis, the calibration unit 1306 determines the visual field of the first camera in the electronic map according to the calibration orientation. Here, the calibration unit 1306 performs data fusion based on the orientations determined in various ways, thereby making the determined first camera orientations more accurate and improving the accuracy of the visual field of the first camera.
The region recommending unit 1307 determines a target monitoring region in the image acquired by the first camera. Here, depending on different application scenarios, embodiments of the present application may determine different monitoring objects. The monitoring object is an object such as a vehicle or a pedestrian. The region recommending unit 1307 determines an adjustment parameter for the pitch angle of the first camera according to the position of the target detection region in the image.
Here, the region recommendation unit 1307 can determine an adjustment parameter for the pitch angle of the first camera by analyzing the position of the monitoring object in the image, so that the photographing angle of the monitoring object by the first camera can be optimized.
FIG. 14 illustrates a schematic diagram of a computing device according to some embodiments of the application. As shown in fig. 14, the computing device includes one or more processors (CPUs) 1402, a communication module 1404, a memory 1406, a user interface 1410, and a communication bus 1408 for interconnecting these components.
The processor 1402 may receive and transmit data via the communication module 1404 to enable network communication and/or local communication.
The user interface 1410 includes one or more output devices 1412 that include one or more speakers and/or one or more visual displays. The user interface 1410 also includes one or more input devices 1414. The user interface 1410 may receive an instruction of a remote controller, for example, but is not limited thereto.
Memory 1406 may be a high-speed random access memory such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; or non-volatile memory such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices.
Memory 1406 stores a set of instructions executable by processor 1402, including:
an operating system 1416 including programs for handling various basic system services and for performing hardware related tasks;
the application 1418, including various programs for implementing the above-described visual field generating method, may include the visual field generating apparatus 1200 or 1300, for example.
In addition, each of the embodiments of the present application can be realized by a data processing program executed by a data processing apparatus such as a computer. Obviously, the data processing program constitutes the application. In addition, a data processing program typically stored in one storage medium is executed by directly reading the program out of the storage medium or by installing or copying the program into a storage device (such as a hard disk and/or a memory) of the data processing apparatus. Therefore, such a storage medium also constitutes the present application. The storage medium may use any type of recording means, such as paper storage medium (e.g., paper tape, etc.), magnetic storage medium (e.g., floppy disk, hard disk, flash memory, etc.), optical storage medium (e.g., CD-ROM, etc.), magneto-optical storage medium (e.g., MO, etc.), etc.
The present application also discloses a nonvolatile storage medium in which a program is stored. The program comprises instructions which, when executed by a processor, cause a computing device to perform a method of generating a visual field according to the application.
In addition, the method steps of the present application may be implemented not only by data processing programs but also by hardware, such as logic gates, switches, application-specific integrated circuits (ASICs), programmable logic controllers, and embedded microcontrollers. Such hardware capable of carrying out the methods of the application may therefore also constitute the application.
The foregoing description of the preferred embodiments of the application is not intended to be limiting, but rather is to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the application.

Claims (7)

1. A method of generating a visual field, comprising:
acquiring the geographic position of a first camera;
Determining an azimuth angle of the sun according to the geographic position;
Detecting a region of a target object and a shadow region of the target object under the sun ray in an image acquired by a first camera;
Determining a shadow direction of the shadow region in an image coordinate system according to the region of the target object and the shadow region;
Determining a first reference orientation of the first camera according to the azimuth angle and the shadow direction of the sun;
weighting and summing the second reference orientation, the fourth reference orientation and the fifth reference orientation together with the first reference orientation to obtain a calibration orientation of the first camera;
determining a visual field of the first camera in an electronic map according to the calibration orientation;
wherein the manner of determining the second reference orientation comprises:
acquiring an electronic map containing the geographic position;
Determining a target road near the first camera and a first direction of the target road in a geographic coordinate system in the electronic map;
Performing target tracking on the image frame sequence acquired by the first camera to determine the moving direction of a tracked target, and taking the moving direction as a second direction of the target road in an image coordinate system; wherein the tracking target travels in the target road direction;
determining a second reference orientation of the first camera according to the mapping relation between the first direction and the second direction;
the means for determining the fourth reference orientation comprises:
detecting a road area in the image;
Detecting the moving direction of a moving object in the road area based on the image frame sequence acquired by the first camera, and taking the moving direction of the moving object as the extending direction of the road area in an image coordinate system;
detecting traffic sign information corresponding to a road area in the image;
Determining a third direction of the road area in a geographic coordinate system according to the traffic sign information;
determining a fourth reference orientation of the first camera according to the third direction and the extension direction;
the means for determining the fifth reference orientation comprises:
detecting a static object in the image, and determining the orientation of the static object in an image coordinate system, wherein the static object is a building;
performing semantic analysis on the building, wherein the building comprises a balcony and the balcony faces south;
Acquiring a fourth direction of the static object in a geographic coordinate system;
And determining a fifth reference orientation of the first camera according to the fourth direction and the orientation of the static object in an image coordinate system.
2. The visual field generating method according to claim 1, further comprising:
detecting a building area in the image and sign information corresponding to the building area;
Determining an orientation of the building area in the image;
Inquiring landmark buildings and azimuth information of landmark buildings corresponding to the signboard information from an electronic map;
A third reference orientation of the first camera is determined based on the orientation of the architectural area in the image and the orientation information.
3. The method of generating a visual field according to claim 2, wherein a third reference orientation is added to the weighted summation process when deriving the calibration orientation of the first camera.
4. The visual field generating method according to claim 1, further comprising:
determining a target monitoring area in the image;
and determining an adjusting parameter for the pitch angle of the first camera according to the position of the target detection area in the image.
5. A visual field generating apparatus, comprising:
A position acquisition unit that acquires a geographic position of the first camera;
a sun azimuth determining unit for determining an azimuth angle of the sun according to the geographic position;
The shadow detection unit is used for detecting the area of the target object in the image acquired by the first camera and the shadow area of the target object under the sun ray;
A shadow direction determining unit that determines a shadow direction of the shadow region in an image coordinate system according to a region of the target object and the shadow region;
an orientation determining unit that determines a first reference orientation of the first camera according to an azimuth angle and a shadow direction of the sun;
A second reference orientation determining unit for obtaining an electronic map containing the geographic position; determining a target road near the first camera and a first direction of the target road in a geographic coordinate system in the electronic map; performing target tracking on the image frame sequence acquired by the first camera to determine the moving direction of a tracked target, and taking the moving direction as a second direction of the target road in an image coordinate system; determining a second reference orientation of the first camera according to the mapping relation between the first direction and the second direction; wherein the tracking target travels in the target road direction;
a fourth reference orientation determination unit that detects a road area in the image; detecting the moving direction of a moving object in the road area based on the image frame sequence acquired by the first camera, and taking the moving direction of the moving object as the extending direction of the road area in an image coordinate system; detecting traffic sign information corresponding to a road area in the image; determining a third direction of the road area in a geographic coordinate system according to the traffic sign information; determining a fourth reference orientation of the first camera according to the third direction and the extension direction;
A fifth reference orientation determining unit that detects a static object in the image and determines an orientation of the static object in an image coordinate system, the static object being a building; performs semantic analysis on the building, the building comprising a balcony and the balcony facing south; acquires a fourth direction of the static object in a geographic coordinate system; and determines a fifth reference orientation of the first camera according to the fourth direction and the orientation of the static object in the image coordinate system;
the calibration unit is used for carrying out weighted summation on the second reference orientation, the fourth reference orientation and the fifth reference orientation and the first reference orientation to obtain a calibration orientation of the first camera; and determining the visual field of the first camera in the electronic map according to the calibration orientation.
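A minimal sketch of the calibration unit's weighted summation, implemented here as a weighted circular mean so that reference orientations near the 0°/360° wrap-around combine correctly; the equal weights and all names are assumptions for illustration, not values from the patent.

```python
import math

# Minimal sketch of combining the reference orientations by weighted summation,
# done as a weighted circular mean; weights and names are illustrative only.
def calibration_orientation(reference_orientations_deg, weights):
    """Weighted circular mean of azimuths given in degrees clockwise from north."""
    north = sum(w * math.cos(math.radians(a))
                for a, w in zip(reference_orientations_deg, weights))
    east = sum(w * math.sin(math.radians(a))
               for a, w in zip(reference_orientations_deg, weights))
    return math.degrees(math.atan2(east, north)) % 360.0

# Example: first, second, fourth and fifth reference orientations, equal weights.
refs = [92.0, 88.0, 95.0, 90.0]
print(calibration_orientation(refs, [0.25, 0.25, 0.25, 0.25]))  # ~91.25
```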
6. A computing device, comprising:
a memory;
a processor; and
a program stored in the memory and configured to be executed by the processor, the program comprising instructions for performing the visual field generation method of any one of claims 1 to 4.
7. A storage medium storing a program, the program comprising instructions that, when executed by a computing device, cause the computing device to perform the visual field generation method of any one of claims 1 to 4.
CN202010145503.3A 2020-03-05 2020-03-05 Visual field generation method, device, computing equipment and storage medium Active CN113362392B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010145503.3A CN113362392B (en) 2020-03-05 2020-03-05 Visual field generation method, device, computing equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113362392A (en) 2021-09-07
CN113362392B (en) 2024-04-23

Family

ID=77523554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010145503.3A Active CN113362392B (en) 2020-03-05 2020-03-05 Visual field generation method, device, computing equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113362392B (en)

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06167333A (en) * 1992-11-27 1994-06-14 Mitsubishi Electric Corp Device for determining absolute azimuth
JP2005012415A (en) * 2003-06-18 2005-01-13 Matsushita Electric Ind Co Ltd System and server for monitored video image monitoring and monitored video image generating method
JP2007259002A (en) * 2006-03-23 2007-10-04 Fujifilm Corp Image reproducing apparatus, its control method, and its control program
CN102177719A (en) * 2009-01-06 2011-09-07 松下电器产业株式会社 Apparatus for detecting direction of image pickup device and moving body comprising same
JP2014185908A (en) * 2013-03-22 2014-10-02 Pasco Corp Azimuth estimation device and azimuth estimation program
CN104281840A (en) * 2014-09-28 2015-01-14 无锡清华信息科学与技术国家实验室物联网技术中心 Method and device for positioning and identifying building based on intelligent terminal
CN104639824A (en) * 2013-11-13 2015-05-20 杭州海康威视***技术有限公司 Electronic map based camera control method and device
CN104717462A (en) * 2014-01-03 2015-06-17 杭州海康威视***技术有限公司 Supervision video extraction method and device
CN105389375A (en) * 2015-11-18 2016-03-09 福建师范大学 Viewshed based image index setting method and system, and retrieving method
CN106331618A (en) * 2016-08-22 2017-01-11 浙江宇视科技有限公司 Method and device for automatically confirming visible range of camera
CN108038897A (en) * 2017-12-06 2018-05-15 北京像素软件科技股份有限公司 Shadow map generation method and device
CN108921900A (en) * 2018-07-18 2018-11-30 江苏实景信息科技有限公司 A kind of method and device in the orientation of monitoring video camera
CN108965687A (en) * 2017-05-22 2018-12-07 阿里巴巴集团控股有限公司 Shooting direction recognition methods, server and monitoring method, system and picture pick-up device
KR20190063350A (en) * 2017-11-29 2019-06-07 한국전자통신연구원 Method of detecting a shooting direction and apparatuses performing the same
CN110176030A (en) * 2019-05-24 2019-08-27 中国水产科学研究院 A kind of autoegistration method, device and the electronic equipment of unmanned plane image
CN110243364A (en) * 2018-03-07 2019-09-17 杭州海康机器人技术有限公司 Unmanned plane course determines method, apparatus, unmanned plane and storage medium
CN110458895A (en) * 2019-07-31 2019-11-15 腾讯科技(深圳)有限公司 Conversion method, device, equipment and the storage medium of image coordinate system
CN111526291A (en) * 2020-04-29 2020-08-11 济南博观智能科技有限公司 Method, device and equipment for determining monitoring direction of camera and storage medium
CN112101339A (en) * 2020-09-15 2020-12-18 北京百度网讯科技有限公司 Map interest point information acquisition method and device, electronic equipment and storage medium
WO2022217877A1 (en) * 2021-04-12 2022-10-20 浙江商汤科技开发有限公司 Map generation method and apparatus, and electronic device and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9488488B2 (en) * 2010-02-12 2016-11-08 Apple Inc. Augmented reality maps
JP6271953B2 (en) * 2013-11-05 2018-01-31 キヤノン株式会社 Image processing apparatus and image processing method
US20190164309A1 (en) * 2017-11-29 2019-05-30 Electronics And Telecommunications Research Institute Method of detecting shooting direction and apparatuses performing the same

Also Published As

Publication number Publication date
CN113362392A (en) 2021-09-07

Similar Documents

Publication Publication Date Title
US9497581B2 (en) Incident reporting
US9189853B1 (en) Automatic pose estimation from uncalibrated unordered spherical panoramas
Manweiler et al. Satellites in our pockets: an object positioning system using smartphones
EP3593324B1 (en) Target detection and mapping
KR102200299B1 (en) A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof
US8264537B2 (en) Photogrammetric networks for positional accuracy
CN109596121B (en) Automatic target detection and space positioning method for mobile station
WO2014011346A1 (en) Sensor-aided wide-area localization on mobile devices
US11682103B2 (en) Selecting exterior images of a structure based on capture positions of indoor images associated with the structure
US20130135446A1 (en) Street view creating system and method thereof
CN106537409A (en) Determining compass orientation of imagery
Masiero et al. Toward the use of smartphones for mobile mapping
KR100679864B1 (en) Cellular phone capable of displaying geographic information and a method thereof
Elias et al. Photogrammetric water level determination using smartphone technology
US11703820B2 (en) Monitoring management and control system based on panoramic big data
US20160171004A1 (en) Method and system for improving the location precision of an object taken in a geo-tagged photo
CN113362392B (en) Visual field generation method, device, computing equipment and storage medium
US11481920B2 (en) Information processing apparatus, server, movable object device, and information processing method
Lee et al. Distant object localization with a single image obtained from a smartphone in an urban environment
CN110162658A (en) Position information acquisition method, device, terminal and storage medium
Jeon et al. Design of positioning DB automatic update method using Google tango tablet for image based localization system
Moun et al. Localization and building identification in outdoor environment for smartphone using integrated GPS and camera
Chang et al. Augmented reality services of photos and videos from filming sites using their shooting locations and attitudes
Wang et al. Fisheye‐Lens‐Based Visual Sun Compass for Perception of Spatial Orientation
Etzold et al. MIPos: towards mobile image positioning in mixed reality web applications based on mobile sensors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant