WO2022205208A1 - Photographing method and apparatus, computer-readable storage medium, and terminal device - Google Patents

Photographing method and apparatus, computer-readable storage medium, and terminal device

Info

Publication number
WO2022205208A1
Authority
WO
WIPO (PCT)
Prior art keywords
subject
route
images
image
circumnavigation
Prior art date
Application number
PCT/CN2021/084724
Other languages
English (en)
French (fr)
Inventor
杨志华
张明磊
梁家斌
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2021/084724 priority Critical patent/WO2022205208A1/zh
Priority to CN202180078855.4A priority patent/CN116745579A/zh
Publication of WO2022205208A1 publication Critical patent/WO2022205208A1/zh
Priority to US18/374,553 priority patent/US20240025571A1/en

Classifications

    • B64U 20/87 Mounting of imaging devices, e.g. mounting of gimbals (constructional aspects of UAVs; arrangement of on-board electronics)
    • B64C 39/024 Aircraft not otherwise provided for, characterised by special use, of the remote controlled vehicle type, i.e. RPV
    • G01C 11/02 Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G05D 1/101 Simultaneous control of position or course in three dimensions, specially adapted for aircraft
    • G05D 1/243 Arrangements for determining position or orientation using means capturing signals occurring naturally from the environment, e.g. ambient optical, acoustic, gravitational or magnetic signals
    • G05D 1/2465 Arrangements for determining position or orientation using environment maps, e.g. simultaneous localisation and mapping [SLAM], using a 3D model of the environment
    • G05D 1/689 Pointing payloads towards fixed or moving targets
    • B64U 2101/18 UAVs specially adapted for conventional or electronic warfare, for dropping bombs or firing ammunition
    • B64U 2101/30 UAVs specially adapted for imaging, photography or videography
    • B64U 2201/10 UAVs characterised by their flight controls: autonomous, i.e. navigating independently from ground or air stations, e.g. using inertial navigation systems [INS]
    • G05D 2105/89 Specific applications of the controlled vehicles: information gathering for inspecting structures, e.g. wind mills, bridges, buildings or vehicles
    • G05D 2109/20 Types of controlled vehicles: aircraft, e.g. drones
    • G05D 2111/10 Details of signals used for control of position, course, altitude or attitude: optical signals

Definitions

  • the present application relates to the technical field of image shooting, and in particular, to a shooting method, apparatus, computer-readable storage medium, and terminal device
  • the 3D reconstruction technology based on UAV images is increasingly used in the refined modeling of cultural relics, electrical towers, signal towers, bridges and other objects.
  • the drone can be controlled to fly along the planned route, and images of the subject can be captured during the flight, and a three-dimensional model of the subject can be established using the captured images.
  • it takes a lot of time for drones to take images, and the work efficiency still needs to be improved.
  • the embodiments of the present application provide a photographing method, a device, a computer-readable storage medium, and a model acquisition method.
  • One of the purposes of the embodiments of the present application is to improve the efficiency of photographing images by a drone.
  • a first aspect of the embodiments of the present application provides a shooting method, including:
  • the images collected by the camera while the drone is on the first circumnavigation route and the images collected while the drone is on the second circumnavigation route include image points of the same name of the subject, and the multiple images are used to establish a three-dimensional model of the subject.
  • a second aspect of the embodiments of the present application provides a model acquisition method, including:
  • the initial model including position information of the surface points of the subject
  • Plan a second route based on the location information and the preset distance, the second route includes a plurality of waypoints, and the distances between the plurality of the waypoints and the surface of the subject are approximately equal;
  • the initial model of the subject is optimized based on the supplemental image.
  • a third aspect of the embodiments of the present application provides a shooting method, including:
  • the drone is controlled to move along the second circumnavigation route, and the subject is photographed during the movement, so as to obtain a plurality of second images of the subject; the first images and the second images are used to establish a three-dimensional model of the subject.
  • a fourth aspect of an embodiment of the present application provides a photographing device, comprising: a processor and a memory storing a computer program, where the processor implements the following steps when executing the computer program:
  • the images collected by the camera while the drone is on the first circumnavigation route and the images collected while the drone is on the second circumnavigation route include image points of the same name of the subject, and the multiple images are used to establish a three-dimensional model of the subject.
  • a fifth aspect of an embodiment of the present application provides an apparatus for obtaining a model, including: a processor and a memory storing a computer program, where the processor implements the following steps when executing the computer program:
  • the initial model including position information of the surface points of the subject
  • Plan a second route based on the location information and the preset distance, the second route includes a plurality of waypoints, and the distances between the plurality of the waypoints and the surface of the subject are approximately equal;
  • the initial model of the subject is optimized based on the supplemental image.
  • a sixth aspect of an embodiment of the present application provides a photographing device, comprising: a processor and a memory storing a computer program, where the processor implements the following steps when executing the computer program:
  • the drone is controlled to move along the second circumnavigation route, and the subject is photographed during the movement, so as to obtain a plurality of second images of the subject; the first images and the second images are used to establish a three-dimensional model of the subject.
  • a seventh aspect of the embodiments of the present application provides a terminal device, including:
  • a communication module for establishing a connection with the drone
  • a processor and a memory storing a computer program, where the processor implements the following steps when executing the computer program:
  • the images collected by the camera while the drone is on the first circumnavigation route and the images collected while the drone is on the second circumnavigation route include image points of the same name of the subject, and the multiple images are used to establish a three-dimensional model of the subject.
  • An eighth aspect of the embodiments of the present application provides a terminal device, including:
  • a communication module for establishing a connection with the drone
  • a processor and a memory storing a computer program, where the processor implements the following steps when executing the computer program:
  • the initial model including position information of the surface points of the subject
  • Plan a second route based on the location information and the preset distance, the second route includes a plurality of waypoints, and the distances between the plurality of the waypoints and the surface of the subject are approximately equal;
  • the initial model of the subject is optimized based on the supplemental image.
  • a ninth aspect of an embodiment of the present application provides a terminal device, including:
  • a communication module for establishing a connection with the drone
  • a processor and a memory storing a computer program, where the processor implements the following steps when executing the computer program:
  • the drone is controlled to move along the second circumnavigation route, and the subject is photographed during the movement, so as to obtain a plurality of second images of the subject; the first images and the second images are used to establish a three-dimensional model of the subject.
  • a tenth aspect of the embodiments of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, implements the shooting method provided in the first aspect.
  • An eleventh aspect of embodiments of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, implements the model acquisition method provided in the second aspect.
  • a twelfth aspect of an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, implements the shooting method provided in the third aspect.
  • In the embodiments of the present application, a first circumnavigation route and a second circumnavigation route are planned. Since the images collected by the camera while the drone is on the first circumnavigation route and the images collected while the drone is on the second circumnavigation route include image points of the same name of the subject, the images captured on the second circumnavigation route can be matched and connected with the images captured on the first circumnavigation route, so the required degree of overlap between the images captured on the second circumnavigation route can be reduced, thereby reducing the number of images to be captured.
  • In addition, the shooting distance corresponding to the second circumnavigation route is smaller than the shooting distance corresponding to the first circumnavigation route, so the images collected on the second circumnavigation route retain more details of the surface of the subject, which gives the established three-dimensional model sufficient accuracy. The shooting method provided by the embodiments of the present application can therefore reduce the number of images to be shot while ensuring that the accuracy of the three-dimensional model meets the requirements, and improve the operation efficiency of the UAV.
  • FIG. 1 is a flowchart of a photographing method provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a route provided by an embodiment of the present application.
  • FIG. 3 is a top view of the first circumnavigation route of the imitation surface provided by the embodiment of the present application.
  • FIG. 4 is a side view of the second circumnavigation route of the imitation surface provided by the embodiment of the present application.
  • FIG. 5 is a schematic diagram of a route when planning a plurality of first circumnavigation routes according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a route provided by an embodiment of the present application when two second circumnavigation routes are planned.
  • FIG. 7 is a side view of three planned routes provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of feature point matching using an initial model provided by an embodiment of the present application.
  • FIG. 9 is a flowchart of a model acquisition method provided by an embodiment of the present application.
  • FIG. 10 is a flowchart of a photographing method provided by an embodiment of the present application.
  • FIG. 11 is a first schematic diagram of an interactive interface provided by an embodiment of the present application.
  • FIG. 12 is a second schematic diagram of an interactive interface provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a photographing apparatus provided by an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of an apparatus for obtaining a model provided by an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • the 3D reconstruction technology based on UAV images is increasingly used in the refined modeling of cultural relics, electrical towers, signal towers, bridges and other objects.
  • the drone can be controlled to fly along the planned route, and images of the subject can be captured during the flight, and a three-dimensional model of the subject can be established using the captured images.
  • it takes a lot of time for drones to take images, and the work efficiency still needs to be improved.
  • A route that is closer to the surface of the subject can be planned. When the drone moves along this route and photographs the subject, since the distance between the drone and the subject is relatively close, the captured images can retain more details of the surface of the subject, which ensures that the established model has higher accuracy.
  • However, each captured image can only cover a small area. In order to completely cover the entire subject and meet the overlap requirements between images, the drone needs to capture a large number of images, which consumes a lot of time and results in low work efficiency.
  • FIG. 1 is a flowchart of the photographing method provided by the embodiment of the present application. The method includes the following steps:
  • S102 Acquire position information of a subject.
  • S104 Plan a first circumnavigation route and a second circumnavigation route for photographing the subject based on the position information.
  • S106 Control the UAV to move along the first circumnavigation route and the second circumnavigation route, respectively.
  • the position information of the subject can indicate the position of the subject.
  • the location information of the photographed object may be the geographic coordinates of the location of the photographed object.
  • the location information of the subject may be input or selected by the user.
  • the location information of the subject can be obtained by the drone through the sensor perception.
  • the drone can be equipped with a radar, and the location information of the subject can be obtained through radar detection.
  • the drone can obtain the position information of the subject by means of visual positioning.
  • the first circumnavigation route and the second circumnavigation route may be planned based on the position information of the subject.
  • the first circumnavigation route and the second circumnavigation route can be used for taking pictures around the subject.
  • the first circumnavigation route and the second circumnavigation route may take the location of the subject as the circumcenter.
  • the drone can be controlled to move along the first circumnavigation route and the second circumnavigation route, wherein the drone can maintain a first distance from the subject while moving along the first circumnavigation route and a second distance from the subject while moving along the second circumnavigation route, where the first distance may be greater than the second distance.
  • the first circumnavigation route may include multiple waypoints, and the distance from a waypoint on the first circumnavigation route to the subject may be the first distance; that is, the shooting distance corresponding to the first circumnavigation route may be the first distance.
  • the second circumnavigation route may include multiple waypoints, and the distance from a waypoint on the second circumnavigation route to the subject may be the second distance; that is, the shooting distance corresponding to the second circumnavigation route may be the second distance.
  • Maintaining the first distance from the subject while the drone moves along the first circumnavigation route means that the distance between the drone and the subject is kept approximately at the first distance; it may be slightly larger or slightly smaller than the first distance. Likewise, maintaining the second distance from the subject while the drone moves along the second circumnavigation route means that the distance between the drone and the subject is kept approximately at the second distance; it may be slightly larger or slightly smaller than the second distance.
  • the UAV can be equipped with a camera.
  • the UAV can be controlled to photograph the subject with the mounted camera during the movement, so as to obtain multiple images of the subject.
  • While the drone moves along the first circumnavigation route or the second circumnavigation route, in one embodiment the drone can be controlled to capture one or more images at every preset time interval; in another embodiment, the drone can be controlled to capture one or more images every time it passes through a preset angle or a preset distance.
  • When setting the shooting interval, it can be set according to the imaging range of the camera and the required degree of image overlap.
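  • The following is a minimal sketch, not taken from the patent, of how a shooting interval could be derived from the camera's imaging range and a required image overlap; the pinhole-footprint formula, the function names, and all parameter values are illustrative assumptions.

```python
import math

def capture_spacing(shooting_distance_m, sensor_width_mm, focal_length_mm, overlap):
    """Approximate spacing between consecutive shots along the route, assuming a
    simple pinhole camera: footprint = distance * sensor_width / focal_length,
    then advance by the fraction of the footprint not covered by the overlap."""
    footprint_m = shooting_distance_m * sensor_width_mm / focal_length_mm
    return footprint_m * (1.0 - overlap)

def angular_interval_deg(spacing_m, circumnavigation_radius_m):
    """Angle the drone sweeps between shots on a circular route of a given radius."""
    return math.degrees(spacing_m / circumnavigation_radius_m)

# Illustrative numbers only: 20 m shooting distance, 13.2 mm sensor width,
# 8.8 mm focal length, 80% required overlap.
spacing = capture_spacing(20.0, 13.2, 8.8, 0.80)
print(round(spacing, 1), "m between shots;",
      round(angular_interval_deg(spacing, 20.0), 1), "degrees between shots")
```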
  • the image captured by the camera on the first circumnavigation route of the drone includes a first image area
  • the image captured by the drone on the second circumnavigation route by the camera includes a second image area
  • the first image area and the second image area correspond to the same position of the subject.
  • the subject is a high-voltage transmission tower structure
  • the first image area includes the imaging area where the tower body and the electric wire overlap
  • the second image area includes the same imaging area.
  • In one embodiment, controlling the UAV to photograph the subject with the camera mounted on the UAV during the movement, so as to obtain multiple images of the subject in which the images captured by the camera on the first circumnavigation route and the images captured by the camera on the second circumnavigation route include image points of the same name, includes: when the drone is at a first position on the first circumnavigation route, the camera captures a first image; when the drone is at a second position on the second circumnavigation route, the camera captures a second image; the first image and the second image include image points of the same name, and the first position and the second position have a preset relative positional relationship. For example, the first position and the second position are located in the same direction of the subject.
  • The images collected by the drone on the first circumnavigation route and the images collected on the second circumnavigation route may include imaging of the same area of the subject; that is, the images captured by the camera on the first circumnavigation route and the images captured by the camera on the second circumnavigation route include image points of the same name of the subject. Therefore, when these images are used for 3D reconstruction, the images captured on the first circumnavigation route can be matched with the images captured on the second circumnavigation route.
  • the multiple captured images can be used to build a three-dimensional model of the subject.
  • a multi-view geometric algorithm can be used to establish a three-dimensional point cloud model of the subject.
  • the three-dimensional model of the subject may be of various types, such as a point cloud model or a mesh model.
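  • The patent does not specify which multi-view geometric algorithm is used; as background only, a standard building block of such pipelines is linear (DLT) triangulation of an image point of the same name observed in two views. The sketch below is a generic illustration of that step, not the patented method.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: matched pixel coordinates
    (u, v) of the same-named image point in the two images."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the right singular vector of A associated
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```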
  • In the shooting method provided by this embodiment of the application, a first circumnavigation route and a second circumnavigation route are planned. Since the images collected by the camera while the drone is on the first circumnavigation route and the images collected while the drone is on the second circumnavigation route include image points of the same name of the subject, the images captured on the second circumnavigation route can be matched and connected with the images captured on the first circumnavigation route, so the required degree of overlap between the images captured on the second circumnavigation route can be reduced, thereby reducing the number of images to be captured.
  • In addition, the shooting distance corresponding to the second circumnavigation route is smaller than the shooting distance corresponding to the first circumnavigation route, so the images collected on the second circumnavigation route retain more details of the surface of the subject, which gives the established three-dimensional model sufficient accuracy. The shooting method provided by this embodiment of the application can therefore reduce the number of images to be shot while ensuring that the accuracy of the three-dimensional model meets the requirements, and improve the operation efficiency of the UAV.
  • the first circumnavigation route may include multiple waypoints; the multiple waypoints may be distributed in different directions of the subject, at approximately the same distance from the subject, and at approximately the same height.
  • FIG. 2 is a schematic diagram of a route provided by an embodiment of the present application.
  • the first circumnavigation course may also be referred to as a horizontal circumnavigation course.
  • When the drone is controlled to move along the first circumnavigation route, the drone can move around the subject on a horizontal plane at a certain height.
  • the second circumnavigation route may include multiple vertical route segments, and the multiple vertical route segments may be distributed in different directions of the subject.
  • Each vertical flight segment may include a plurality of waypoints distributed at different heights, and the projection positions of these waypoints on the horizontal plane may be approximately the same, please refer to FIG. 2 .
  • the second circumnavigation course may also be referred to as a vertical circumnavigation course.
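  • A minimal sketch of how the two kinds of routes described above could be generated, assuming the subject is abstracted as a cylinder centred at a known position; the function names, the evenly spaced bearings, and the fixed radius are illustrative assumptions rather than the patent's planning algorithm.

```python
import math

def horizontal_circumnavigation(center_xy, height, radius, n_waypoints):
    """First (horizontal) circumnavigation route: waypoints evenly distributed
    around the subject at approximately the same distance and the same height."""
    cx, cy = center_xy
    return [(cx + radius * math.cos(2 * math.pi * i / n_waypoints),
             cy + radius * math.sin(2 * math.pi * i / n_waypoints),
             height)
            for i in range(n_waypoints)]

def vertical_circumnavigation(center_xy, radius, heights, n_segments):
    """Second (vertical) circumnavigation route: several vertical route segments
    in different directions of the subject, each sweeping a set of heights while
    keeping the same horizontal projection."""
    cx, cy = center_xy
    segments = []
    for i in range(n_segments):
        bearing = 2 * math.pi * i / n_segments
        x = cx + radius * math.cos(bearing)
        y = cy + radius * math.sin(bearing)
        segments.append([(x, y, h) for h in heights])
    return segments
```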
  • the images collected by the drone on the first circumnavigation route can completely cover the surface of the subject in all directions.
  • the images captured by the drone in a certain vertical flight segment can cover the surface of the subject at various heights in a certain direction.
  • Through the first circumnavigation route, the drone can take images of the subject in all directions, and through a vertical route segment, the drone can take images of the subject at different heights in a specific direction.
  • Because the range covered by the images captured by the drone on the first circumnavigation route includes the range covered by the images captured on a vertical route segment, the images captured on the vertical route segment can be well matched with the images captured on the first circumnavigation route and meet the overlap requirement. The required image overlap between vertical route segments can therefore be greatly reduced (it can be lower than 20%), which eliminates the need to plan densely spaced vertical route segments, reduces the time spent on drone shooting, and improves operation efficiency.
  • Because the distribution of vertical route segments can be relatively sparse, the method can adapt to more complex scenes, and the scene adaptability is greatly improved. Moreover, the coverage of the subject is more comprehensive, and shooting blind spots can be avoided.
  • The shape of the planned first circumnavigation route may match the contour shape of the subject on the horizontal plane, so that the distance from each waypoint on the first circumnavigation route to the corresponding surface point of the subject may be the same, namely the first distance. A first circumnavigation route whose shape matches the contour shape of the subject on the horizontal plane can be referred to as a surface-following (imitation-surface) first circumnavigation route.
  • FIG. 3 is a top view of the first circumnavigation route of the imitation surface provided by the embodiment of the present application.
  • Similarly, the shape of a vertical route segment in the planned second circumnavigation route may match the contour shape of the subject on the vertical plane, so that the distance from each waypoint on the vertical route segment to the corresponding surface point of the subject may be the same, namely the second distance. A second circumnavigation route in which the shape of the vertical route segments matches the contour shape of the subject on the vertical plane can be referred to as a surface-following (imitation-surface) second circumnavigation route.
  • FIG. 4 is a side view of the second circumnavigation route of the imitation surface provided by the embodiment of the present application.
  • The surface-following first circumnavigation route can be planned according to the initial model of the subject and the set first distance, and the surface-following second circumnavigation route can be planned according to the initial model of the subject and the set second distance.
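  • One plausible way to plan such a surface-following route from the initial model is sketched below, under the assumption that the model provides sampled surface points together with outward surface normals; this offset-along-the-normal construction is an illustration, not the specific planning method claimed in the patent.

```python
import numpy as np

def surface_following_route(surface_points, surface_normals, standoff):
    """Place one waypoint per sampled surface point of the initial model, offset
    outward along the surface normal by the set shooting distance (standoff),
    so every waypoint is at approximately the same distance from the surface."""
    waypoints = []
    for point, normal in zip(surface_points, surface_normals):
        n = np.asarray(normal, dtype=float)
        n = n / np.linalg.norm(n)
        waypoints.append(np.asarray(point, dtype=float) + standoff * n)
    return waypoints
```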
  • the initial model of the subject may include position information of the surface points of the subject, so the distance between the waypoint and the surface points of the subject may be determined.
  • the initial model of the subject can be obtained by three-dimensional reconstruction using multiple primary images of the subject.
  • the initial model of the subject needs to be acquired in advance.
  • The primary images of the subject can be acquired in various ways. In one example, multiple primary images can be obtained by photographing the subject with a camera in advance; in another example, the drone can be controlled to photograph around the subject from a long distance to obtain multiple primary images.
  • When the primary images are captured by the drone at a long distance, the established initial model has low accuracy and can be called a coarse model.
  • FIG. 5 is a schematic diagram of a route when planning a plurality of first circumnavigation routes according to an embodiment of the present application.
  • the drone can be controlled to move along each first circumnavigation route and take images of the subject, so that multiple primary images that completely cover the subject can be obtained.
  • An initial model of the subject can be established from the plurality of primary images, and a surface-following second circumnavigation route can be planned using the initial model.
  • Because a plurality of first circumnavigation routes with relatively large shooting distances can be planned, the images collected by the drone on the first circumnavigation routes can cover a wide range of the scene; even if there are large intervals between the planned first circumnavigation routes, the image overlap between the routes can still meet the requirement. The relatively large intervals (that is, the relatively sparse distribution) between the multiple first circumnavigation routes improve the adaptability to complex scenes and reduce the requirement on the space of the scene.
  • the difference between the first distance corresponding to the first circumnavigation route and the second distance corresponding to the second circumnavigation route should be within a reasonable range.
  • the first distance and the second distance may satisfy a specific proportional relationship; for example, the second distance may be 1/2 of the first distance, so that if the first distance is recorded as D, the second distance can be recorded as 0.5D.
  • If the shooting distance of the first circumnavigation route is large, the shooting distance of the second circumnavigation route, although closer than that of the first circumnavigation route, may still be relatively far under the limitation of the proportional relationship. For example, if the first distance of the first circumnavigation route is 20 meters, the second distance of the second circumnavigation route may be 10 meters, which is still far from the subject. In this case, the images collected by the UAV on the second circumnavigation route may lack sufficient detail, so that the accuracy of the final model cannot meet the requirements.
  • a plurality of second circumnavigation routes corresponding to different shooting distances may be planned.
  • The shooting distances of the first circumnavigation route and of each second circumnavigation route may satisfy a proportional relationship. For example, if two second circumnavigation routes are planned and the shooting distance of the first circumnavigation route is recorded as D, the shooting distances of the two second circumnavigation routes can be D/2 and D/4, respectively.
  • FIG. 6 is a schematic diagram of a route provided by an embodiment of the present application when two second circumnavigation routes are planned.
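  • The proportional relationship between shooting distances is simple to express; the helper below is only a restatement of the D, D/2, D/4, ... example in code, with illustrative values.

```python
def shooting_distances(first_distance, n_second_routes):
    """Shooting distance of the first circumnavigation route followed by the
    distances of the second circumnavigation routes, halving each time."""
    return [first_distance / (2 ** k) for k in range(n_second_routes + 1)]

# Illustrative example: D = 20 m with two second routes -> [20.0, 10.0, 5.0]
print(shooting_distances(20.0, 2))
```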
  • the shooting distance (ie, the first distance) of the first circumnavigation route may be determined according to the size information of the subject.
  • the size information of the subject can be matched with the three-dimensional figure into which the subject is abstracted. For example, if the subject is abstracted as a cuboid, the size information of the subject can include the length, width and height of the subject.
  • the photographed object is abstracted into a cylinder, and the size information of the photographed object may include the height and the diameter of the bottom surface of the photographed object. Referring to FIG. 2 , the photographed object in FIG. 2 is abstracted as a cylinder.
  • The size information of the subject may, in one example, be input by the user, or, in another example, be measured by the drone through visual measurement or the like.
  • the size information of the subject may be converted into a shooting distance corresponding to the first circumnavigation route by using a preset calculation formula.
  • The preset calculation formula can be, for example, N times the diameter of the bottom surface, so that the shooting distance of the first circumnavigation route is N times the bottom-surface diameter of the subject.
  • The shooting distance of a route, the distance between the drone and the subject, and the distance between a waypoint and the subject described in the embodiments of this application may all be horizontal distances; for example, the distance between a waypoint and the subject can be the distance from the projected position of the waypoint on the horizontal plane to the projected position of the subject on the same horizontal plane.
  • a third flight route for close-up photography of the region of interest may be planned.
  • The area of interest on the surface of the subject selected by the user can be acquired, and a third route can be planned according to the area of interest, so that the drone can be controlled to move along the third route and capture multiple close-up images of the subject during the movement.
  • the image taken by the drone along the first circumnavigation route can be called the first image
  • the image taken along the second circumnavigation route is called the second image
  • the image taken along the third route is called the third image
  • FIG. 7 is a side view of three planned routes provided by an embodiment of the present application.
  • The planned routes include three first circumnavigation routes A, B, and C for photographing the subject at a long distance, two second circumnavigation routes A and B for photographing the subject at a medium distance, and a third route for close-up photography of the subject.
  • The shooting distances of the first circumnavigation routes, the plurality of second circumnavigation routes, and the third route may satisfy a proportional relationship. For example, if two second circumnavigation routes are planned and the shooting distance of the first circumnavigation route is recorded as D, the shooting distances of the two second circumnavigation routes can be D/2 and D/4 respectively, and the shooting distance of the third route can be D/8.
  • When planning the third route, multiple waypoints may be planned at a preset distance from the surface of the region of interest of the subject, and the third route may be planned according to the plurality of waypoints.
  • the planned multiple waypoints may be relatively uniformly distributed on the region of interest of the subject, and are separated from the surface of the subject by the preset distance.
  • the third route may include a plurality of route segments, the lateral overlap between route segments may be greater than 60%, and the heading overlap within the route segments may be greater than 80%.
  • When selecting the region of interest, the user may select it from the images of the subject that have already been captured; here, the images that have already been captured may include the first images and the second images, and may also include a currently captured preview image.
  • the region of interest may be selected by the user on an initial model of the subject.
  • The plurality of images used for the 3D reconstruction may include the images captured by the drone on each first circumnavigation route and on each second circumnavigation route, and in other examples may also include the images captured by the drone on the third route.
  • the first image may be used to refer to any one of the multiple images.
  • Feature point matching may be performed separately between the first image and each of the other images. Although the images matching the first image can be determined in this way, the amount of computation is large and the reconstruction efficiency is low.
  • feature point matching between multiple images may be performed using the initial model of the subject.
  • candidate images may be determined from multiple images by using the initial model, and feature point matching of the first image and the candidate images may be performed.
  • In one embodiment, the camera pose information corresponding to the first image may be obtained, multiple pending images whose camera pose information matches that of the first image may be filtered out from the multiple images, and the initial model may then be used to determine candidate images from the multiple pending images.
  • Camera pose information may be information carried in an image that indicates the position and orientation of the camera when the image was captured. Understandably, the closer the camera poses at the time of shooting, the higher the similarity between the captured images and the more matching feature points; therefore, the multiple images can be screened according to the camera pose information to obtain the multiple pending images.
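  • A sketch of this pose-based screening is given below, assuming each camera pose is represented as a (camera centre, viewing direction) pair and that fixed distance and angle thresholds are acceptable; the representation, thresholds, and names are illustrative assumptions, not the patent's specific criterion.

```python
import numpy as np

def screen_by_pose(first_pose, image_poses, max_distance, max_angle_deg):
    """Keep the images whose camera pose is close to that of the first image:
    camera centres within max_distance of each other and viewing directions
    within max_angle_deg of each other.

    first_pose: (center, direction); image_poses: {image_id: (center, direction)}."""
    kept = []
    first_center, first_dir = first_pose
    first_dir = np.asarray(first_dir, dtype=float)
    for image_id, (center, direction) in image_poses.items():
        direction = np.asarray(direction, dtype=float)
        dist = np.linalg.norm(np.asarray(center, dtype=float) - np.asarray(first_center, dtype=float))
        cos_angle = np.dot(direction, first_dir) / (
            np.linalg.norm(direction) * np.linalg.norm(first_dir))
        angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
        if dist <= max_distance and angle <= max_angle_deg:
            kept.append(image_id)
    return kept
```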
  • An initial model of the subject can be used when determining candidate images from multiple pending images.
  • The feature points of the first image may be projected onto the initial model, and the following operations may be performed for each pending image: back-project the feature points from the initial model onto the plane where the pending image is located according to the camera pose information of the pending image, and count the number of feature points that fall within the pending image.
  • candidate images can be determined according to the number of feature points corresponding to each pending image.
  • N undetermined images containing the largest number of feature points may be determined as candidate images, and N may be a natural number greater than 0.
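  • A minimal sketch of this candidate-image selection, assuming the feature points of the first image have already been projected onto the initial model to obtain 3D points, and that each pending image (already screened by camera pose as in the previous sketch) comes with a 3x4 projection matrix derived from its camera pose information; the names and array shapes are illustrative.

```python
import numpy as np

def backproject_count(model_points, P, image_size):
    """Back-project the 3D model points into one pending image and count how
    many land in front of the camera and inside the image bounds."""
    w, h = image_size
    pts_h = np.hstack([model_points, np.ones((len(model_points), 1))])  # Nx4
    proj = (P @ pts_h.T).T                                              # Nx3
    in_front = proj[:, 2] > 0
    uv = proj[:, :2] / proj[:, 2:3]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return int(np.count_nonzero(in_front & inside))

def select_candidate_images(model_points, pending_images, image_size, top_n):
    """pending_images: list of (image_id, P) pairs that passed the camera-pose
    screening. Keep the top_n images containing the most back-projected points."""
    scored = [(backproject_count(model_points, P, image_size), image_id)
              for image_id, P in pending_images]
    scored.sort(key=lambda item: item[0], reverse=True)
    return [image_id for _, image_id in scored[:top_n]]
```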
  • FIG. 8 is a schematic diagram of feature point matching using an initial model provided by an embodiment of the present application. After the feature point x in the first image is projected to the initial model, a three-dimensional point p can be obtained, and the landing point after the three-dimensional point p is back-projected to the plane where the candidate image is located can be xp.
  • the feature point x can be matched with the feature points within the range of the landing point xp in the candidate image, which improves the matching efficiency and matching success rate, and makes the 3D reconstruction process more robust.
  • the range where the landing point is located may be a range of a specific distance from the landing point position.
  • In summary, a first circumnavigation route and a second circumnavigation route are planned. Since the images collected by the camera while the drone is on the first circumnavigation route and the images collected while the drone is on the second circumnavigation route include image points of the same name, the images captured on the second circumnavigation route can be matched and connected with the images captured on the first circumnavigation route, so the required degree of overlap between the images captured on the second circumnavigation route can be reduced, thereby reducing the number of images to be captured.
  • In addition, the shooting distance corresponding to the second circumnavigation route is smaller than the shooting distance corresponding to the first circumnavigation route, so the images collected on the second circumnavigation route retain more details of the surface of the subject, which gives the established three-dimensional model sufficient accuracy. The shooting method provided by the embodiment of the present application can therefore reduce the number of images to be shot while ensuring that the accuracy of the three-dimensional model meets the requirements, and improve the operation efficiency of the UAV.
  • FIG. 9 is a flowchart of a model acquisition method provided by an embodiment of the present application. The method may include the following steps:
  • S902 controlling the drone to move along a first circumnavigation route, and photographing the subject during the movement to acquire multiple primary images of the subject.
  • the first circumnavigation route is used for photographing around the subject.
  • the initial model includes position information of the object surface points.
  • the second route includes a plurality of waypoints, and the distances between the plurality of the waypoints and the surface of the subject are approximately equal.
  • the initial model includes the position information of the surface points of the subject, so the second route can be planned according to the position information of the surface points of the subject and the preset distance.
  • the distances from the waypoints to the surface of the subject may be approximately equal, namely equal to the preset distance.
  • The preset distance may be expressed as the distance from a waypoint on the second route to the subject, or, equivalently, as the shooting distance of the second route.
  • the drone can be controlled to move along the second route and photograph the subject, and multiple supplementary images corresponding to the subject can be obtained.
  • In one embodiment, the preset distance may be smaller than the distance between a waypoint on the first circumnavigation route and the subject, so that when the drone flies along the second route, the subject is photographed at a closer distance, and the resulting supplementary images can improve the accuracy of the initial model.
  • In one embodiment, three-dimensional reconstruction can be performed using the multiple supplementary images together with the multiple first images collected by the drone on the first circumnavigation route, so as to obtain an optimized model, based on the initial model, with higher accuracy.
  • In the model acquisition method provided by this embodiment of the application, an initial model of the subject can be established using the plurality of primary images taken while the drone moves along the first circumnavigation route, and the second route can be planned using the position information of the surface points of the subject included in the initial model together with the preset distance. This improves the accuracy of the second route and keeps the distances between the waypoints on the second route and the surface of the subject approximately equal. The multiple supplementary images captured while the drone moves along the second route can then be used to optimize the initial model of the subject, so that the quality of the model is improved.
  • a plurality of second flight routes may be planned based on the position information of the subject and a plurality of preset distances.
  • The second route may correspond to the aforementioned second circumnavigation route; it may include multiple vertical route segments distributed in different directions of the subject, and each vertical route segment is used to guide the UAV to move up or down in the height direction.
  • the preset distances (shooting distances) corresponding to each of the plurality of planned second routes may satisfy a proportional relationship.
  • three second routes can be planned, the shooting distance of the first circumnavigation route can be recorded as D, and the shooting distances of the three second routes can be D/2, D/4, and D/8 respectively.
  • the planned second flight route may be used for close-up photography of the region of interest, and in this case, the planned second flight route may correspond to the third flight route in the foregoing.
  • the region of interest selected by the user can be acquired, so that the second route can be planned according to the position information of the surface point of the subject, the region of interest and the preset distance.
  • the optimizing the initial model of the subject based on the supplementary image includes:
  • Feature point matching is performed on a plurality of the supplementary images, and the initial model of the photographed object is optimized according to the result of the feature point matching.
  • performing feature point matching on a plurality of the supplementary images includes:
  • Feature point matching between the plurality of supplementary images is performed using the initial model.
  • using the initial model to perform feature point matching between the multiple supplementary images includes:
  • using the initial model to determine a candidate supplementary image from the plurality of supplementary images for feature point matching with a first supplementary image, the first supplementary image being any supplementary image in the plurality of supplementary images;
  • Feature point matching is performed on the first supplementary image and the candidate supplementary image.
  • the use of the initial model to determine candidate supplementary images for feature point matching with the first supplementary image from the plurality of supplementary images includes:
  • the candidate supplementary image is determined from the plurality of pending supplementary images using the initial model.
  • the determining the candidate supplementary image from the plurality of pending supplementary images by using the initial model includes:
  • for each of the pending supplementary images, back-projecting the feature points from the initial model onto the plane where the pending supplementary image is located according to the camera pose information corresponding to the pending supplementary image, and determining the number of feature points that fall within the pending supplementary image;
  • the candidate supplementary images are determined according to the number of the feature points corresponding to each of the pending supplementary images.
  • In this way, an initial model of the subject can be established using the plurality of primary images taken while the drone moves along the first circumnavigation route, and the second route can be planned using the position information of the surface points of the subject included in the initial model together with the preset distance, which improves the accuracy of the second route and keeps the distances between the waypoints on the second route and the surface of the subject approximately equal. The multiple supplementary images captured while the drone moves along the second route can then be used to optimize the initial model of the subject, so that the quality of the model is improved.
  • FIG. 10 is a flowchart of a shooting method provided by an embodiment of the present application, and the method may include the following steps:
  • the first image and the second image are used to establish a three-dimensional model of the subject.
  • a second circumnavigation route can be planned according to the position information of the subject and the plurality of first images.
  • the position information of the subject may be used to determine the circling center of the second circumnavigation route.
  • the plurality of first images may be used to determine the distance between the waypoint and the subject on the second circumnavigation route.
  • The shooting distance corresponding to the planned second circumnavigation route may be smaller than the shooting distance corresponding to the first circumnavigation route; that is, the distance between a waypoint on the second circumnavigation route and the subject may be smaller than the distance between a waypoint on the first circumnavigation route and the subject.
  • In one embodiment, the drone may be controlled to maintain a test distance from the subject, where the test distance may be any distance smaller than the distance between a waypoint on the first circumnavigation route and the subject. The drone can be controlled to photograph the subject at the test distance to obtain a test image, and the test image can be matched for similarity against the multiple first images taken while the drone moved along the first circumnavigation route. If the similarity obtained by the matching does not meet the condition, the test distance can be adjusted; if the similarity obtained by the matching meets the condition, the (adjusted) test distance can be determined as the distance between the waypoints on the second circumnavigation route and the subject.
  • the similarity obtained by matching may be the highest similarity obtained after the test image and the plurality of first images are respectively matched for similarity.
  • Because the shooting distance of the second circumnavigation route should be smaller than the shooting distance of the first circumnavigation route, the test distance, which is a trial value for the shooting distance of the second circumnavigation route, should also be smaller than the shooting distance of the first circumnavigation route, that is, smaller than the distance between a waypoint on the first circumnavigation route and the subject.
  • The similarity between the images captured by the drone on the second circumnavigation route and the images captured on the first circumnavigation route should not be too high, because a higher similarity means that the shooting distance of the second circumnavigation route is too close to that of the first circumnavigation route; in that case it becomes necessary to plan more routes corresponding to different shooting distances, which greatly increases the workload of drone shooting and greatly reduces the shooting efficiency.
  • If the similarity obtained by the matching is lower than the lower limit of the similarity, the test distance can be increased, so that the shooting distance of the second circumnavigation route moves a little closer to the shooting distance of the first circumnavigation route, ensuring that the images captured by the drone on the second circumnavigation route can be connected with the images captured on the first circumnavigation route.
  • If the similarity obtained by the matching is greater than the upper limit of the similarity, it means that the shooting distance of the second circumnavigation route is too close to the shooting distance of the first circumnavigation route, and the test distance can be reduced so that the images taken by the drone on the second circumnavigation route contribute more to the improvement of the model accuracy.
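  • A sketch of the adjustment rule described above: a similarity below the lower limit pushes the test distance up (towards the first route's shooting distance), a similarity above the upper limit pushes it down. The multiplicative 10% step and the function name are arbitrary illustrative choices, not taken from the patent.

```python
def adjust_test_distance(test_distance, similarity, lower_limit, upper_limit, step=0.10):
    """Return (new_test_distance, accepted). Accepted means the similarity falls
    between the limits and the current test distance can be used as the distance
    between the second circumnavigation route's waypoints and the subject."""
    if similarity < lower_limit:
        # Images cannot be connected reliably: move a little closer to the
        # first circumnavigation route's shooting distance.
        return test_distance * (1.0 + step), False
    if similarity > upper_limit:
        # Too similar to the first route's images: move closer to the subject
        # so the new images contribute more detail to the model.
        return test_distance * (1.0 - step), False
    return test_distance, True
```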
  • In one embodiment, the camera pose information corresponding to the test image can be obtained, the multiple first images can be screened according to this camera pose information, and the first images whose camera pose information matches that of the test image can be selected.
  • the camera pose information may be information carried by the test image.
  • the camera pose information may be measured by an inertial measurement unit on a drone or a camera.
  • The test image can then be matched for similarity against the screened first images, which improves the matching efficiency.
  • In one embodiment, the multiple first images may also be screened by an image retrieval algorithm, so that a small number of first images, or even a single first image, can be selected for similarity matching with the test image.
  • If the matching result between the captured test image and the first images does not satisfy the condition, for example the similarity is greater than the upper limit or smaller than the lower limit, the corresponding matching result can be fed back to the user to guide the user in adjusting the test distance.
  • When the matching result does not satisfy the condition, information indicating that the current test distance is not suitable can be displayed on the display interface of the terminal, such as the "BAD" in FIG. 11; when the matching result satisfies the condition, information indicating that the current test distance is suitable can be displayed, such as the "GOOD" in FIG. 12.
  • The similarity matching between the test image and a first image can be performed in various ways. In one embodiment, feature extraction may be performed on the test image and on the first image, and the extracted features may be high-dimensional feature vectors; the similarity between the test image and the first image can then be calculated from the feature vector corresponding to the test image and the feature vector corresponding to the first image. For example, the similarity can be the angle between the two feature vectors, or the distance between the two feature vectors.
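  • As a sketch of the feature-vector comparison described above (assuming some global descriptor has already been extracted per image; the descriptor itself is not specified in the text), the angle-based similarity can be computed as a cosine similarity, and the value used for the decision is the highest similarity over the screened first images.

```python
import numpy as np

def cosine_similarity(f_test, f_first):
    """Similarity derived from the angle between two feature vectors:
    1.0 for identical directions, lower values for larger angles."""
    return float(np.dot(f_test, f_first) /
                 (np.linalg.norm(f_test) * np.linalg.norm(f_first)))

def best_similarity(f_test, screened_first_features):
    """The text uses the highest similarity between the test image and the
    (screened) first images as the matching result."""
    return max(cosine_similarity(f_test, f) for f in screened_first_features)
```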
  • The shooting method provided by this embodiment of the present application can plan the second circumnavigation route according to the multiple first images taken while the drone moves along the first circumnavigation route, thereby ensuring that the images captured by the drone on the second circumnavigation route can be matched with the images captured on the first circumnavigation route and avoiding the problem of images that cannot be connected during 3D reconstruction.
  • FIG. 13 is a schematic structural diagram of a photographing apparatus provided by an embodiment of the present application.
  • the photographing apparatus provided by this embodiment of the present application includes: a processor 1310 and a memory 1320 storing a computer program.
  • the processor implements the following steps when executing the computer program:
  • the image collected by the camera and the image collected by the drone on the second circumnavigation route include image points with the same name, and a plurality of the images are used to establish a three-dimensional model of the subject.
  • the first circumnavigation route includes a plurality of waypoints, the plurality of waypoints are distributed in different directions of the photographed object, are approximately the same distance from the photographed object, and are approximately at the same height, and the first circumnavigation route is used to guide the drone to move around the subject on a horizontal plane.
  • the distance from each waypoint on the first circumnavigation route to the surface point of the subject is the first distance
  • the shape of the first circumnavigation route is the same as the distance of the subject on the horizontal plane.
  • the second circumnavigation route includes a plurality of vertical route segments, the plurality of vertical route segments are distributed in different directions of the subject, and each vertical route segment is used to guide the The drone moves up or down in the height direction.
  • the distance from each waypoint on the vertical route segment to the surface point of the subject is the second distance
  • the shape of the vertical route segment is in the vertical plane with the subject. to match the outline shape on.
  • the position information of the surface point of the subject is determined according to an initial model of the subject, and the initial model is established in advance based on a plurality of primary images of the subject.
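  • As an illustration of how an initial model's surface points could keep every waypoint at roughly the same offset from the subject, the sketch below places one waypoint per sampled surface point along its outward normal. The point-cloud/normal representation and the single-offset strategy are assumptions for illustration, not the procedure defined by this disclosure.

```python
import numpy as np

def plan_surface_following_waypoints(surface_points, surface_normals, offset_distance):
    """Offset every sampled surface point of the initial (coarse) model by a fixed
    distance along its outward unit normal, so each resulting waypoint keeps
    approximately the same distance to the subject's surface.

    surface_points:  (N, 3) array of surface points from the initial model.
    surface_normals: (N, 3) array of outward normals at those points.
    """
    pts = np.asarray(surface_points, dtype=float)
    nrm = np.asarray(surface_normals, dtype=float)
    nrm = nrm / (np.linalg.norm(nrm, axis=1, keepdims=True) + 1e-12)
    return pts + float(offset_distance) * nrm
```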
  • the processor is also used for:
  • A third route is planned according to a selected region of interest on the surface of the subject; the drone is controlled to move along the third route, and the subject is photographed during the movement.
  • the distance between the waypoint on the third route and the subject is smaller than the second distance.
  • When the processor uses the plurality of images to establish a three-dimensional model of the subject, it is configured to:
  • Feature point matching is performed on the plurality of images, and three-dimensional reconstruction is performed according to the result of the feature point matching, so as to obtain a three-dimensional model of the photographed object.
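  • A minimal two-view sketch of feature point matching followed by reconstruction is given below; it uses OpenCV's SIFT features, essential-matrix estimation and triangulation as stand-ins for whatever feature matching and multi-view reconstruction pipeline is actually employed, and the intrinsic matrix K is assumed known.

```python
import cv2
import numpy as np

def two_view_reconstruction(img1, img2, K):
    """Match feature points between two grayscale images of the subject and
    triangulate a sparse 3D point set: a two-view fragment of the full
    multi-view three-dimensional reconstruction."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Descriptor matching with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Relative camera pose from the essential matrix, then triangulation.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T   # (N, 3) sparse points of the subject
```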
  • When the processor performs feature point matching on the multiple images, it is used for:
  • the feature point matching between the multiple images is performed by using an initial model of the subject, and the initial model is established in advance based on the multiple primary images of the subject.
  • When the processor uses the initial model of the subject to perform feature point matching between the multiple images, it is used for:
  • The initial model of the subject is used to determine, from the plurality of images, a candidate image for feature point matching with a first image, the first image being any one of the plurality of images; feature point matching is then performed on the first image and the candidate image.
  • When the processor uses the initial model of the subject to determine, from the plurality of images, a candidate image for performing feature point matching with the first image, it is used for:
  • A plurality of pending images whose camera pose information matches the camera pose information corresponding to the first image are screened from the plurality of images, and the candidate image is determined from the plurality of pending images using the initial model.
  • When the processor uses the initial model to determine the candidate image from the plurality of pending images, it is used to:
  • The feature points of the first image are projected onto the initial model; for each pending image, the feature points are back-projected from the initial model onto the plane of that pending image according to its camera pose information, and the number of feature points falling within the pending image is determined; the candidate images are then determined according to the number of feature points corresponding to each pending image.
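  • A compact sketch of the back-projection count used to rank pending images is shown below, assuming a pinhole camera with intrinsics K and a world-to-camera pose (R, t) per image; the specific representation of the initial model's feature points is an assumption for illustration.

```python
import numpy as np

def count_backprojected_points(model_points, K, R, t, image_width, image_height):
    """Back-project 3D feature points of the initial model into one pending image
    (pinhole model, world-to-camera pose R, t) and count how many fall inside it."""
    X = np.asarray(model_points, dtype=float)        # (N, 3) feature points on the initial model
    Xc = R @ X.T + t.reshape(3, 1)                   # camera-frame coordinates, shape (3, N)
    Xc = Xc[:, Xc[2] > 0]                            # keep only points in front of the camera
    uv = K @ Xc
    uv = uv[:2] / uv[2]
    inside = (uv[0] >= 0) & (uv[0] < image_width) & (uv[1] >= 0) & (uv[1] < image_height)
    return int(np.count_nonzero(inside))

def rank_pending_images(model_points, pending_poses, K, width, height, top_n=5):
    """Rank pending images by how many back-projected feature points they contain
    and return the indices of the best candidates."""
    counts = [count_backprojected_points(model_points, K, R, t, width, height)
              for R, t in pending_poses]
    return list(np.argsort(counts)[::-1][:top_n])
```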
  • The photographing apparatus provided by this embodiment of the present application plans a first circumnavigation route and a second circumnavigation route. Since the images collected by the camera on the first circumnavigation route and the images collected by the camera on the second circumnavigation route include image points of the same name, the images collected by the UAV on the second circumnavigation route can be matched and connected with the images captured by the UAV on the first circumnavigation route; the degree of overlap required between the images captured on the second circumnavigation route can therefore be reduced, which reduces the number of images to be captured.
  • In addition, the shooting distance corresponding to the second circumnavigation route is smaller than the shooting distance corresponding to the first circumnavigation route, so the images collected by the drone on the second circumnavigation route can retain more details of the subject's surface, giving the established three-dimensional model sufficient accuracy. It can be seen that the photographing apparatus provided by this embodiment of the present application can reduce the number of images to be photographed while ensuring that the accuracy of the three-dimensional model meets the requirements, and thus improves the operation efficiency of the UAV.
  • FIG. 14 is a schematic structural diagram of an apparatus for obtaining a model provided by an embodiment of the present application.
  • the photographing device provided by the embodiment of the present application includes: a processor 1410 and a memory 1420 storing a computer program, and the processor implements the following steps when executing the computer program:
  • The drone is controlled to move along the first circumnavigation route and photograph the subject during the movement to obtain a plurality of primary images; an initial model of the subject is established based on the primary images, the initial model including position information of the surface points of the subject;
  • Plan a second route based on the location information and a preset distance, the second route including a plurality of waypoints whose distances to the surface of the subject are approximately equal;
  • The drone is controlled to move along the second route and photograph the subject during the movement to obtain a plurality of supplementary images, and the initial model of the subject is optimized based on the supplementary images.
  • the preset distance is smaller than the distance between the waypoint on the first circumnavigation route and the subject.
  • When the processor plans the second route based on the location information and the preset distance, it is used for:
  • a plurality of second routes are planned based on the location information and the plurality of preset distances.
  • The plurality of preset distances satisfy an equal-ratio (geometric) relationship, for example D/2, D/4 and D/8 when the shooting distance of the first circumnavigation route is D.
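  • A trivial helper that generates such equal-ratio preset distances is sketched below; the halving ratio is only one possible choice.

```python
def preset_distances(first_route_distance, num_routes=3, ratio=0.5):
    """Preset distances for the second routes in an equal-ratio relationship,
    e.g. D/2, D/4, D/8 for a first-route shooting distance D."""
    return [first_route_distance * ratio ** (i + 1) for i in range(num_routes)]

# Example: preset_distances(40.0) -> [20.0, 10.0, 5.0]
```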
  • When the processor plans the second route based on the location information and the preset distance, it is used for:
  • a second route is planned based on the location information, the region of interest selected from the initial model, and a preset distance, and waypoints on the second route are distributed within the region of interest.
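  • One simple way to keep the planned waypoints inside a selected region of interest is sketched below; representing the region as an axis-aligned bounding box on the initial model is an assumption made purely for illustration.

```python
import numpy as np

def waypoints_within_roi(candidate_waypoints, roi_min, roi_max):
    """Keep only the candidate waypoints that fall inside a region of interest
    given as an axis-aligned bounding box (roi_min, roi_max) on the initial model."""
    wp = np.asarray(candidate_waypoints, dtype=float)   # (N, 3) candidate waypoints
    lo = np.asarray(roi_min, dtype=float)
    hi = np.asarray(roi_max, dtype=float)
    mask = np.all((wp >= lo) & (wp <= hi), axis=1)
    return wp[mask]
```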
  • The first circumnavigation route includes a plurality of waypoints; the plurality of waypoints are distributed in different directions of the subject, are approximately the same distance from the subject, and are approximately at the same height. The first circumnavigation route is used to guide the drone to move around the subject on a horizontal plane.
  • The second route includes a plurality of vertical route segments; the vertical route segments are distributed in different directions of the subject, and each vertical route segment is used to guide the drone to move up or down in the height direction.
  • When optimizing the initial model of the subject based on the supplementary images, the processor is configured to:
  • Feature point matching is performed on a plurality of the supplementary images, and the initial model of the photographed object is optimized according to the result of the feature point matching.
  • When the processor performs feature point matching on the plurality of supplementary images, it is used for:
  • Feature point matching between the plurality of supplementary images is performed using the initial model.
  • When the processor uses the initial model to perform feature point matching among the plurality of supplementary images, it is used for:
  • The initial model is used to determine, from the plurality of supplementary images, a candidate supplementary image for feature point matching with a first supplementary image, the first supplementary image being any one of the plurality of supplementary images;
  • Feature point matching is performed on the first supplementary image and the candidate supplementary image.
  • When the processor uses the initial model to determine, from the plurality of supplementary images, a candidate supplementary image for performing feature point matching with the first supplementary image, it is used for:
  • A plurality of pending supplementary images whose camera pose information matches the camera pose information corresponding to the first supplementary image are screened from the plurality of supplementary images, and the candidate supplementary image is determined from the plurality of pending supplementary images using the initial model.
  • When the processor determines the candidate supplementary image from the plurality of pending supplementary images by using the initial model, it is used for:
  • The feature points of the first supplementary image are projected onto the initial model; for each pending supplementary image, the feature points are back-projected from the initial model onto the plane of that pending supplementary image according to its camera pose information, and the number of feature points falling within the image is determined;
  • the candidate supplementary images are determined according to the number of the feature points corresponding to each of the pending supplementary images.
  • The model obtaining apparatus provided by this embodiment of the present application can use the plurality of primary images taken while the drone moves along the first circumnavigation route to establish the initial model of the subject, and can plan the second route based on the position information of the subject's surface points included in the initial model and the preset distance. This improves the accuracy of the second route and makes the distances between the waypoints on the second route and the surface of the subject approximately equal.
  • In addition, the plurality of supplementary images captured while the drone moves along the second route can be used to optimize the initial model of the subject, thereby improving the quality of the model.
  • the embodiment of the present application also provides a photographing device, the structure of which may refer to FIG. 13 , and the processor of the device implements the following steps when executing the computer program stored in the memory:
  • The drone is controlled to move along the second circumnavigation route, and the subject is photographed during the movement so as to obtain a plurality of second images of the subject; the first images and the second images are used to establish a three-dimensional model of the subject.
  • the distance between the waypoint on the second circumnavigation route and the subject is determined according to a plurality of the first images.
  • When the processor determines the distance between the waypoints on the second circumnavigation route and the subject according to the plurality of first images, it is used for:
  • The drone is controlled to keep a test distance from the subject and photograph the subject to obtain a test image, the test distance being smaller than the distance between the waypoints on the first circumnavigation route and the subject; similarity matching is performed between the test image and the plurality of first images, and the test distance is adjusted according to the matched similarity; the adjusted test distance is then determined as the distance between the waypoints on the second circumnavigation route and the subject.
  • If the matched similarity is less than the similarity lower limit, the test distance is increased.
  • If the matched similarity is greater than the similarity upper limit, the test distance is reduced.
  • First images whose camera pose information matches the camera pose information corresponding to the test image are screened from the plurality of first images, and similarity matching is performed between the test image and the screened first images.
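  • The adjustment loop described above can be sketched as follows; `capture_test_image`, `best_similarity`, the bounds and the step factor are placeholders for whatever capture routine, matching routine and limits are actually used.

```python
def adjust_test_distance(test_distance, capture_test_image, best_similarity,
                         lower=0.3, upper=0.8, step=0.9, max_iters=10):
    """Adjust the test distance until the best similarity between the test image
    and the screened first images falls within [lower, upper]."""
    for _ in range(max_iters):
        test_image = capture_test_image(test_distance)   # placeholder capture routine
        similarity = best_similarity(test_image)         # placeholder matching routine
        if similarity < lower:
            test_distance /= step   # too dissimilar: increase the test distance
        elif similarity > upper:
            test_distance *= step   # too similar: reduce the test distance
        else:
            break                   # suitable test distance found ("GOOD")
    return test_distance
```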
  • The photographing device provided by this embodiment of the present application can plan the second circumnavigation route according to the plurality of first images captured while the drone moves along the first circumnavigation route, thereby ensuring that the images captured by the drone on the second circumnavigation route can be matched with the images captured by the drone on the first circumnavigation route and avoiding the problem of images that cannot be connected during 3D reconstruction.
  • FIG. 15 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • The terminal device may include:
  • a communication module 1510 for establishing a connection with the drone; and a processor 1520 and a memory 1530 storing a computer program.
  • the processor implements the following steps when executing the computer program:
  • The images collected by the camera while the drone is on the first circumnavigation route and the images collected by the camera while the drone is on the second circumnavigation route include image points of the same name, and a plurality of the images are used to establish a three-dimensional model of the subject.
  • The first circumnavigation route includes a plurality of waypoints; the plurality of waypoints are distributed in different directions of the subject, are approximately the same distance from the subject, and are approximately at the same height. The first circumnavigation route is used to guide the drone to move around the subject on a horizontal plane.
  • The distance from each waypoint on the first circumnavigation route to the corresponding surface point of the subject is the first distance, and the shape of the first circumnavigation route matches the contour shape of the subject on the horizontal plane.
  • The second circumnavigation route includes a plurality of vertical route segments; the plurality of vertical route segments are distributed in different directions of the subject, and each vertical route segment is used to guide the drone to move up or down in the height direction.
  • The distance from each waypoint on a vertical route segment to the corresponding surface point of the subject is the second distance, and the shape of the vertical route segment matches the contour shape of the subject on the vertical plane.
  • the position information of the surface point of the subject is determined according to an initial model of the subject, and the initial model is established in advance based on a plurality of primary images of the subject.
  • the processor is also used for:
  • A third route is planned according to a selected region of interest on the surface of the subject; the drone is controlled to move along the third route, and the subject is photographed during the movement.
  • the distance between the waypoint on the third route and the subject is smaller than the second distance.
  • When the processor uses the plurality of images to establish a three-dimensional model of the subject, it is configured to:
  • Feature point matching is performed on the plurality of images, and three-dimensional reconstruction is performed according to the result of the feature point matching, so as to obtain a three-dimensional model of the photographed object.
  • When the processor performs feature point matching on the multiple images, it is used for:
  • the feature point matching between the multiple images is performed by using an initial model of the subject, and the initial model is established in advance based on the multiple primary images of the subject.
  • When the processor uses the initial model of the subject to perform feature point matching between the multiple images, it is used for:
  • The initial model of the subject is used to determine, from the plurality of images, a candidate image for feature point matching with a first image, the first image being any one of the plurality of images; feature point matching is then performed on the first image and the candidate image.
  • When the processor utilizes the initial model of the subject to determine, from the plurality of images, a candidate image for performing feature point matching with the first image, it is used for:
  • A plurality of pending images whose camera pose information matches the camera pose information corresponding to the first image are screened from the plurality of images, and the candidate image is determined from the plurality of pending images using the initial model.
  • When the processor uses the initial model to determine the candidate image from the plurality of pending images, it is used to:
  • The feature points of the first image are projected onto the initial model; for each pending image, the feature points are back-projected from the initial model onto the plane of that pending image according to its camera pose information, and the number of feature points falling within the pending image is determined; the candidate images are then determined according to the number of feature points corresponding to each pending image.
  • The terminal device provided by this embodiment of the present application plans a first circumnavigation route and a second circumnavigation route. Since the images collected by the camera on the first circumnavigation route and the images collected by the camera on the second circumnavigation route include image points of the same name, the images collected by the UAV on the second circumnavigation route can be matched and connected with the images captured by the UAV on the first circumnavigation route; the degree of overlap required between the images captured on the second circumnavigation route can therefore be reduced, which reduces the number of images to be captured.
  • In addition, the shooting distance corresponding to the second circumnavigation route is smaller than the shooting distance corresponding to the first circumnavigation route, so the images collected by the drone on the second circumnavigation route can retain more details of the subject's surface, giving the established three-dimensional model sufficient accuracy. It can be seen that the terminal device provided by this embodiment of the present application can reduce the number of images to be shot while ensuring that the accuracy of the three-dimensional model meets the requirements, and thus improves the operation efficiency of the UAV.
  • An embodiment of the present application also provides a terminal device, the structure of which may refer to FIG. 15 , and the processor in the terminal device implements the following steps when executing a computer program:
  • The drone is controlled to move along the first circumnavigation route and photograph the subject during the movement to obtain a plurality of primary images; an initial model of the subject is established based on the primary images, the initial model including position information of the surface points of the subject;
  • Plan a second route based on the location information and a preset distance, the second route including a plurality of waypoints whose distances to the surface of the subject are approximately equal;
  • The drone is controlled to move along the second route and photograph the subject during the movement to obtain a plurality of supplementary images, and the initial model of the subject is optimized based on the supplementary images.
  • the preset distance is smaller than the distance between the waypoint on the first circumnavigation route and the subject.
  • When the processor plans the second route based on the location information and the preset distance, it is used for:
  • a plurality of second routes are planned based on the location information and the plurality of preset distances.
  • The plurality of preset distances satisfy an equal-ratio (geometric) relationship, for example D/2, D/4 and D/8 when the shooting distance of the first circumnavigation route is D.
  • When the processor plans the second route based on the location information and the preset distance, it is used for:
  • a second route is planned based on the location information, the region of interest selected from the initial model, and a preset distance, and waypoints on the second route are distributed within the region of interest.
  • The first circumnavigation route includes a plurality of waypoints; the plurality of waypoints are distributed in different directions of the subject, are approximately the same distance from the subject, and are approximately at the same height. The first circumnavigation route is used to guide the drone to move around the subject on a horizontal plane.
  • The second route includes a plurality of vertical route segments; the vertical route segments are distributed in different directions of the subject, and each vertical route segment is used to guide the drone to move up or down in the height direction.
  • When optimizing the initial model of the subject based on the supplementary images, the processor is configured to:
  • Feature point matching is performed on a plurality of the supplementary images, and the initial model of the photographed object is optimized according to the result of the feature point matching.
  • When the processor performs feature point matching on the plurality of supplementary images, it is used for:
  • Feature point matching between the plurality of supplementary images is performed using the initial model.
  • When the processor uses the initial model to perform feature point matching among the plurality of supplementary images, it is used for:
  • The initial model is used to determine, from the plurality of supplementary images, a candidate supplementary image for feature point matching with a first supplementary image, the first supplementary image being any one of the plurality of supplementary images;
  • Feature point matching is performed on the first supplementary image and the candidate supplementary image.
  • When the processor uses the initial model to determine, from the plurality of supplementary images, a candidate supplementary image for performing feature point matching with the first supplementary image, it is used for:
  • A plurality of pending supplementary images whose camera pose information matches the camera pose information corresponding to the first supplementary image are screened from the plurality of supplementary images, and the candidate supplementary image is determined from the plurality of pending supplementary images using the initial model.
  • When the processor determines the candidate supplementary image from the plurality of pending supplementary images by using the initial model, it is used for:
  • The feature points of the first supplementary image are projected onto the initial model; for each pending supplementary image, the feature points are back-projected from the initial model onto the plane of that pending supplementary image according to its camera pose information, and the number of feature points falling within the image is determined;
  • the candidate supplementary images are determined according to the number of the feature points corresponding to each of the pending supplementary images.
  • The terminal device provided by this embodiment of the present application can use the plurality of primary images taken while the drone moves along the first circumnavigation route to establish an initial model of the subject, and can plan the second route based on the position information of the subject's surface points included in the initial model and the preset distance. This improves the accuracy of the second route and makes the distances between the waypoints on the second route and the surface of the subject approximately equal.
  • In addition, the plurality of supplementary images captured while the drone moves along the second route can be used to optimize the initial model of the subject, thereby improving the quality of the model.
  • An embodiment of the present application also provides a terminal device, the structure of which may refer to FIG. 15 , and the processor in the terminal device implements the following steps when executing a computer program:
  • The drone is controlled to move along the second circumnavigation route, and the subject is photographed during the movement so as to obtain a plurality of second images of the subject; the first images and the second images are used to establish a three-dimensional model of the subject.
  • the distance between the waypoint on the second circumnavigation route and the subject is determined according to a plurality of the first images.
  • When the processor determines the distance between the waypoints on the second circumnavigation route and the subject according to the plurality of first images, it is used for:
  • The drone is controlled to keep a test distance from the subject and photograph the subject to obtain a test image, the test distance being smaller than the distance between the waypoints on the first circumnavigation route and the subject; similarity matching is performed between the test image and the plurality of first images, and the test distance is adjusted according to the matched similarity; the adjusted test distance is then determined as the distance between the waypoints on the second circumnavigation route and the subject.
  • If the matched similarity is less than the similarity lower limit, the test distance is increased.
  • If the matched similarity is greater than the similarity upper limit, the test distance is reduced.
  • First images whose camera pose information matches the camera pose information corresponding to the test image are screened from the plurality of first images, and similarity matching is performed between the test image and the screened first images.
  • The terminal device provided by this embodiment of the present application can plan the second circumnavigation route according to the plurality of first images taken while the drone moves along the first circumnavigation route, thereby ensuring that the images captured by the drone on the second circumnavigation route can be matched with the images captured by the drone on the first circumnavigation route and avoiding the problem of images that cannot be connected during 3D reconstruction.
  • Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium; when the computer program is executed by a processor, any one of the shooting methods and any one of the model acquisition methods provided by the embodiments of the present application is implemented.
  • Embodiments of the present application may take the form of a computer program product implemented on one or more storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having program code embodied therein.
  • Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage can be accomplished by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Mechanical Engineering (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the present application discloses a shooting method, comprising: obtaining position information of a subject; planning, based on the position information, a first circumnavigation route and a second circumnavigation route for circumnavigating and photographing the subject; controlling the drone to move along the first circumnavigation route and the second circumnavigation route respectively; and controlling the drone, during the movement, to photograph the subject with a camera carried by the drone so as to obtain a plurality of images of the subject, where the images collected by the camera while the drone is on the first circumnavigation route and the images collected by the camera while the drone is on the second circumnavigation route include image points of the same name of the subject, and the plurality of images are used to establish a three-dimensional model of the subject. The method disclosed in the embodiments of the present application can improve the shooting efficiency of a drone when capturing images used for three-dimensional reconstruction.

Description

拍摄方法、装置、计算机可读存储介质和终端设备 技术领域
本申请涉及图像拍摄技术领域,尤其涉及一种拍摄方法、装置、计算机可读存储介质和终端设备
背景技术
基于无人机影像的三维重建技术被越来越多的应用在文物、电塔、信号塔、桥梁等物体的精细化建模上。在利用无人机进行建模时,可以控制无人机沿规划的航线飞行,并在飞行过程中对被摄对象拍摄图像,利用拍摄所得的图像可以建立被摄对象的立体模型。目前,无人机在拍摄图像时需要耗费大量的时间,作业效率仍有待提高。
发明内容
有鉴于此,本申请实施例提供了一种拍摄方法、装置、计算机可读存储介质,一种模型获取方法,本申请实施例的目的之一是提高无人机拍摄图像的效率。
本申请实施例第一方面提供一种拍摄方法,包括:
获取被摄对象的位置信息;
基于所述位置信息规划用于对所述被摄对象环绕拍摄的第一环绕航线和第二环绕航线;
控制所述无人机分别沿所述第一环绕航线和所述第二环绕航线运动;其中,所述无人机沿所述第一环绕航线运动的过程中与所述被摄对象保持第一距离,所述无人机沿所述第二环绕航线运动的过程中与所述被摄对象保持第二距离,所述第一距离大于所述第二距离;
控制所述无人机在运动过程中基于所述无人机搭载的相机对所述被摄对象拍摄以获取所述被摄对象的多张图像,所述无人机在所述第一环绕航线所述相机采集的图像与所述无人机在所述第二环绕航线所述相机采集的图像包括所述被摄对象的同名像点,多张所述图像用于建立所述被摄对象的立体模型。
本申请实施例第二方面提供一种模型获取方法,包括:
控制所述无人机沿第一环绕航线运动,并在运动过程中对所述被摄对象拍摄以获取所述被摄对象的多张初级图像,所述第一环绕航线用于对所述被摄对象进行环绕拍摄;
基于所述初级图像建立所述被摄对象的初始模型,所述初始模型包括所述被摄对象表面点的位置信息;
基于所述位置信息和预设距离规划第二航线,所述第二航线包括多个航点,多个所述航点与所述被摄对象表面的距离大致相等;
控制所述无人机沿所述第二航线运动,并在运动过程中对所述被摄对象拍摄以获取所述被摄对象的多张补充图像;
基于所述补充图像优化所述被摄对象的初始模型。
本申请实施例第三方面提供一种拍摄方法,包括:
获取被摄对象的位置信息;
基于所述位置信息规划用于对所述被摄对象环绕拍摄的第一环绕航线;
控制所述无人机沿所述第一环绕航线运动,并在运动过程中对所述被摄对象进行拍摄,以获取所述被摄对象的多张第一图像;
基于多张所述第一图像和所述位置信息规划用于对所述被摄对象环绕拍摄的第二环绕航线;
控制所述无人机沿所述第二环绕航线运动,并在运动过程中对所述被摄对象进行拍摄,以获取所述被摄对象的多张第二图像,所述第一图像和所述第二图像用于建立所述被摄对象的立体模型。
本申请实施例第四方面提供一种拍摄装置,包括:处理器和存储有计算机程序的存储器,所述处理器在执行所述计算机程序实现以下步骤:
获取被摄对象的位置信息;
基于所述位置信息规划用于对所述被摄对象环绕拍摄的第一环绕航线和第二环绕航线;
控制所述无人机分别沿所述第一环绕航线和所述第二环绕航线运动;其中,所述无人机沿所述第一环绕航线运动的过程中与所述被摄对象保持第一距离,所述无人机沿所述第二环绕航线运动的过程中与所述被摄对象保持第二距离,所述第一距离大于所述第二距离;
控制所述无人机在运动过程中基于所述无人机搭载的相机对所述被摄对象拍摄以获取所述被摄对象的多张图像,所述无人机在所述第一环绕航线所述相机采集的图像 与所述无人机在所述第二环绕航线所述相机采集的图像包括所述被摄对象的同名像点,多张所述图像用于建立所述被摄对象的立体模型。
本申请实施例第五方面提供一种模型获取装置,包括:处理器和存储有计算机程序的存储器,所述处理器在执行所述计算机程序实现以下步骤:
控制所述无人机沿第一环绕航线运动,并在运动过程中对所述被摄对象拍摄以获取所述被摄对象的多张初级图像,所述第一环绕航线用于对所述被摄对象进行环绕拍摄;
基于所述初级图像建立所述被摄对象的初始模型,所述初始模型包括所述被摄对象表面点的位置信息;
基于所述位置信息和预设距离规划第二航线,所述第二航线包括多个航点,多个所述航点与所述被摄对象表面的距离大致相等;
控制所述无人机沿所述第二航线运动,并在运动过程中对所述被摄对象拍摄以获取所述被摄对象的多张补充图像;
基于所述补充图像优化所述被摄对象的初始模型。
本申请实施例第六方面提供一种拍摄装置,包括:处理器和存储有计算机程序的存储器,所述处理器在执行所述计算机程序实现以下步骤:
获取被摄对象的位置信息;
基于所述位置信息规划用于对所述被摄对象环绕拍摄的第一环绕航线;
控制所述无人机沿所述第一环绕航线运动,并在运动过程中对所述被摄对象进行拍摄,以获取所述被摄对象的多张第一图像;
基于多张所述第一图像和所述位置信息规划用于对所述被摄对象环绕拍摄的第二环绕航线;
控制所述无人机沿所述第二环绕航线运动,并在运动过程中对所述被摄对象进行拍摄,以获取所述被摄对象的多张第二图像,所述第一图像和所述第二图像用于建立所述被摄对象的立体模型。
本申请实施例第七方面提供一种终端设备,包括:
通信模块,用于与无人机建立连接;
处理器和存储有计算机程序的存储器,所述处理器在执行所述计算机程序实现以下步骤:
获取被摄对象的位置信息;
基于所述位置信息规划用于对所述被摄对象环绕拍摄的第一环绕航线和第二环绕 航线;
控制所述无人机分别沿所述第一环绕航线和所述第二环绕航线运动;其中,所述无人机沿所述第一环绕航线运动的过程中与所述被摄对象保持第一距离,所述无人机沿所述第二环绕航线运动的过程中与所述被摄对象保持第二距离,所述第一距离大于所述第二距离;
控制所述无人机在运动过程中基于所述无人机搭载的相机对所述被摄对象拍摄以获取所述被摄对象的多张图像,所述无人机在所述第一环绕航线所述相机采集的图像与所述无人机在所述第二环绕航线所述相机采集的图像包括所述被摄对象的同名像点,多张所述图像用于建立所述被摄对象的立体模型。
本申请实施例第八方面提供一种终端设备,包括:
通信模块,用于与无人机建立连接;
处理器和存储有计算机程序的存储器,所述处理器在执行所述计算机程序实现以下步骤:
控制所述无人机沿第一环绕航线运动,并在运动过程中对所述被摄对象拍摄以获取所述被摄对象的多张初级图像,所述第一环绕航线用于对所述被摄对象进行环绕拍摄;
基于所述初级图像建立所述被摄对象的初始模型,所述初始模型包括所述被摄对象表面点的位置信息;
基于所述位置信息和预设距离规划第二航线,所述第二航线包括多个航点,多个所述航点与所述被摄对象表面的距离大致相等;
控制所述无人机沿所述第二航线运动,并在运动过程中对所述被摄对象拍摄以获取所述被摄对象的多张补充图像;
基于所述补充图像优化所述被摄对象的初始模型。
本申请实施例第九方面提供一种终端设备,包括:
通信模块,用于与无人机建立连接;
处理器和存储有计算机程序的存储器,所述处理器在执行所述计算机程序实现以下步骤:
获取被摄对象的位置信息;
基于所述位置信息规划用于对所述被摄对象环绕拍摄的第一环绕航线;
控制所述无人机沿所述第一环绕航线运动,并在运动过程中对所述被摄对象进行拍摄,以获取所述被摄对象的多张第一图像;
基于多张所述第一图像和所述位置信息规划用于对所述被摄对象环绕拍摄的第二环绕航线;
控制所述无人机沿所述第二环绕航线运动,并在运动过程中对所述被摄对象进行拍摄,以获取所述被摄对象的多张第二图像,所述第一图像和所述第二图像用于建立所述被摄对象的立体模型。
本申请实施例第十方面提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现上述第一方面提供的拍摄方法。
本申请实施例第十一方面提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现上述第二方面提供的模型获取方法。
本申请实施例第十二方面提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现上述第三方面提供的拍摄方法。
本申请实施例提供的拍摄方法,规划了第一环绕航线和第二环绕航线,由于无人机在所述第一环绕航线相机采集的图像与无人机在所述第二环绕航线相机采集的图像包括所述被摄对象的同名像点,因此无人机在第二环绕航线采集的图像可以与无人机在第一环绕航线采集的图像匹配和连接,进而无人机在第二环绕航线采集的图像之间的重叠度可以降低,从而减少了要拍摄的图像数量。并且,第二环绕航线对应的拍摄距离小于第一环绕航线对应的拍摄距离,因此无人机在第二环绕航线采集的图像可以保留被摄对象表面更多的细节,使建立的立体模型有足够的精度。可见,本申请实施例提供的拍摄方法,可以在确保立体模型的精度满足要求的基础上减少要拍摄的图像数量,提高了无人机的作业效率。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的拍摄方法的流程图。
图2是本申请实施例提供的航线示意图。
图3是本申请实施例提供的仿面的第一环绕航线的俯视图。
图4是本申请实施例提供的仿面的第二环绕航线的侧视图。
图5是本申请实施例提供的规划多条第一环绕航线时的航线示意图。
图6是本申请实施例提供的规划了2条第二环绕航线时的航线示意图。
图7是本申请实施例提供的规划了三种航线的侧视图。
图8是本申请实施例提供的利用初始模型进行特征点匹配的示意图。
图9是本申请实施例提供的模型获取方法的流程图。
图10是本申请实施例提供的拍摄方法的流程图。
图11是本申请实施例提供的交互界面示意图一。
图12是本申请实施例提供的交互界面示意图二。
图13是本申请实施例提供的拍摄装置的结构示意图。
图14是本申请实施例提供的模型获取装置的结构示意图。
图15是本申请实施例提供的终端设备的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
基于无人机影像的三维重建技术被越来越多的应用在文物、电塔、信号塔、桥梁等物体的精细化建模上。在利用无人机进行建模时,可以控制无人机沿规划的航线飞行,并在飞行过程中对被摄对象拍摄图像,利用拍摄所得的图像可以建立被摄对象的立体模型。目前,无人机在拍摄图像时需要耗费大量的时间,作业效率仍有待提高。
为使建立的被摄对象的模型具有较高的精度,在一种实施方式中,可以规划距离被摄对象表面较近的航线,当无人机沿该航线运动并对被摄对象拍照时,由于无人机与被摄对象的距离较近,因此拍摄的图像可以保留被摄对象表面更多的细节,从而可以确保建立的模型有较高的精度。但也因为无人机与被摄对象的距离较近,拍摄所得的图像能够覆盖的范围较小,为了能够完整覆盖整个被摄对象并同时满足图像之间的重叠度的要求,无人机需要拍摄大量的图像,耗费大量的时间,作业效率较低。
本申请实施例提供了一种拍摄方法,可以参考图1,图1是本申请实施例提供的拍摄方法的流程图,该方法包括以下步骤:
S102、获取被摄对象的位置信息。
S104、基于所述位置信息规划用于对所述被摄对象环绕拍摄的第一环绕航线和第二环绕航线。
S106、控制所述无人机分别沿所述第一环绕航线和所述第二环绕航线运动。
S108、控制所述无人机在运动过程中基于所述无人机搭载的相机对所述被摄对象拍摄以获取所述被摄对象的多张图像。
被摄对象的位置信息可以指示出被摄对象所在的位置。在一个例子中,被摄对象的位置信息可以是被摄对象所在位置的几何坐标。在一种实施方式中,被摄对象的位置信息可以是用户输入或选定的。在一种实施方式中,被摄对象的位置信息可以是无人机通过传感器感知得到,比如无人机可以搭载有雷达,通过雷达探测可以获取被摄对象的位置信息,又比如无人机可以通过视觉定位的方式获取被摄对象的位置信息。
基于被摄对象的位置信息可以规划第一环绕航线和第二环绕航线。这里,第一环绕航线和第二环绕航线可以用于对被摄对象进行环绕拍摄。在一个例子中,第一环绕航线和第二环绕航线可以以被摄对象所在位置为环绕中心。
可以控制无人机分别沿第一环绕航线和第二环绕航线运动,其中,无人机沿所述第一环绕航线运动的过程中可以与被摄对象保持第一距离,无人机沿第二环绕航线运动的过程中与被摄对象保持第二距离,这里,第一距离可以大于第二距离。在一种表述中,第一环绕航线可以包括多个航点,第一环绕航线上航点到被摄对象的距离可以是所述第一距离,在另一种表述中,第一环绕航线对应的拍摄距离可以是所述第一距离。同理,在一种表述中,第二环绕航线可以包括多个航点,第二环绕航线上航点到被摄对象的距离可以是所述第二距离,在另一种表述中,第二环绕航线对应的拍摄距离可以是所述第二距离。
需要说明的是,无人机沿第一环绕航线运动时与被摄对象保持第一距离,是指无人机沿第一环绕航线运动时与被摄对象之间的距离大致维持在第一距离附近,即可以略微大于第一距离,也可以略微小于第一距离。同理,无人机沿第二环绕航线运动时与被摄对象保持第二距离,是指无人机沿第二环绕航线运动时与被摄对象之间的距离大致维持在第二距离附近,即可以略微大于第二距离,也可以略微小于第二距离。
无人机可以搭载有相机,在控制无人机分别沿第一环绕航线和第二环绕航线运动时,可以控制无人机在运动过程利用搭载的相机对被摄对象进行拍照,从而可以获取 被摄对象的多张图像。当无人机沿第一环绕航线或第二环绕航线运动时,在一种实施方式中,可以控制无人机每经过预设的时间间隔拍摄一张或多张图像,在一种实施方式中,可以控制无人机每经过预设的角度或者预设的距离拍摄一张或多张图像。具体在设定拍摄间隔时,可以根据相机的成像范围以及所需的图像重叠度进行设置。
可选的,所述无人机在所述第一环绕航线所述相机采集的图像包括第一图像区域,所述无人机在所述第二环绕航线所述相机采集的图像包括第二图像区域,所述第一图像区域与所述第二图像区域对应所述被摄对象的同一位置。例如,被摄对象是高压输电塔结构,所述第一图像区域包括塔身与电线的搭接处的成像区域,在所述第二图像区域中包括同一成像区域。
可选的,控制所述无人机在运动过程中基于所述无人机搭载的相机对所述被摄对象拍摄以获取所述被摄对象的多张图像,所述无人机在所述第一环绕航线所述相机采集的图像与所述无人机在所述第二环绕航线所述相机采集的图像包括同名像点,包括:所述无人机在所述第一环绕航线的第一位置,所述相机采集第一图像;所述无人机在所述第二环绕航线的第二位置,所述相机采集第二图像;所述第一图像与所述第二图像包括同名像点,所述第一位置和所述第二位置具有预设相对位置关系。例如,所述第一位置和所述第二位置位于所述被摄物体的同一方向上。
拍摄的多张图像中,无人机在第一环绕航线采集的图像与无人机在第二环绕航线采集的图像可以包括被摄对象的相同区域的成像,也就是说,所述无人机在所述第一环绕航线所述相机采集的图像与所述无人机在所述第二环绕航线所述相机采集的图像包括被摄物体的同名像点,从而在利用图像进行三维重建时,无人机在第一环绕航线采集的图像可以与无人机在第二环绕航线采集的图像匹配。
拍摄的多张图像可以用于建立被摄对象的立体模型。这里,建立模型所用的算法可以有多种,比如可以利用多视几何算法建立被摄对象的三维点云模型。被摄对象的立体模型也可以是各种模型,比如可以是点云模型,也可以是网格模型。
本申请实施例提供的拍摄方法,规划了第一环绕航线和第二环绕航线,所述无人机在所述第一环绕航线所述相机采集的图像与所述无人机在所述第二环绕航线所述相机采集的图像包括所述被摄对象的同名像点,因此无人机在第二环绕航线采集的图像可以与无人机在第一环绕航线采集的图像匹配和连接,进而无人机在第二环绕航线采集的图像之间的重叠度可以降低,从而减少了要拍摄的图像数量。并且,第二环绕航线对应的拍摄距离小于第一环绕航线对应的拍摄距离,因此无人机在第二环绕航线采集的图像可以保留被摄对象表面更多的细节,使建立的立体模型有足够的精度。可见, 本申请实施例提供的拍摄方法,可以在确保立体模型的精度满足要求的基础上减少要拍摄的图像数量,提高了无人机的作业效率。
在一种实施方式中,第一环绕航线可以包括多个航点,多个航点可以分布在被摄对象的不同方向,多个航点与被摄对象的距离可以大致相同,且多个航点可以大致位于同一高度。可以参考图2,图2是本申请实施例提供的航线示意图。在一个例子中,第一环绕航线也可以称为水平环绕航线。当控制无人机沿第一环绕航线运动时,无人机可以在某一高度的水平面上环绕被摄对象运动。
在一种实施方式中,第二环绕航线可以包括多条竖直航线段,多条竖直航线段可以分布在被摄对象的不同方向。每一条竖直航线段可以包括分布在不同高度的多个航点,这些航点在水平面上的投影位置可以大致相同,可以参考图2。在一个例子中,第二环绕航线也可以称为竖直环绕航线。当控制无人机沿第二环绕航线运动时,无人机可以分别沿每一条竖直航线段运动,具体的,无人机在沿竖直航线段运动时,可以在高度方向上向上或向下运动。
在一个例子中,无人机在第一环绕航线采集的图像可以完整覆盖被摄对象各个方向的表面。在一个例子中,无人机在某一竖直航线段采集的图像可以覆盖被摄对象在某个方向各个高度的表面。
可以理解的,通过第一环绕航线无人机可以拍摄得到被摄对象在各个方向的图像,通过竖直航线段无人机可以拍摄得到被摄对象在特定方向上多个不同高度的图像,由于无人机在第一环绕航线上拍摄的图像所覆盖的范围包含了无人机在竖直航线段上拍摄的图像所覆盖的范围,因此无人机在竖直航线段上拍摄的图像可以与无人机在第一环绕航线上拍摄的图像很好的匹配,满足重叠度的要求,从而竖直航线段之间的图像重叠度要求可以大大减少(可以低于20%),可以无需规划密集的竖直航线段,减少了无人机拍摄所用的时间,提高了作业效率。并且,由于竖直航线段的分布可以比较稀疏,因此可以适应更多复杂的场景,场景适应能力大大提升。此外,由于采用两种环绕航线进行拍摄,因此对被摄对象的覆盖更为全面,可以避免存在拍摄盲区。
在一种实施方式中,规划的第一环绕航线的形状可以与被摄对象在水平面上的轮廓形状匹配,如此,第一环绕航线上的各个航点到被摄对象对应的表面点的距离可以相同,均为所述第一距离。可以将这种形状与被摄对象在水平面上的轮廓形状匹配的第一环绕航线称为仿面的第一环绕航线。可以参考图3,图3是本申请实施例提供的仿面的第一环绕航线的俯视图。
在一种实施方式中,规划的第二环绕航线中竖直航线段的形状可以与被摄对象在 竖直面上的轮廓形状匹配,如此,竖直航线段上的各个航点到被摄对象对应的表面点的距离可以相同,均为所述第二距离。可以将这种竖直航线段的形状与被摄对象在竖直面上的轮廓形状匹配的第二环绕航线称为仿面的第二环绕航线。可以参考图4,图4是本申请实施例提供的仿面的第二环绕航线的侧视图。
在一种实施方式中,仿面的第一环绕航线可以根据被摄对象的初始模型和设定的第一距离规划得到,仿面的第二环绕航线可以根据被摄对象的初始模型和设定的第二距离规划得到。这里,被摄对象的初始模型可以包括被摄对象表面点的位置信息,因此可以确定航点与被摄对象表面点的距离。
被摄对象的初始模型可以利用被摄对象的多张初级图像进行三维重建得到。在一种实施方式中,若需要规划仿面的第一环绕航线和仿面的第二环绕航线,则需要提前获取被摄对象的初始模型。被摄对象的初级图像可以有多种获取方式,在一个例子中,可以预先通过相机对被摄对象进行拍摄得到多张初级图像;在一个例子中,可以预先通过无人机对被摄对象进行远距离的环绕拍摄,得到多张初级图像。这里,由于初级图像是无人机远距离拍摄得到的,因此建立的初始模型的精度较低,可以称为粗模。
在一种实施方式中,若仅规划仿面的第二环绕航线,第一环绕航线不需要仿面,则被摄对象的初始模型可以利用无人机在第一环绕航线采集的图像进行三维重建得到。可以理解的,为建立被摄对象的初始模型,无人机采集的图像需要完整覆盖被摄对象,因此,在规划第一环绕航线时,可以根据被摄对象的位置信息规划多条对应不同高度的第一环绕航线,可以参考图5,图5是本申请实施例提供的规划多条第一环绕航线时的航线示意图。在规划了多条第一环绕航线后,可以控制无人机分别沿每一条第一环绕航线运动并对被摄对象拍摄图像,从而可以获取到完整覆盖被摄对象的多张初级图像,利用该多张初级图像可以建立被摄对象的初始模型,利用该初始模型可以规划仿面的第二环绕航线。
在上述实施方式中,可以规划拍摄距离较大的多条第一环绕航线,如此,无人机在第一环绕航线上采集的图像可以覆盖较大范围的场景,即便规划的多条第一环绕航线之间存在较大的间隔,航线间的图像重叠度也能够满足要求。而多条第一环绕航线之间有较大的间隔(即比较稀疏),可以提高对复杂场景的适应性,对场景的空旷要求降低。
为了使无人机在第一环绕航线采集的图像可以与无人机在第二环绕航线采集的图像匹配,第一环绕航线对应的第一距离与第二环绕航线对应的第二距离之间的差距应当在合理范围内。在一种实施方式中,第一距离与第二距离可以满足特定的比例关系, 比如第二距离可以是第一距离的1/2,则第一距离可以记为D,第二距离可以记为0.5D。
由于第一距离与第二距离的差距需要限制在合理范围内,因此,若第一环绕航线的拍摄距离较远,则第二环绕航线的拍摄距离虽然比第一环绕航线近,但在距离差距的限制下仍然可能较远,比如第一环绕航线的第一距离是20米,则第二环绕航线的第二距离可能是10米,距离被摄对象仍然较远。在这种情况下,无人机以第二环绕航线采集的图像可能细节不足,导致最终建立的模型的精度达不到要求。针对该问题,在一种实施方式中,可以规划多条对应不同拍摄距离的第二环绕航线。在一个例子中,第一环绕航线与各条第二环绕航线之间的拍摄距离可以满足等比关系,例如规划了2条第二环绕航线,第一环绕航线的拍摄距离可以记为D,则2条第二环绕航线的拍摄距离可以分别是D/2、D/4。可以参考图6,图6是本申请实施例提供的规划了2条第二环绕航线时的航线示意图。
在规划第一环绕航线时,在一种实施方式中,可以根据被摄对象的尺寸信息确定第一环绕航线的拍摄距离(即第一距离)。被摄对象的尺寸信息可以与被摄对象被抽象成的立体图形匹配,例如,若被摄对象被抽象成长方体,则被摄对象的尺寸信息可以包括被摄对象的长、宽和高,若被摄对象被抽象成圆柱体,则被摄对象的尺寸信息可以包括被摄对象的高和底面直径,可以参考图2,图2中的被摄对象被抽象为一个圆柱体。被摄对象的尺寸信息,在一个例子中,可以是用户输入的,在一个例子中,也可以是无人机通过视觉测量等方式测量得到的。
在确定被摄对象的尺寸信息后,在一种实施方式中,可以通过预设的计算公式将被摄对象的尺寸信息转换成第一环绕航线对应的拍摄距离。可以举个例子,例如被摄对象被抽象为一个底面直径为1米的圆柱体,预设的计算公式比如可以是取底面直径的N倍,则第一环绕航线的拍摄距离可以是N米。
在一种实施方式中,本申请实施例描述的航线的拍摄距离、无人机与被摄对象的距离、航点与被摄对象间的距离可以均为水平距离,例如航点与被摄对象的距离可以是航点在水平面上的投影位置到被摄对象在同一水平面上的投影位置的距离。
考虑到用户可能对被摄对象的某些区域有较高的建模要求,例如在信号塔的建模任务中,用户对信号塔上天线的模型要求有较高的精度,因此,在一种实施方式中,可以规划用于对感兴趣区域进行近距离拍摄的第三航线。具体的,可以获取用户选定的被摄对象表面的感兴趣区域,并可以根据该感兴趣区域规划第三航线,从而可以控制无人机沿第三航线运动并在运动过程中拍摄被摄对象的近距离的多张图像。为方便区分,可以将无人机沿第一环绕航线拍摄的图像称为第一图像,沿第二环绕航线拍摄 的图像称为第二图像,沿第三航线拍摄的图像称为第三图像,则在进行三维重建时,可以利用距离最远的第一图像、中等距离的第二图像以及距离最近的第三图像建立被摄对象的高精度模型。
可以参考图7,图7是本申请实施例提供的规划了三种航线的侧视图。其中,规划的航线包括用于远距离对被摄对象拍摄的3条第一环绕航线A、B和C,用于中距离对被摄对象拍摄的2条仿面的第二环绕航线A和B,以及用于近距离对被摄对象拍摄的第三航线。
在一种实施方式中,若规划了多条第二环绕航线,则第一环绕航线、多条第二环绕航线和第三航线的拍摄距离可以满足等比关系,比如,若规划了2条第二环绕航线,第一环绕航线的拍摄距离可以记为D,则2条第二环绕航线的拍摄距离可以分别是D/2、D/4,第三航线的拍摄距离可以是D/8。通过这样的拍摄距离的设置,可以确保无人机在第三航线上采集的图像与无人机在第二环绕航线上采集的图像能够匹配和连接,从而可以顺利建立被摄对象的高精度的立体模型。
在根据选定的感兴趣区域规划第三航线时,在一种实施方式中,可以在距离被摄对象感兴趣区域表面预设距离的位置规划多个航点,并可以根据规划的多个航点规划第三航线。在一个例子中,规划的多个航点可以相对均匀的分布在被摄对象的感兴趣区域上,与被摄对象的表面间隔所述预设距离。在一个例子中,第三航线可以包括多个航线段,航线段之间的旁向重叠度可以大于60%,航线段内的航向重叠度可以大于80%。
关于被摄对象表面的感兴趣区域,在一种实施方式中,可以是用户在已拍摄的被摄对象的图像中选定的,这里,已拍摄的被摄对象的图像可以包括所述的第一图像和第二图像,还可以包括当前拍摄的预览图像。在一种实施方式中,感兴趣区域可以是用户在被摄对象的初始模型上选定的。
在利用拍摄的多张图像建立被摄对象的立体模型时,在一种实施方式中,可以对多张图像进行特征点匹配,根据特征点匹配的结果进行三维重建,可以得到被摄对象的立体模型。这里,用于三维重建的多张图像可以包括无人机在(各个)第一环绕航线采集的图像和在(各个)第二环绕航线采集的图像,在其他例子中,还可以包括无人机在第三航线上采集的图像。
可以用第一图像指代多张图像中的任一图像,在对多张图像进行特征点匹配时,在一种实施方式中,可以将第一图像与第一图像以外的其它各张图像分别进行特征点匹配。这种方式虽然可以确定出与第一图像匹配的图像,但计算量大,重建效率低。 在一种实施方式中,可以利用被摄对象的初始模型进行多张图像之间的特征点匹配。具体的,可以利用所述初始模型从多张图像中确定候选图像,将第一图像与候选图像进行特征点匹配。
在利用初始模型确定候选图像时,在一种实施方式中,可以获取第一图像对应的相机位姿信息,从多张图像中筛选出相机位姿信息与第一图像的相机位姿信息匹配的多张待定图像,利用初始模型从多张待定图像中确定候选图像。相机位姿信息可以是图像中携带的信息,其可以指示出在拍摄该图像时相机的位置和姿态。可以理解的,拍摄时相机的位姿越接近,拍摄的图像之间的相似度越高,匹配的特征点也越多,因此可以根据相机位姿信息对多张图像进行筛选,筛选出多张所述待定图像。
在从多张待定图像中确定候选图像时,可以利用被摄对象的初始模型。在一种实施方式中,可以将第一图像的特征点投影到初始模型上,并可以对每一张待定图像分别进行以下操作:根据待定图像的相机位姿信息将特征点从初始模型反投影至该待定图像所在的平面,并统计落在待定图像内的特征点的数量。在统计得到每一张待定图像包含的特征点数量后,可以根据各个待定图像对应的特征点的数量确定候选图像。这里,在一个例子中,可以将包含的特征点数量最多的N张待定图像确定为候选图像,N可以是大于0的自然数。
在确定候选图像后,可以将第一图像与候选图像进行特征点匹配。在一种实施方式中,可以将第一图像中的特征点与候选图像中的目标范围内的特征点进行匹配,目标范围可以是第一图像中的特征点从初始模型反投影到候选图像所在的平面后,该特征点的落点所在的范围。可以参考图8,图8是本申请实施例提供的利用初始模型进行特征点匹配的示意图。第一图像中的特征点x投影到初始模型后可以得到三维点p,三维点p反投影至候选图像所在的平面后的落点可以是xp,在将第一图像与候选图像进行特征点匹配时,可以将特征点x与候选图像中落点xp所在范围内的特征点进行匹配,提高了匹配效率和匹配成功率,使三维重建过程更加鲁棒。这里,落点所在的范围可以是距离落点位置特定距离的范围。
本申请实施例提供的拍摄方法,规划了第一环绕航线和第二环绕航线,所述无人机在所述第一环绕航线所述相机采集的图像与所述无人机在所述第二环绕航线所述相机采集的图像包括同名像点,因此无人机在第二环绕航线采集的图像可以与无人机在第一环绕航线采集的图像匹配和连接,进而无人机在第二环绕航线采集的图像之间的重叠度可以降低,从而减少了要拍摄的图像数量。并且,第二环绕航线对应的拍摄距离小于第一环绕航线对应的拍摄距离,因此无人机在第二环绕航线采集的图像可以保 留被摄对象表面更多的细节,使建立的立体模型有足够的精度。可见,本申请实施例提供的拍摄方法,可以在确保立体模型的精度满足要求的基础上减少要拍摄的图像数量,提高了无人机的作业效率。
下面可以参考图9,图9是本申请实施例提供的模型获取方法的流程图。该方法可以包括以下步骤:
S902、控制所述无人机沿第一环绕航线运动,并在运动过程中对所述被摄对象拍摄以获取所述被摄对象的多张初级图像。
其中,所述第一环绕航线用于对所述被摄对象进行环绕拍摄。
S904、基于所述初级图像建立所述被摄对象的初始模型。
所述初始模型包括所述被摄对象表面点的位置信息。
S906、基于所述位置信息和预设距离规划第二航线。
所述第二航线包括多个航点,多个所述航点与所述被摄对象表面的距离大致相等。
S908、控制所述无人机沿所述第二航线运动,并在运动过程中对所述被摄对象拍摄以获取所述被摄对象的多张补充图像。
S910、基于所述补充图像优化所述被摄对象的初始模型。
上述步骤中,S902和S904可以参考前文中的相关说明,在此不再赘述。
在建立被摄对象的初始模型后,初始模型包括被摄对象表面点的位置信息,因此可以根据被摄对象表面点的位置信息以及预设距离规划第二航线,规划的第二航线上,各个航点到被摄对象表面的距离可以大致相等,均为所述预设距离。
可以理解的,预设距离在一种表述中可以是第二航线上航点到被摄对象的距离,在一种表述中可以是第二航线的拍摄距离。
可以控制无人机沿第二航线运动并对被摄对象进行拍摄,可以得到被摄对象对应的多张补充图像。这里,在一种实施方式中,所述预设距离可以小于第一环绕航线上航点与被摄对象的距离,从而当无人机沿第二航线飞行时,可以以更近的距离对被摄对象进行拍摄,拍摄所得的补充图像可以提高初始模型的精度。
在基于补充图像优化被摄对象的初始模型时,在一种实施方式中,可以利用多张补充图像以及无人机在第一环绕航线采集的多张第一图像进行三维重建,重建得到精度高于初始模型的优化模型。
本申请实施例提供的模型获取方法,可以利用无人机沿第一环绕航线运动时拍摄的多张初级图像建立被摄对象的初始模型,并可以基于初始模型包括的被摄对象表面点的位置信息和预设距离规划第二航线,提高了第二航线的精确度,可以使第二航线 上航点到被摄对象表面的距离大致相等。并且,无人机沿第二航线运动所拍摄的多张补偿图像可以用于优化被摄对象的初始模型,从而可以提高初始模型的质量。
在一种实施方式中,可以基于被摄对象的位置信息和多个预设距离规划多条第二航线。这里,第二航线可以对应前文所述的第二环绕航线,其可以包括多条竖直航线段,多条竖直航线段可以分布在被摄对象的不同方向,每一竖直航线段可以用于引导所述无人机在高度方向向上或者向下运动。
在一种实施方式中,规划的多条第二航线各自对应的预设距离(拍摄距离)可以满足等比关系。例如可以规划3条第二航线,第一环绕航线的拍摄距离可以记为D,则3条第二航线的拍摄距离可以分别是D/2、D/4、D/8。
在一种实施方式中,规划的第二航线可以用于对感兴趣区域进行近距离拍摄,此时,规划的第二航线可以对应前文中的第三航线。具体的,可以获取用户选定的感兴趣区域,从而可以根据被摄对象表面点的位置信息、所述感兴趣区域以及预设距离规划第二航线。
可选的,所述基于所述补充图像优化所述被摄对象的初始模型,包括:
对多张所述补充图像进行特征点匹配,并根据所述特征点匹配的结果优化所述被摄对象的初始模型。
可选的,所述对多张所述补充图像进行特征点匹配,包括:
利用所述初始模型进行多张所述补充图像之间的特征点匹配。
可选的,所述利用所述初始模型进行多张所述补充图像之间的特征点匹配,包括:
利用所述初始模型从所述多张补充图像中确定用于与第一补充图像进行特征点匹配的候选补充图像,所述第一补充图像是所述多张补充图像中的任一补充图像;
将所述第一补充图像与所述候选补充图像进行特征点匹配。
可选的,所述利用所述初始模型从所述多张补充图像中确定用于与第一补充图像进行特征点匹配的候选补充图像,包括:
从所述多张补充图像中筛选相机位姿信息与所述第一补充图像对应的相机位姿信息相匹配的多张待定补充图像;
利用所述初始模型从所述多张待定补充图像中确定所述候选补充图像。
可选的,所述利用所述初始模型从所述多张待定补充图像中确定所述候选补充图像,包括:
将所述第一补充图像的特征点投影至所述初始模型;
对每张所述待定补充图像,根据所述待定补充图像对应的相机位姿信息将所述特 征点从所述初始模型反投影至所述待定补充图像所在的平面,并确定位于所述待定补充图像内的特征点的数量;
根据各个所述待定补充图像对应的所述特征点的数量确定所述候选补充图像。
以上所提供的模型获取方法的各种实施方式,其具体实现可以参考前文中的相关说明,在此不再赘述。
本申请实施例提供的模型获取方法,可以利用无人机沿第一环绕航线运动时拍摄的多张初级图像建立被摄对象的初始模型,并可以基于初始模型包括的被摄对象表面点的位置信息和预设距离规划第二航线,提高了第二航线的精确度,可以使第二航线上航点到被摄对象表面的距离大致相等。并且,无人机沿第二航线运动所拍摄的多张补偿图像可以用于优化被摄对象的初始模型,从而可以提高初始模型的质量。
可以参考图10,图10是本申请实施例提供的拍摄方法的流程图,该方法可以包括以下步骤:
S1002、获取被摄对象的位置信息。
S1004、基于所述位置信息规划用于对所述被摄对象环绕拍摄的第一环绕航线。
S1006、控制所述无人机沿所述第一环绕航线运动,并在运动过程中对所述被摄对象进行拍摄,以获取所述被摄对象的多张第一图像。
S1008、基于多张所述第一图像和所述位置信息规划用于对所述被摄对象环绕拍摄的第二环绕航线。
S1010、控制所述无人机沿所述第二环绕航线运动,并在运动过程中对所述被摄对象进行拍摄,以获取所述被摄对象的多张第二图像。
所述第一图像和所述第二图像用于建立所述被摄对象的立体模型。
关于S1002、S1004和S1006,可以参考前文中的相关说明,在此不再赘述。
在获取到无人机沿第一环绕航线运动时拍摄的被摄对象的多张第一图像后,可以根据被摄对象的位置信息以及多张第一图像规划第二环绕航线。这里,在一个例子中,被摄对象的位置信息可以用于确定第二环绕航线的环绕中心。在一个例子中,多张第一图像可以用于确定第二环绕航线上航点与被摄对象的距离。
在一种实施方式中,规划的第二环绕航线对应的拍摄距离可以小于第一环绕航线对应的拍摄距离,即第二环绕航线上航点与被摄对象的距离可以小于第一环绕航线上航点与被摄对象的距离。
在根据多张第一图像确定第二环绕航线上航点与被摄对象的距离时,在一种实施方式中,可以控制无人机与被摄对象保持测试距离,该测试距离可以是小于第一环绕 航线上航点与被摄对象之间的距离的任意距离,可以控制无人机在所述测试距离对被摄对象进行拍摄,得到测试图像,可以将该测试图像与无人机沿第一环绕航线运动时拍摄的多张第一图像进行相似度匹配,若匹配得到的相似度不满足条件,则可以对测试距离进行调整,若匹配得到的相似度满足条件,则可以将调整完成的测试距离确定为第二环绕航线上的航点与被摄对象的距离。这里,在一种实施方式中,匹配得到的相似度可以是测试图像与多张第一图像分别进行相似度匹配后得到的最高相似度。
如前所述,第二环绕航线的拍摄距离可以小于第一环绕航线的拍摄距离,而测试距离作为第二环绕航线的拍摄距离的尝试值,其可以小于第一环绕航线的拍摄距离,即可以小于第一环绕航线上航点与被摄对象之间的距离。
需要说明的是,无人机在第二环绕航线上拍摄的图像与无人机在第一环绕航线上拍摄的图像之间需要满足一定的相似度,以便于这些图像在用于三维重建时可以很好的匹配,避免出现图像最终不能连接的问题。但无人机在第二环绕航线上拍摄的图像与无人机在第一环绕航线上拍摄的图像之间的相似度也不宜过高,因为相似度越高意味着第二环绕航线的拍摄距离与第一环绕航线的拍摄距离越接近,则无人机在第二环绕航线上拍摄的图像对模型精度的提高越有限,最终可能导致建立的模型精度不足。又或者,为了使所建立模型的精度达到要求,需要规划更多对应不同拍摄距离的航线,从而大大增加了无人机拍摄的工作量,拍摄效率大大降低。
针对上述问题,在一种实施方式中,在将该测试图像与多张第一图像进行相似度匹配后,若匹配得出的相似度小于相似度下限,即可以增加测试距离,使第二环绕航线的拍摄距离与第一环绕航线的拍摄距离接近一点,以确保无人机在第二环绕航线上拍摄的图像与无人机在第一环绕航线上拍摄的图像可以连接。在一种实施方式中,若匹配得出的相似度大于相似度上限,则意味着第二环绕航线的拍摄距离与第一环绕航线的拍摄距离过于接近,可以减少测试距离,以使无人机在第二环绕航线上拍摄的图像可以对模型精度的提高提供更多的贡献。
考虑到无人机沿第一环绕航线运动时拍摄的图像数量较多,若测试图像分别与每一张第一图形进行相似度匹配,则需要耗费较多的计算资源,也降低了计算效率,因此,在一种实施方式中,可以获取测试图像对应的相机位姿信息,根据测试图像对应的相机位姿信息对多张第一图像进行筛选,筛选出相机位姿信息与测试图像对应的相机位姿信息相匹配的第一图像,可以将测试图像与筛选出的第一图像进行相似度匹配。这里,相机位姿信息可以是测试图像携带的信息,在一个例子中,相机位姿信息可以是无人机或者相机上的惯性测量单元测量得到的。由于两张图像的相机位姿信息匹配 意味着这两张图像的拍摄角度大致相同,图像之间的相似度较高,因此测试图像可以与筛选出的第一图像进行相似度匹配,从而提高了匹配效率。在一种实施方式中,也可以通过图像检索算法对多张第一图像进行筛选,从而也可以筛选出数量较少或单张第一图像用于与测试图像进行相似度匹配。
在一种实施方式中,当用户控制无人机以测试距离对被摄对象进行拍摄时,若拍摄所得的测试图像与第一图像的匹配结果不满足条件,例如所述相似度大于相似度上限或者小于相似度下限,则可以向用户反馈相应的匹配结果,以指导用户对测试距离进行调整。可以参考图11和图12,在一个例子中,若匹配结果不满足条件,则可以在终端的显示界面上显示用于表示当前的测试距离不合适的信息,如图11中的BAD,若匹配结果满足条件,则可以在终端的显示界面上显示用于表示当前的测试距离合适的信息,如图12中的GOOD。
在将测试图像与第一图像进行相似度匹配时,可以有多种方式。在一种实施方式中,可以对测试图像和第一图像分别进行特征提取,提取出的特征可以是一个高维度的特征向量,则可以利用测试图像对应的特征向量与第一图像对应的特征向量计算测试图像和第一图像之间的相似度,例如,相似度可以是测试图像和第一图像的特征向量之间的夹角,也可以是测试图像和第一图像的特征向量之间的距离。
本申请实施例提供的拍摄方法,可以根据无人机沿第一环绕航线运动时拍摄的多张第一图像规划第二环绕航线,从而可以确保无人机在第二环绕航线拍摄的图像可以与无人机在第一环绕航线拍摄的图像匹配,避免三维重建时发生图像不能连接的问题。
下面可以参考图13,图13是本申请实施例提供的拍摄装置的结构示意图。本申请实施例提供的拍摄装置包括:处理器1310和存储有计算机程序的存储器1320。在一种实施方式中,所述处理器在执行所述计算机程序实现以下步骤:
获取被摄对象的位置信息;
基于所述位置信息规划用于对所述被摄对象环绕拍摄的第一环绕航线和第二环绕航线;
控制所述无人机分别沿所述第一环绕航线和所述第二环绕航线运动;其中,所述无人机沿所述第一环绕航线运动的过程中与所述被摄对象保持第一距离,所述无人机沿所述第二环绕航线运动的过程中与所述被摄对象保持第二距离,所述第一距离大于所述第二距离;
控制所述无人机在运动过程中基于所述无人机搭载的相机对所述被摄对象拍摄以获取所述被摄对象的多张图像,所述无人机在所述第一环绕航线所述相机采集的图像 与所述无人机在所述第二环绕航线所述相机采集的图像包括同名像点,多张所述图像用于建立所述被摄对象的立体模型。
可选的,所述第一环绕航线包括多个航点,多个所述航点分布在所述被摄对象的不同方向,与所述被摄对象的距离大致相同,且大致位于同一高度,所述第一环绕航线用于引导无人机在水平面上环绕所述被摄对象运动。
可选的,所述第一环绕航线上的各个航点到所述被摄对象表面点的距离为所述第一距离,所述第一环绕航线的形状与所述被摄对象在水平面上的轮廓形状匹配。
可选的,所述第二环绕航线包括多条竖直航线段,多条所述竖直航线段分布在所述被摄对象的不同方向,每一所述竖直航线段用于引导所述无人机在高度方向向上或者向下运动。
可选的,所述竖直航线段上的各个航点到所述被摄对象表面点的距离为所述第二距离,所述竖直航线段的形状与所述被摄对象在竖直面上的轮廓形状匹配。
可选的,所述被摄对象表面点的位置信息是根据所述被摄对象的初始模型确定的,所述初始模型是预先基于所述被摄对象的多张初级图像建立的。
可选的,所述处理器还用于:
根据选定的所述被摄对象表面的感兴趣区域规划第三航线;
控制所述无人机沿所述第三航线运动,并在运动过程中对所述被摄对象拍摄。
可选的,所述第三航线上的航点与所述被摄对象之间的距离小于所述第二距离。
可选的,所述处理器利用所述多张图像建立所述被摄对象的立体模型时用于:
对所述多张图像进行特征点匹配,并根据所述特征点匹配的结果进行三维重建,得到所述被摄对象的立体模型。
可选的,所述处理器对所述多张图像进行特征点匹配时用于:
利用所述被摄对象的初始模型进行所述多张图像之间的特征点匹配,所述初始模型是预先基于所述被摄对象的多张初级图像建立的。
可选的,所述处理器利用所述被摄对象的初始模型进行所述多张图像之间的特征点匹配时用于:
利用所述被摄对象的初始模型从所述多张图像中确定用于与第一图像进行特征点匹配的候选图像,所述第一图像是所述多张图像中的任一图像;
将所述第一图像与所述候选图像进行特征点匹配。
可选的,所述处理器利用所述被摄对象的初始模型从所述多张图像中确定用于与第一图像进行特征点匹配的候选图像时用于:
从所述多张图像中筛选相机位姿信息与所述第一图像对应的相机位姿信息相匹配的多张待定图像;
利用所述初始模型从所述多张待定图像中确定所述候选图像。
可选的,所述处理器利用所述初始模型从所述多张待定图像中确定所述候选图像时用于:
将所述第一图像的特征点投影至所述初始模型;
对每张所述待定图像,根据所述待定图像对应的相机位姿信息将所述特征点从所述初始模型反投影至所述待定图像所在的平面,并确定位于所述待定图像内的特征点的数量;
根据各个所述待定图像对应的所述特征点的数量确定所述候选图像。
以上提供的拍摄装置的各种实施方式,其具体实现可以参考前文中的相关说明,在此不再赘述。
本申请实施例提供的拍摄装置,规划了第一环绕航线和第二环绕航线,由于所述无人机在所述第一环绕航线所述相机采集的图像与所述无人机在所述第二环绕航线所述相机采集的图像包括同名像点,因此无人机在第二环绕航线采集的图像可以与无人机在第一环绕航线采集的图像匹配和连接,进而无人机在第二环绕航线采集的图像之间的重叠度可以降低,从而减少了要拍摄的图像数量。并且,第二环绕航线对应的拍摄距离小于第一环绕航线对应的拍摄距离,因此无人机在第二环绕航线采集的图像可以保留被摄对象表面更多的细节,使建立的立体模型有足够的精度。可见,本申请实施例提供的拍摄装置,可以在确保立体模型的精度满足要求的基础上减少要拍摄的图像数量,提高了无人机的作业效率。
下面可以参考图14,图14是本申请实施例提供的模型获取装置的结构示意图。本申请实施例提供的拍摄装置包括:处理器1410和存储有计算机程序的存储器1420,所述处理器在执行所述计算机程序实现以下步骤:
控制所述无人机沿第一环绕航线运动,并在运动过程中对所述被摄对象拍摄以获取所述被摄对象的多张初级图像,所述第一环绕航线用于对所述被摄对象进行环绕拍摄;
基于所述初级图像建立所述被摄对象的初始模型,所述初始模型包括所述被摄对象表面点的位置信息;
基于所述位置信息和预设距离规划第二航线,所述第二航线包括多个航点,多个所述航点与所述被摄对象表面的距离大致相等;
控制所述无人机沿所述第二航线运动,并在运动过程中对所述被摄对象拍摄以获取所述被摄对象的多张补充图像;
基于所述补充图像优化所述被摄对象的初始模型。
可选的,所述预设距离小于所述第一环绕航线上的航点与所述被摄对象之间的距离。
可选的,所述处理器基于所述位置信息和预设距离规划第二航线时用于:
基于所述位置信息和多个预设距离规划多个第二航线。
可选的,多个所述预设距离满足等比关系。
可选的,所述处理器基于所述位置信息和预设距离规划第二航线时用于:
基于所述位置信息、从所述初始模型上选定的感兴趣区域和预设距离规划第二航线,所述第二航线上的航点分布在所述感兴趣区域内。
可选的,所述第一环绕航线包括多个航点,多个所述航点分布在所述被摄对象的不同方向,与所述被摄对象的距离大致相同,且大致位于同一高度,所述第一环绕航线用于引导无人机在水平面上环绕所述被摄对象运动。
可选的,所述第二航线包括多条竖直航线段,多条所述竖直航线段分布在所述被摄对象的不同方向,每一所述竖直航线段用于引导所述无人机在高度方向向上或者向下运动。
可选的,所述处理器基于所述补充图像优化所述被摄对象的初始模型时用于:
对多张所述补充图像进行特征点匹配,并根据所述特征点匹配的结果优化所述被摄对象的初始模型。
可选的,所述处理器对多张所述补充图像进行特征点匹配时用于:
利用所述初始模型进行多张所述补充图像之间的特征点匹配。
可选的,所述处理器利用所述初始模型进行多张所述补充图像之间的特征点匹配时用于:
利用所述初始模型从所述多张补充图像中确定用于与第一补充图像进行特征点匹配的候选补充图像,所述第一补充图像是所述多张补充图像中的任一补充图像;
将所述第一补充图像与所述候选补充图像进行特征点匹配。
可选的,所述处理器利用所述初始模型从所述多张补充图像中确定用于与第一补充图像进行特征点匹配的候选补充图像时用于:
从所述多张补充图像中筛选相机位姿信息与所述第一补充图像对应的相机位姿信息相匹配的多张待定补充图像;
利用所述初始模型从所述多张待定补充图像中确定所述候选补充图像。
可选的,所述处理器利用所述初始模型从所述多张待定补充图像中确定所述候选补充图像时用于:
将所述第一补充图像的特征点投影至所述初始模型;
对每张所述待定补充图像,根据所述待定补充图像对应的相机位姿信息将所述特征点从所述初始模型反投影至所述待定补充图像所在的平面,并确定位于所述待定补充图像内的特征点的数量;
根据各个所述待定补充图像对应的所述特征点的数量确定所述候选补充图像。
以上所提供的模型获取装置的各种实施方式,其具体实现可以参考前文中的相关说明,在此不再赘述。
本申请实施例提供的模型获取装置,可以利用无人机沿第一环绕航线运动时拍摄的多张初级图像建立被摄对象的初始模型,并可以基于初始模型包括的被摄对象表面点的位置信息和预设距离规划第二航线,提高了第二航线的精确度,可以使第二航线上航点到被摄对象表面的距离大致相等。并且,无人机沿第二航线运动所拍摄的多张补偿图像可以用于优化被摄对象的初始模型,从而可以提高初始模型的质量。
本申请实施例还提供了一种拍摄装置,其结构可以参考图13,该装置的处理器在执行存储器存储的计算机程序时实现以下步骤:
获取被摄对象的位置信息;
基于所述位置信息规划用于对所述被摄对象环绕拍摄的第一环绕航线;
控制所述无人机沿所述第一环绕航线运动,并在运动过程中对所述被摄对象进行拍摄,以获取所述被摄对象的多张第一图像;
基于多张所述第一图像和所述位置信息规划用于对所述被摄对象环绕拍摄的第二环绕航线;
控制所述无人机沿所述第二环绕航线运动,并在运动过程中对所述被摄对象进行拍摄,以获取所述被摄对象的多张第二图像,所述第一图像和所述第二图像用于建立所述被摄对象的立体模型。
可选的,所述第二环绕航线上的航点与所述被摄对象的距离是根据多张所述第一图像确定的。
可选的,所述处理器根据多张所述第一图像确定所述第二环绕航线上的航点与所述被摄对象的距离时用于:
控制所述无人机与所述被摄对象保持测试距离并对所述被摄对象拍摄,得到测试 图像,所述测试距离小于所述第一环绕航线上的航点与所述被摄对象的距离;
将所述测试图像与多张所述第一图像进行相似度匹配,并根据匹配得到的相似度对所述测试距离进行调整;
将调整完成的测试距离确定为所述第二环绕航线上的航点与所述被摄对象的距离。
可选的,所述处理器根据匹配得到的相似度对所述测试距离进行调整时用于:
若所述匹配得到的相似度小于相似度下限,增加所述测试距离。
可选的,所述处理器根据匹配得到的相似度对所述测试距离进行调整时用于:
若所述匹配得到的相似度大于相似度上限,减少所述测试距离。
可选的,所述处理器将所述测试图像与多张所述第一图像进行相似度匹配时用于:
从多张所述第一图像中筛选出相机位姿信息与所述测试图像对应的相机位姿信息匹配的第一图像;
将所述测试图像与筛选出的第一图像进行相似度匹配。
以上所提供的拍摄装置的各种实施方式,其具体实现可以参考前文中的相关说明,在此不再赘述。
本申请实施例提供的拍摄装置,可以根据无人机沿第一环绕航线运动时拍摄的多张第一图像规划第二环绕航线,从而可以确保无人机在第二环绕航线拍摄的图像可以与无人机在第一环绕航线拍摄的图像匹配,避免三维重建时发生图像不能连接的问题。
下面可以参考图15,图15是本申请实施例提供的终端设备的结构示意图。该终端设备可以包括:
通信模块1510,用于与无人机建立连接;
处理器1520和存储有计算机程序的存储器1530.
在一种实施方式中,所述处理器在执行所述计算机程序实现以下步骤:
获取被摄对象的位置信息;
基于所述位置信息规划用于对所述被摄对象环绕拍摄的第一环绕航线和第二环绕航线;
控制所述无人机分别沿所述第一环绕航线和所述第二环绕航线运动;其中,所述无人机沿所述第一环绕航线运动的过程中与所述被摄对象保持第一距离,所述无人机沿所述第二环绕航线运动的过程中与所述被摄对象保持第二距离,所述第一距离大于所述第二距离;
控制所述无人机在运动过程中基于所述无人机搭载的相机对所述被摄对象拍摄以 获取所述被摄对象的多张图像,所述无人机在所述第一环绕航线所述相机采集的图像与所述无人机在所述第二环绕航线所述相机采集的图像包括同名像点,多张所述图像用于建立所述被摄对象的立体模型。
可选的,所述第一环绕航线包括多个航点,多个所述航点分布在所述被摄对象的不同方向,与所述被摄对象的距离大致相同,且大致位于同一高度,所述第一环绕航线用于引导无人机在水平面上环绕所述被摄对象运动。
可选的,所述第一环绕航线上的各个航点到所述被摄对象表面点的距离为所述第一距离,所述第一环绕航线的形状与所述被摄对象在水平面上的轮廓形状匹配。
可选的,所述第二环绕航线包括多条竖直航线段,多条所述竖直航线段分布在所述被摄对象的不同方向,每一所述竖直航线段用于引导所述无人机在高度方向向上或者向下运动。
可选的,所述竖直航线段上的各个航点到所述被摄对象表面点的距离为所述第二距离,所述竖直航线段的形状与所述被摄对象在竖直面上的轮廓形状匹配。
可选的,所述被摄对象表面点的位置信息是根据所述被摄对象的初始模型确定的,所述初始模型是预先基于所述被摄对象的多张初级图像建立的。
可选的,所述处理器还用于:
根据选定的所述被摄对象表面的感兴趣区域规划第三航线;
控制所述无人机沿所述第三航线运动,并在运动过程中对所述被摄对象拍摄。
可选的,所述第三航线上的航点与所述被摄对象之间的距离小于所述第二距离。
可选的,所述处理器利用所述多张图像建立所述被摄对象的立体模型时用于:
对所述多张图像进行特征点匹配,并根据所述特征点匹配的结果进行三维重建,得到所述被摄对象的立体模型。
可选的,所述处理器对所述多张图像进行特征点匹配时用于:
利用所述被摄对象的初始模型进行所述多张图像之间的特征点匹配,所述初始模型是预先基于所述被摄对象的多张初级图像建立的。
可选的,所述处理器利用所述被摄对象的初始模型进行所述多张图像之间的特征点匹配时用于:
利用所述被摄对象的初始模型从所述多张图像中确定用于与第一图像进行特征点匹配的候选图像,所述第一图像是所述多张图像中的任一图像;
将所述第一图像与所述候选图像进行特征点匹配。
可选的,所述处理器利用所述被摄对象的初始模型从所述多张图像中确定用于与 第一图像进行特征点匹配的候选图像时用于:
从所述多张图像中筛选相机位姿信息与所述第一图像对应的相机位姿信息相匹配的多张待定图像;
利用所述初始模型从所述多张待定图像中确定所述候选图像。
可选的,所述处理器利用所述初始模型从所述多张待定图像中确定所述候选图像时用于:
将所述第一图像的特征点投影至所述初始模型;
对每张所述待定图像,根据所述待定图像对应的相机位姿信息将所述特征点从所述初始模型反投影至所述待定图像所在的平面,并确定位于所述待定图像内的特征点的数量;
根据各个所述待定图像对应的所述特征点的数量确定所述候选图像。
以上提供的终端设备的各种实施方式,其具体实现可以参考前文中的相关说明,在此不再赘述。
本申请实施例提供的终端设备,规划了第一环绕航线和第二环绕航线,由于所述无人机在所述第一环绕航线所述相机采集的图像与所述无人机在所述第二环绕航线所述相机采集的图像包括同名像点,因此无人机在第二环绕航线采集的图像可以与无人机在第一环绕航线采集的图像匹配和连接,进而无人机在第二环绕航线采集的图像之间的重叠度可以降低,从而减少了要拍摄的图像数量。并且,第二环绕航线对应的拍摄距离小于第一环绕航线对应的拍摄距离,因此无人机在第二环绕航线采集的图像可以保留被摄对象表面更多的细节,使建立的立体模型有足够的精度。可见,本申请实施例提供的终端设备,可以在确保立体模型的精度满足要求的基础上减少要拍摄的图像数量,提高了无人机的作业效率。
本申请实施例还提供了一种终端设备,其结构可以参考图15,该终端设备中的处理器在执行计算机程序实现以下步骤:
控制所述无人机沿第一环绕航线运动,并在运动过程中对所述被摄对象拍摄以获取所述被摄对象的多张初级图像,所述第一环绕航线用于对所述被摄对象进行环绕拍摄;
基于所述初级图像建立所述被摄对象的初始模型,所述初始模型包括所述被摄对象表面点的位置信息;
基于所述位置信息和预设距离规划第二航线,所述第二航线包括多个航点,多个所述航点与所述被摄对象表面的距离大致相等;
控制所述无人机沿所述第二航线运动,并在运动过程中对所述被摄对象拍摄以获取所述被摄对象的多张补充图像;
基于所述补充图像优化所述被摄对象的初始模型。
可选的,所述预设距离小于所述第一环绕航线上的航点与所述被摄对象之间的距离。
可选的,所述处理器基于所述位置信息和预设距离规划第二航线时用于:
基于所述位置信息和多个预设距离规划多个第二航线。
可选的,多个所述预设距离满足等比关系。
可选的,所述处理器基于所述位置信息和预设距离规划第二航线时用于:
基于所述位置信息、从所述初始模型上选定的感兴趣区域和预设距离规划第二航线,所述第二航线上的航点分布在所述感兴趣区域内。
可选的,所述第一环绕航线包括多个航点,多个所述航点分布在所述被摄对象的不同方向,与所述被摄对象的距离大致相同,且大致位于同一高度,所述第一环绕航线用于引导无人机在水平面上环绕所述被摄对象运动。
可选的,所述第二航线包括多条竖直航线段,多条所述竖直航线段分布在所述被摄对象的不同方向,每一所述竖直航线段用于引导所述无人机在高度方向向上或者向下运动。
可选的,所述处理器基于所述补充图像优化所述被摄对象的初始模型时用于:
对多张所述补充图像进行特征点匹配,并根据所述特征点匹配的结果优化所述被摄对象的初始模型。
可选的,所述处理器对多张所述补充图像进行特征点匹配时用于:
利用所述初始模型进行多张所述补充图像之间的特征点匹配。
可选的,所述处理器利用所述初始模型进行多张所述补充图像之间的特征点匹配时用于:
利用所述初始模型从所述多张补充图像中确定用于与第一补充图像进行特征点匹配的候选补充图像,所述第一补充图像是所述多张补充图像中的任一补充图像;
将所述第一补充图像与所述候选补充图像进行特征点匹配。
可选的,所述处理器利用所述初始模型从所述多张补充图像中确定用于与第一补充图像进行特征点匹配的候选补充图像时用于:
从所述多张补充图像中筛选相机位姿信息与所述第一补充图像对应的相机位姿信息相匹配的多张待定补充图像;
利用所述初始模型从所述多张待定补充图像中确定所述候选补充图像。
可选的,所述处理器利用所述初始模型从所述多张待定补充图像中确定所述候选补充图像时用于:
将所述第一补充图像的特征点投影至所述初始模型;
对每张所述待定补充图像,根据所述待定补充图像对应的相机位姿信息将所述特征点从所述初始模型反投影至所述待定补充图像所在的平面,并确定位于所述待定补充图像内的特征点的数量;
根据各个所述待定补充图像对应的所述特征点的数量确定所述候选补充图像。
以上所提供的终端设备的各种实施方式,其具体实现可以参考前文中的相关说明,在此不再赘述。
本申请实施例提供的终端设备,可以利用无人机沿第一环绕航线运动时拍摄的多张初级图像建立被摄对象的初始模型,并可以基于初始模型包括的被摄对象表面点的位置信息和预设距离规划第二航线,提高了第二航线的精确度,可以使第二航线上航点到被摄对象表面的距离大致相等。并且,无人机沿第二航线运动所拍摄的多张补偿图像可以用于优化被摄对象的初始模型,从而可以提高初始模型的质量。
本申请实施例还提供了一种终端设备,其结构可以参考图15,该终端设备中的处理器在执行计算机程序实现以下步骤:
获取被摄对象的位置信息;
基于所述位置信息规划用于对所述被摄对象环绕拍摄的第一环绕航线;
控制所述无人机沿所述第一环绕航线运动,并在运动过程中对所述被摄对象进行拍摄,以获取所述被摄对象的多张第一图像;
基于多张所述第一图像和所述位置信息规划用于对所述被摄对象环绕拍摄的第二环绕航线;
控制所述无人机沿所述第二环绕航线运动,并在运动过程中对所述被摄对象进行拍摄,以获取所述被摄对象的多张第二图像,所述第一图像和所述第二图像用于建立所述被摄对象的立体模型。
可选的,所述第二环绕航线上的航点与所述被摄对象的距离是根据多张所述第一图像确定的。
可选的,所述处理器根据多张所述第一图像确定所述第二环绕航线上的航点与所述被摄对象的距离时用于:
控制所述无人机与所述被摄对象保持测试距离并对所述被摄对象拍摄,得到测试 图像,所述测试距离小于所述第一环绕航线上的航点与所述被摄对象的距离;
将所述测试图像与多张所述第一图像进行相似度匹配,并根据匹配得到的相似度对所述测试距离进行调整;
将调整完成的测试距离确定为所述第二环绕航线上的航点与所述被摄对象的距离。
可选的,所述处理器根据匹配得到的相似度对所述测试距离进行调整时用于:
若所述匹配得到的相似度小于相似度下限,增加所述测试距离。
可选的,所述处理器根据匹配得到的相似度对所述测试距离进行调整时用于:
若所述匹配得到的相似度大于相似度上限,减少所述测试距离。
可选的,所述处理器将所述测试图像与多张所述第一图像进行相似度匹配时用于:
从多张所述第一图像中筛选出相机位姿信息与所述测试图像对应的相机位姿信息匹配的第一图像;
将所述测试图像与筛选出的第一图像进行相似度匹配。
以上所提供的终端设备的各种实施方式,其具体实现可以参考前文中的相关说明,在此不再赘述。
本申请实施例提供的终端设备,可以根据无人机沿第一环绕航线运动时拍摄的多张第一图像规划第二环绕航线,从而可以确保无人机在第二环绕航线拍摄的图像可以与无人机在第一环绕航线拍摄的图像匹配,避免三维重建时发生图像不能连接的问题。
本申请实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现本申请实施例提供的任一种拍摄方法以及任一种模型获取方法。
以上针对每个保护主题均提供了多种实施方式,在不存在冲突或矛盾的基础上,本领域技术人员可以根据实际情况自由对各种实施方式进行组合,由此构成各种不同的技术方案。而本申请文件限于篇幅,未能对所有组合而得的技术方案展开说明,但可以理解的是,这些未能展开的技术方案也属于本申请实施例公开的范围。
本申请实施例可采用在一个或多个其中包含有程序代码的存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。计算机可用存储介质包括永久性和非永久性、可移动和非可移动媒体,可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括但不限于:相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存 储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带磁磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上对本发明实施例所提供的方法和装置进行了详细介绍,本文中应用了具体个例对本发明的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本发明的方法及其核心思想;同时,对于本领域的一般技术人员,依据本发明的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本发明的限制。

Claims (96)

  1. 一种拍摄方法,其特征在于,包括:
    获取被摄对象的位置信息;
    基于所述位置信息规划用于对所述被摄对象环绕拍摄的第一环绕航线和第二环绕航线;
    控制所述无人机分别沿所述第一环绕航线和所述第二环绕航线运动;其中,所述无人机沿所述第一环绕航线运动的过程中与所述被摄对象保持第一距离,所述无人机沿所述第二环绕航线运动的过程中与所述被摄对象保持第二距离,所述第一距离大于所述第二距离;
    控制所述无人机在运动过程中基于所述无人机搭载的相机对所述被摄对象拍摄以获取所述被摄对象的多张图像,所述无人机在所述第一环绕航线所述相机采集的图像与所述无人机在所述第二环绕航线所述相机采集的图像包括所述被摄对象的同名像点,多张所述图像用于建立所述被摄对象的立体模型。
  2. 根据权利要求1所述的方法,其特征在于,所述第一环绕航线包括多个航点,多个所述航点分布在所述被摄对象的不同方向,与所述被摄对象的距离大致相同,且大致位于同一高度,所述第一环绕航线用于引导无人机在水平面上环绕所述被摄对象运动。
  3. 根据权利要求2所述的方法,其特征在于,所述第一环绕航线上的各个航点到所述被摄对象表面点的距离为所述第一距离,所述第一环绕航线的形状与所述被摄对象在水平面上的轮廓形状匹配。
  4. 根据权利要求1所述的方法,其特征在于,所述第二环绕航线包括多条竖直航线段,多条所述竖直航线段分布在所述被摄对象的不同方向,每一所述竖直航线段用于引导所述无人机在高度方向向上或者向下运动。
  5. 根据权利要求4所述的方法,其特征在于,所述竖直航线段上的各个航点到所述被摄对象表面点的距离为所述第二距离,所述竖直航线段的形状与所述被摄对象在竖直面上的轮廓形状匹配。
  6. 根据权利要求3或5所述的方法,其特征在于,所述被摄对象表面点的位置信息是根据所述被摄对象的初始模型确定的,所述初始模型是预先基于所述被摄对象的多张初级图像建立的。
  7. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    根据选定的所述被摄对象表面的感兴趣区域规划第三航线;
    控制所述无人机沿所述第三航线运动,并在运动过程中对所述被摄对象拍摄。
  8. 根据权利要求7所述的方法,其特征在于,所述第三航线上的航点与所述被摄对象之间的距离小于所述第二距离。
  9. 根据权利要求1所述的方法,其特征在于,控制所述无人机在运动过程中基于所述无人机搭载的相机对所述被摄对象拍摄以获取所述被摄对象的多张图像,所述无人机在所述第一环绕航线所述相机采集的图像与所述无人机在所述第二环绕航线所述相机采集的图像包括所述被摄对象的同名像点,包括:
    所述无人机在所述第一环绕航线的第一位置,所述相机采集第一图像;
    所述无人机在所述第二环绕航线的第二位置,所述相机采集第二图像;
    所述第一图像与所述第二图像包括包括所述被摄对象的同名像点,所述第一位置和所述第二位置具有预设相对位置关系。
  10. 根据权利要求1所述的方法,其特征在于,
    利用所述多张图像建立所述被摄对象的立体模型,包括:
    对所述多张图像进行特征点匹配,并根据所述特征点匹配的结果进行三维重建,得到所述被摄对象的立体模型;
    所述对所述多张图像进行特征点匹配,包括:
    利用所述被摄对象的初始模型进行所述多张图像之间的特征点匹配,所述初始模型是预先基于所述被摄对象的多张初级图像建立的。
  11. 根据权利要求10所述的方法,其特征在于,所述利用所述被摄对象的初始模型进行所述多张图像之间的特征点匹配,包括:
    利用所述被摄对象的初始模型从所述多张图像中确定用于与第一图像进行特征点匹配的候选图像,所述第一图像是所述多张图像中的任一图像;
    将所述第一图像与所述候选图像进行特征点匹配。
  12. 根据权利要求11所述的方法,其特征在于,所述利用所述被摄对象的初始模型从所述多张图像中确定用于与第一图像进行特征点匹配的候选图像,包括:
    从所述多张图像中筛选相机位姿信息与所述第一图像对应的相机位姿信息相匹配的多张待定图像;
    利用所述初始模型从所述多张待定图像中确定所述候选图像。
  13. 根据权利要求12所述的方法,其特征在于,所述利用所述初始模型从所述多张待定图像中确定所述候选图像,包括:
    将所述第一图像的特征点投影至所述初始模型;
    对每张所述待定图像,根据所述待定图像对应的相机位姿信息将所述特征点从所述初始模型反投影至所述待定图像所在的平面,并确定位于所述待定图像内的特征点的数量;
    根据各个所述待定图像对应的所述特征点的数量确定所述候选图像。
  14. 一种模型获取方法,其特征在于,包括:
    控制所述无人机沿第一环绕航线运动,并在运动过程中对所述被摄对象拍摄以获取所述被摄对象的多张初级图像,所述第一环绕航线用于对所述被摄对象进行环绕拍摄;
    基于所述初级图像建立所述被摄对象的初始模型,所述初始模型包括所述被摄对象表面点的位置信息;
    基于所述位置信息和预设距离规划第二航线,所述第二航线包括多个航点,多个所述航点与所述被摄对象表面的距离大致相等;
    控制所述无人机沿所述第二航线运动,并在运动过程中对所述被摄对象拍摄以获取所述被摄对象的多张补充图像;
    基于所述补充图像优化所述被摄对象的初始模型。
  15. 根据权利要求14所述的方法,其特征在于,所述预设距离小于所述第一环绕航线上的航点与所述被摄对象之间的距离。
  16. 根据权利要求14所述的方法,其特征在于,所述基于所述位置信息和预设距离规划第二航线,包括:
    基于所述位置信息和多个预设距离规划多个第二航线。
  17. 根据权利要求16所述的方法,其特征在于,多个所述预设距离满足等比关系。
  18. 根据权利要求14所述的方法,其特征在于,所述基于所述位置信息和预设距离规划第二航线,包括:
    基于所述位置信息、从所述初始模型上选定的感兴趣区域和预设距离规划第二航线,所述第二航线上的航点分布在所述感兴趣区域内。
  19. 根据权利要求14所述的方法,其特征在于,所述第一环绕航线包括多个航点,多个所述航点分布在所述被摄对象的不同方向,与所述被摄对象的距离大致相同,且大致位于同一高度,所述第一环绕航线用于引导无人机在水平面上环绕所述被摄对象运动。
  20. 根据权利要求14所述的方法,其特征在于,所述第二航线包括多条竖直航线段,多条所述竖直航线段分布在所述被摄对象的不同方向,每一所述竖直航线段用于 引导所述无人机在高度方向向上或者向下运动。
  21. 根据权利要求14所述的方法,其特征在于,所述基于所述补充图像优化所述被摄对象的初始模型,包括:
    对多张所述补充图像进行特征点匹配,并根据所述特征点匹配的结果优化所述被摄对象的初始模型。
  22. 根据权利要求21所述的方法,其特征在于,所述对多张所述补充图像进行特征点匹配,包括:
    利用所述初始模型进行多张所述补充图像之间的特征点匹配。
  23. 根据权利要求22所述的方法,其特征在于,所述利用所述初始模型进行多张所述补充图像之间的特征点匹配,包括:
    利用所述初始模型从所述多张补充图像中确定用于与第一补充图像进行特征点匹配的候选补充图像,所述第一补充图像是所述多张补充图像中的任一补充图像;
    将所述第一补充图像与所述候选补充图像进行特征点匹配。
  24. 根据权利要求23所述的方法,其特征在于,所述利用所述初始模型从所述多张补充图像中确定用于与第一补充图像进行特征点匹配的候选补充图像,包括:
    从所述多张补充图像中筛选相机位姿信息与所述第一补充图像对应的相机位姿信息相匹配的多张待定补充图像;
    利用所述初始模型从所述多张待定补充图像中确定所述候选补充图像。
  25. 根据权利要求24所述的方法,其特征在于,所述利用所述初始模型从所述多张待定补充图像中确定所述候选补充图像,包括:
    将所述第一补充图像的特征点投影至所述初始模型;
    对每张所述待定补充图像,根据所述待定补充图像对应的相机位姿信息将所述特征点从所述初始模型反投影至所述待定补充图像所在的平面,并确定位于所述待定补充图像内的特征点的数量;
    根据各个所述待定补充图像对应的所述特征点的数量确定所述候选补充图像。
  26. 一种拍摄方法,其特征在于,包括:
    获取被摄对象的位置信息;
    基于所述位置信息规划用于对所述被摄对象环绕拍摄的第一环绕航线;
    控制所述无人机沿所述第一环绕航线运动,并在运动过程中对所述被摄对象进行拍摄,以获取所述被摄对象的多张第一图像;
    基于多张所述第一图像和所述位置信息规划用于对所述被摄对象环绕拍摄的第二 环绕航线;
    控制所述无人机沿所述第二环绕航线运动,并在运动过程中对所述被摄对象进行拍摄,以获取所述被摄对象的多张第二图像,所述第一图像和所述第二图像用于建立所述被摄对象的立体模型。
  27. 根据权利要求26所述的方法,其特征在于,所述第二环绕航线上的航点与所述被摄对象的距离是根据多张所述第一图像确定的。
  28. 根据权利要求27所述的方法,其特征在于,根据多张所述第一图像确定所述第二环绕航线上的航点与所述被摄对象的距离,包括:
    控制所述无人机与所述被摄对象保持测试距离并对所述被摄对象拍摄,得到测试图像,所述测试距离小于所述第一环绕航线上的航点与所述被摄对象的距离;
    将所述测试图像与多张所述第一图像进行相似度匹配,并根据匹配得到的相似度对所述测试距离进行调整;
    将调整完成的测试距离确定为所述第二环绕航线上的航点与所述被摄对象的距离。
  29. 根据权利要求28所述的方法,其特征在于,所述根据匹配得到的相似度对所述测试距离进行调整,包括:
    若所述匹配得到的相似度小于相似度下限,增加所述测试距离。
  30. 根据权利要求28所述的方法,其特征在于,所述根据匹配得到的相似度对所述测试距离进行调整,包括:
    若所述匹配得到的相似度大于相似度上限,减少所述测试距离。
  31. 根据权利要求28所述的方法,其特征在于,所述将所述测试图像与多张所述第一图像进行相似度匹配,包括:
    从多张所述第一图像中筛选出相机位姿信息与所述测试图像对应的相机位姿信息匹配的第一图像;
    将所述测试图像与筛选出的第一图像进行相似度匹配。
  32. A shooting device, comprising a processor and a memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps:
    obtaining position information of a subject;
    planning, based on the position information, a first circumnavigation route and a second circumnavigation route for circumnavigation shooting of the subject;
    controlling the UAV to move along the first circumnavigation route and the second circumnavigation route respectively, wherein the UAV keeps a first distance from the subject while moving along the first circumnavigation route, the UAV keeps a second distance from the subject while moving along the second circumnavigation route, and the first distance is greater than the second distance;
    controlling the UAV to shoot the subject during the movement with a camera carried by the UAV to obtain a plurality of images of the subject, wherein the images captured by the camera while the UAV is on the first circumnavigation route and the images captured by the camera while the UAV is on the second circumnavigation route include homologous image points of the subject, and the plurality of images are used to build a three-dimensional model of the subject.
  33. The device according to claim 32, wherein the first circumnavigation route comprises a plurality of waypoints, the plurality of waypoints are distributed in different directions of the subject, are at approximately the same distance from the subject, and are located at approximately the same height, and the first circumnavigation route is used to guide the UAV to move around the subject in a horizontal plane.
  34. The device according to claim 33, wherein the distance from each waypoint on the first circumnavigation route to the surface points of the subject is the first distance, and the shape of the first circumnavigation route matches the contour shape of the subject in the horizontal plane.
  35. The device according to claim 32, wherein the second circumnavigation route comprises a plurality of vertical route segments, the plurality of vertical route segments are distributed in different directions of the subject, and each vertical route segment is used to guide the UAV to move upward or downward in the height direction.
  36. The device according to claim 35, wherein the distance from each waypoint on the vertical route segments to the surface points of the subject is the second distance, and the shape of the vertical route segments matches the contour shape of the subject in a vertical plane.
  37. The device according to claim 34 or 36, wherein the position information of the surface points of the subject is determined according to an initial model of the subject, and the initial model is built in advance based on a plurality of preliminary images of the subject.
  38. The device according to claim 32, wherein the processor is further configured to:
    plan a third route according to a selected region of interest on the surface of the subject;
    control the UAV to move along the third route and shoot the subject during the movement.
  39. The device according to claim 38, wherein the distance between the waypoints on the third route and the subject is smaller than the second distance.
  40. The device according to claim 32, wherein, when building the three-dimensional model of the subject from the plurality of images, the processor is configured to:
    perform feature point matching on the plurality of images, and perform three-dimensional reconstruction according to the result of the feature point matching to obtain the three-dimensional model of the subject.
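Claim 40's "feature point matching followed by three-dimensional reconstruction" could start from an ordinary pairwise matcher such as the sketch below, which pairs each image with a candidate image chosen as in claims 42 to 44. ORB descriptors and Lowe's ratio test are one common choice, not something the claims prescribe, and the reconstruction step that consumes the matches is not shown.

```python
import cv2

def match_with_candidate(img, candidate_img, ratio=0.75):
    """Pairwise feature point matching between an image and one of its candidate
    images (ORB descriptors + Lowe's ratio test); the surviving correspondences
    would then feed the three-dimensional reconstruction step."""
    orb = cv2.ORB_create(nfeatures=4000)
    kp1, des1 = orb.detectAndCompute(img, None)
    kp2, des2 = orb.detectAndCompute(candidate_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn_matches = matcher.knnMatch(des1, des2, k=2)
    good = [pair[0] for pair in knn_matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return kp1, kp2, good
```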
  41. The device according to claim 40, wherein, when performing feature point matching on the plurality of images, the processor is configured to:
    perform feature point matching among the plurality of images by using an initial model of the subject, the initial model being built in advance based on a plurality of preliminary images of the subject.
  42. The device according to claim 41, wherein, when performing feature point matching among the plurality of images by using the initial model of the subject, the processor is configured to:
    determine, from the plurality of images by using the initial model of the subject, a candidate image for feature point matching with a first image, the first image being any one of the plurality of images;
    perform feature point matching between the first image and the candidate image.
  43. The device according to claim 42, wherein, when determining, from the plurality of images by using the initial model of the subject, a candidate image for feature point matching with a first image, the processor is configured to:
    screen, from the plurality of images, a plurality of pending images whose camera pose information matches the camera pose information corresponding to the first image;
    determine the candidate image from the plurality of pending images by using the initial model.
  44. The device according to claim 43, wherein, when determining the candidate image from the plurality of pending images by using the initial model, the processor is configured to:
    project feature points of the first image onto the initial model;
    for each pending image, back-project the feature points from the initial model onto the plane of the pending image according to the camera pose information corresponding to the pending image, and determine the number of feature points that fall within the pending image;
    determine the candidate image according to the number of feature points corresponding to each pending image.
  45. A model acquisition device, comprising a processor and a memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps:
    controlling the UAV to move along a first circumnavigation route, and shooting the subject during the movement to obtain a plurality of preliminary images of the subject, the first circumnavigation route being used for circumnavigation shooting of the subject;
    building an initial model of the subject based on the preliminary images, the initial model comprising position information of surface points of the subject;
    planning a second route based on the position information and a preset distance, the second route comprising a plurality of waypoints, the distances from the plurality of waypoints to the surface of the subject being approximately equal;
    controlling the UAV to move along the second route, and shooting the subject during the movement to obtain a plurality of supplementary images of the subject;
    optimizing the initial model of the subject based on the supplementary images.
  46. The device according to claim 45, wherein the preset distance is smaller than the distance between the waypoints on the first circumnavigation route and the subject.
  47. The device according to claim 45, wherein, when planning a second route based on the position information and a preset distance, the processor is configured to:
    plan a plurality of second routes based on the position information and a plurality of preset distances.
  48. The device according to claim 47, wherein the plurality of preset distances are in a geometric progression.
  49. The device according to claim 45, wherein, when planning a second route based on the position information and a preset distance, the processor is configured to:
    plan the second route based on the position information, a region of interest selected on the initial model, and the preset distance, the waypoints on the second route being distributed within the region of interest.
  50. The device according to claim 45, wherein the first circumnavigation route comprises a plurality of waypoints, the plurality of waypoints are distributed in different directions of the subject, are at approximately the same distance from the subject, and are located at approximately the same height, and the first circumnavigation route is used to guide the UAV to move around the subject in a horizontal plane.
  51. The device according to claim 45, wherein the second route comprises a plurality of vertical route segments, the plurality of vertical route segments are distributed in different directions of the subject, and each vertical route segment is used to guide the UAV to move upward or downward in the height direction.
  52. The device according to claim 45, wherein, when optimizing the initial model of the subject based on the supplementary images, the processor is configured to:
    perform feature point matching on the plurality of supplementary images, and optimize the initial model of the subject according to the result of the feature point matching.
  53. The device according to claim 52, wherein, when performing feature point matching on the plurality of supplementary images, the processor is configured to:
    perform feature point matching among the plurality of supplementary images by using the initial model.
  54. The device according to claim 53, wherein, when performing feature point matching among the plurality of supplementary images by using the initial model, the processor is configured to:
    determine, from the plurality of supplementary images by using the initial model, a candidate supplementary image for feature point matching with a first supplementary image, the first supplementary image being any one of the plurality of supplementary images;
    perform feature point matching between the first supplementary image and the candidate supplementary image.
  55. The device according to claim 54, wherein, when determining, from the plurality of supplementary images by using the initial model, a candidate supplementary image for feature point matching with a first supplementary image, the processor is configured to:
    screen, from the plurality of supplementary images, a plurality of pending supplementary images whose camera pose information matches the camera pose information corresponding to the first supplementary image;
    determine the candidate supplementary image from the plurality of pending supplementary images by using the initial model.
  56. The device according to claim 55, wherein, when determining the candidate supplementary image from the plurality of pending supplementary images by using the initial model, the processor is configured to:
    project feature points of the first supplementary image onto the initial model;
    for each pending supplementary image, back-project the feature points from the initial model onto the plane of the pending supplementary image according to the camera pose information corresponding to the pending supplementary image, and determine the number of feature points that fall within the pending supplementary image;
    determine the candidate supplementary image according to the number of feature points corresponding to each pending supplementary image.
  57. A shooting device, comprising a processor and a memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps:
    obtaining position information of a subject;
    planning, based on the position information, a first circumnavigation route for circumnavigation shooting of the subject;
    controlling the UAV to move along the first circumnavigation route, and shooting the subject during the movement to obtain a plurality of first images of the subject;
    planning, based on the plurality of first images and the position information, a second circumnavigation route for circumnavigation shooting of the subject;
    controlling the UAV to move along the second circumnavigation route, and shooting the subject during the movement to obtain a plurality of second images of the subject, the first images and the second images being used to build a three-dimensional model of the subject.
  58. The device according to claim 57, wherein the distance between the waypoints on the second circumnavigation route and the subject is determined according to the plurality of first images.
  59. The device according to claim 58, wherein, when determining the distance between the waypoints on the second circumnavigation route and the subject according to the plurality of first images, the processor is configured to:
    control the UAV to keep a test distance from the subject and shoot the subject to obtain a test image, the test distance being smaller than the distance between the waypoints on the first circumnavigation route and the subject;
    perform similarity matching between the test image and the plurality of first images, and adjust the test distance according to the similarity obtained from the matching;
    determine the adjusted test distance as the distance between the waypoints on the second circumnavigation route and the subject.
  60. The device according to claim 59, wherein, when adjusting the test distance according to the similarity obtained from the matching, the processor is configured to:
    increase the test distance if the similarity obtained from the matching is smaller than a lower similarity limit.
  61. The device according to claim 59, wherein, when adjusting the test distance according to the similarity obtained from the matching, the processor is configured to:
    decrease the test distance if the similarity obtained from the matching is greater than an upper similarity limit.
  62. The device according to claim 59, wherein, when performing similarity matching between the test image and the plurality of first images, the processor is configured to:
    screen, from the plurality of first images, first images whose camera pose information matches the camera pose information corresponding to the test image;
    perform similarity matching between the test image and the screened first images.
  63. A terminal device, comprising:
    a communication module configured to establish a connection with a UAV; and
    a processor and a memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps:
    obtaining position information of a subject;
    planning, based on the position information, a first circumnavigation route and a second circumnavigation route for circumnavigation shooting of the subject;
    controlling the UAV to move along the first circumnavigation route and the second circumnavigation route respectively, wherein the UAV keeps a first distance from the subject while moving along the first circumnavigation route, the UAV keeps a second distance from the subject while moving along the second circumnavigation route, and the first distance is greater than the second distance;
    controlling the UAV to shoot the subject during the movement with a camera carried by the UAV to obtain a plurality of images of the subject, wherein the images captured by the camera while the UAV is on the first circumnavigation route and the images captured by the camera while the UAV is on the second circumnavigation route include homologous image points of the subject, and the plurality of images are used to build a three-dimensional model of the subject.
  64. The terminal device according to claim 63, wherein the first circumnavigation route comprises a plurality of waypoints, the plurality of waypoints are distributed in different directions of the subject, are at approximately the same distance from the subject, and are located at approximately the same height, and the first circumnavigation route is used to guide the UAV to move around the subject in a horizontal plane.
  65. The terminal device according to claim 64, wherein the distance from each waypoint on the first circumnavigation route to the surface points of the subject is the first distance, and the shape of the first circumnavigation route matches the contour shape of the subject in the horizontal plane.
  66. The terminal device according to claim 63, wherein the second circumnavigation route comprises a plurality of vertical route segments, the plurality of vertical route segments are distributed in different directions of the subject, and each vertical route segment is used to guide the UAV to move upward or downward in the height direction.
  67. The terminal device according to claim 66, wherein the distance from each waypoint on the vertical route segments to the surface points of the subject is the second distance, and the shape of the vertical route segments matches the contour shape of the subject in a vertical plane.
  68. The terminal device according to claim 65 or 67, wherein the position information of the surface points of the subject is determined according to an initial model of the subject, and the initial model is built in advance based on a plurality of preliminary images of the subject.
  69. The terminal device according to claim 63, wherein the processor is further configured to:
    plan a third route according to a selected region of interest on the surface of the subject;
    control the UAV to move along the third route and shoot the subject during the movement.
  70. The terminal device according to claim 69, wherein the distance between the waypoints on the third route and the subject is smaller than the second distance.
  71. The terminal device according to claim 63, wherein, when building the three-dimensional model of the subject from the plurality of images, the processor is configured to:
    perform feature point matching on the plurality of images, and perform three-dimensional reconstruction according to the result of the feature point matching to obtain the three-dimensional model of the subject.
  72. The terminal device according to claim 71, wherein, when performing feature point matching on the plurality of images, the processor is configured to:
    perform feature point matching among the plurality of images by using an initial model of the subject, the initial model being built in advance based on a plurality of preliminary images of the subject.
  73. The terminal device according to claim 72, wherein, when performing feature point matching among the plurality of images by using the initial model of the subject, the processor is configured to:
    determine, from the plurality of images by using the initial model of the subject, a candidate image for feature point matching with a first image, the first image being any one of the plurality of images;
    perform feature point matching between the first image and the candidate image.
  74. The terminal device according to claim 73, wherein, when determining, from the plurality of images by using the initial model of the subject, a candidate image for feature point matching with a first image, the processor is configured to:
    screen, from the plurality of images, a plurality of pending images whose camera pose information matches the camera pose information corresponding to the first image;
    determine the candidate image from the plurality of pending images by using the initial model.
  75. The terminal device according to claim 74, wherein, when determining the candidate image from the plurality of pending images by using the initial model, the processor is configured to:
    project feature points of the first image onto the initial model;
    for each pending image, back-project the feature points from the initial model onto the plane of the pending image according to the camera pose information corresponding to the pending image, and determine the number of feature points that fall within the pending image;
    determine the candidate image according to the number of feature points corresponding to each pending image.
  76. A terminal device, comprising:
    a communication module configured to establish a connection with a UAV; and
    a processor and a memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps:
    controlling the UAV to move along a first circumnavigation route, and shooting the subject during the movement to obtain a plurality of preliminary images of the subject, the first circumnavigation route being used for circumnavigation shooting of the subject;
    building an initial model of the subject based on the preliminary images, the initial model comprising position information of surface points of the subject;
    planning a second route based on the position information and a preset distance, the second route comprising a plurality of waypoints, the distances from the plurality of waypoints to the surface of the subject being approximately equal;
    controlling the UAV to move along the second route, and shooting the subject during the movement to obtain a plurality of supplementary images of the subject;
    optimizing the initial model of the subject based on the supplementary images.
  77. The terminal device according to claim 76, wherein the preset distance is smaller than the distance between the waypoints on the first circumnavigation route and the subject.
  78. The terminal device according to claim 76, wherein, when planning a second route based on the position information and a preset distance, the processor is configured to:
    plan a plurality of second routes based on the position information and a plurality of preset distances.
  79. The terminal device according to claim 78, wherein the plurality of preset distances are in a geometric progression.
  80. The terminal device according to claim 76, wherein, when planning a second route based on the position information and a preset distance, the processor is configured to:
    plan the second route based on the position information, a region of interest selected on the initial model, and the preset distance, the waypoints on the second route being distributed within the region of interest.
  81. The terminal device according to claim 76, wherein the first circumnavigation route comprises a plurality of waypoints, the plurality of waypoints are distributed in different directions of the subject, are at approximately the same distance from the subject, and are located at approximately the same height, and the first circumnavigation route is used to guide the UAV to move around the subject in a horizontal plane.
  82. The terminal device according to claim 76, wherein the second route comprises a plurality of vertical route segments, the plurality of vertical route segments are distributed in different directions of the subject, and each vertical route segment is used to guide the UAV to move upward or downward in the height direction.
  83. The terminal device according to claim 76, wherein, when optimizing the initial model of the subject based on the supplementary images, the processor is configured to:
    perform feature point matching on the plurality of supplementary images, and optimize the initial model of the subject according to the result of the feature point matching.
  84. The terminal device according to claim 83, wherein, when performing feature point matching on the plurality of supplementary images, the processor is configured to:
    perform feature point matching among the plurality of supplementary images by using the initial model.
  85. The terminal device according to claim 84, wherein, when performing feature point matching among the plurality of supplementary images by using the initial model, the processor is configured to:
    determine, from the plurality of supplementary images by using the initial model, a candidate supplementary image for feature point matching with a first supplementary image, the first supplementary image being any one of the plurality of supplementary images;
    perform feature point matching between the first supplementary image and the candidate supplementary image.
  86. The terminal device according to claim 85, wherein, when determining, from the plurality of supplementary images by using the initial model, a candidate supplementary image for feature point matching with a first supplementary image, the processor is configured to:
    screen, from the plurality of supplementary images, a plurality of pending supplementary images whose camera pose information matches the camera pose information corresponding to the first supplementary image;
    determine the candidate supplementary image from the plurality of pending supplementary images by using the initial model.
  87. The terminal device according to claim 86, wherein, when determining the candidate supplementary image from the plurality of pending supplementary images by using the initial model, the processor is configured to:
    project feature points of the first supplementary image onto the initial model;
    for each pending supplementary image, back-project the feature points from the initial model onto the plane of the pending supplementary image according to the camera pose information corresponding to the pending supplementary image, and determine the number of feature points that fall within the pending supplementary image;
    determine the candidate supplementary image according to the number of feature points corresponding to each pending supplementary image.
  88. A terminal device, comprising:
    a communication module configured to establish a connection with a UAV; and
    a processor and a memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps:
    obtaining position information of a subject;
    planning, based on the position information, a first circumnavigation route for circumnavigation shooting of the subject;
    controlling the UAV to move along the first circumnavigation route, and shooting the subject during the movement to obtain a plurality of first images of the subject;
    planning, based on the plurality of first images and the position information, a second circumnavigation route for circumnavigation shooting of the subject;
    controlling the UAV to move along the second circumnavigation route, and shooting the subject during the movement to obtain a plurality of second images of the subject, the first images and the second images being used to build a three-dimensional model of the subject.
  89. The terminal device according to claim 88, wherein the distance between the waypoints on the second circumnavigation route and the subject is determined according to the plurality of first images.
  90. The terminal device according to claim 89, wherein, when determining the distance between the waypoints on the second circumnavigation route and the subject according to the plurality of first images, the processor is configured to:
    control the UAV to keep a test distance from the subject and shoot the subject to obtain a test image, the test distance being smaller than the distance between the waypoints on the first circumnavigation route and the subject;
    perform similarity matching between the test image and the plurality of first images, and adjust the test distance according to the similarity obtained from the matching;
    determine the adjusted test distance as the distance between the waypoints on the second circumnavigation route and the subject.
  91. The terminal device according to claim 90, wherein, when adjusting the test distance according to the similarity obtained from the matching, the processor is configured to:
    increase the test distance if the similarity obtained from the matching is smaller than a lower similarity limit.
  92. The terminal device according to claim 90, wherein, when adjusting the test distance according to the similarity obtained from the matching, the processor is configured to:
    decrease the test distance if the similarity obtained from the matching is greater than an upper similarity limit.
  93. The terminal device according to claim 90, wherein, when performing similarity matching between the test image and the plurality of first images, the processor is configured to:
    screen, from the plurality of first images, first images whose camera pose information matches the camera pose information corresponding to the test image;
    perform similarity matching between the test image and the screened first images.
  94. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the shooting method according to any one of claims 1 to 13.
  95. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the model acquisition method according to any one of claims 14 to 25.
  96. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the shooting method according to any one of claims 26 to 31.
PCT/CN2021/084724 2021-03-31 2021-03-31 Shooting method, device, computer-readable storage medium, and terminal device WO2022205208A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2021/084724 WO2022205208A1 (zh) 2021-03-31 2021-03-31 Shooting method, device, computer-readable storage medium, and terminal device
CN202180078855.4A CN116745579A (zh) 2021-03-31 2021-03-31 Shooting method, device, computer-readable storage medium, and terminal device
US18/374,553 US20240025571A1 (en) 2021-03-31 2023-09-28 Shooting method, device, computer-readable storage medium, and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/084724 WO2022205208A1 (zh) 2021-03-31 2021-03-31 Shooting method, device, computer-readable storage medium, and terminal device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/374,553 Continuation US20240025571A1 (en) 2021-03-31 2023-09-28 Shooting method, device, computer-readable storage medium, and terminal device

Publications (1)

Publication Number Publication Date
WO2022205208A1 true WO2022205208A1 (zh) 2022-10-06

Family

ID=83457679

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/084724 WO2022205208A1 (zh) 2021-03-31 2021-03-31 Shooting method, device, computer-readable storage medium, and terminal device

Country Status (3)

Country Link
US (1) US20240025571A1 (zh)
CN (1) CN116745579A (zh)
WO (1) WO2022205208A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106767706A (zh) * 2016-12-09 2017-05-31 中山大学 一种无人机勘查交通事故现场的航拍图像采集方法及***
CN107272738A (zh) * 2017-07-11 2017-10-20 成都纵横自动化技术有限公司 飞行航线设置方法及装置
CN107514993A (zh) * 2017-09-25 2017-12-26 同济大学 基于无人机的面向单体建筑建模的数据采集方法及***
CN110383004A (zh) * 2017-10-24 2019-10-25 深圳市大疆创新科技有限公司 信息处理装置、空中摄像路径生成方法、程序、及记录介质
WO2020051208A1 (en) * 2018-09-04 2020-03-12 Chosid Jessica Method for obtaining photogrammetric data using a layered approach
JP6675537B1 (ja) * 2019-03-12 2020-04-01 Terra Drone株式会社 飛行経路生成装置、飛行経路生成方法とそのプログラム、構造物点検方法


Also Published As

Publication number Publication date
CN116745579A (zh) 2023-09-12
US20240025571A1 (en) 2024-01-25

Similar Documents

Publication Publication Date Title
CN106767706B (zh) Aerial image acquisition method and system for UAV survey of a traffic accident scene
CN107514993A (zh) UAV-based data acquisition method and system for single-building modeling
US9981742B2 (en) Autonomous navigation method and system, and map modeling method and system
CN107504957A (zh) Method for rapidly constructing a three-dimensional terrain model using multi-view UAV imaging
CN108053473A (zh) Method for processing indoor three-dimensional model data
JP6765512B2 (ja) Flight path generation method, information processing device, flight path generation system, program, and recording medium
CN111141264B (zh) UAV-based urban three-dimensional surveying and mapping method and system
CN110345925B (zh) Quality inspection and aerial triangulation processing method for five-lens aerial photographs
CN104966281A (zh) IMU/GNSS-guided matching method for multi-view images
CN108537885B (zh) Method for acquiring three-dimensional terrain data of mountain wound surfaces
CN109900274B (zh) Image matching method and system
CN110806199A (zh) Terrain surveying method and system based on a laser line projector and a UAV
CN110428501A (zh) Panoramic image generation method and apparatus, electronic device, and readable storage medium
US20210264666A1 (en) Method for obtaining photogrammetric data using a layered approach
CN112270702A (zh) Volume measurement method and apparatus, computer-readable medium, and electronic device
CN116129064A (zh) Electronic map generation method, apparatus, device, and storage medium
CN110030928A (zh) Computer-vision-based method and system for locating and measuring spatial objects
CN111527375B (zh) Planning method and apparatus for surveying and mapping sampling points, control terminal, and storage medium
KR20210037998A (ko) Method for providing a drone route
Sani et al. 3D reconstruction of building model using UAV point clouds
WO2022205208A1 (zh) Shooting method, device, computer-readable storage medium, and terminal device
CN108050995A (zh) DEM-based method for merging oblique-photography aerial survey blocks without image control points
WO2022205210A1 (zh) Shooting method and device, computer-readable storage medium, and terminal device
Li et al. A Method for Image Big Data Utilization: Automated Progress Monitoring Based on Image Data for Large Construction Site
CN117707206B (zh) UAV aerial survey operation method and device, and computer storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21933871

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202180078855.4

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21933871

Country of ref document: EP

Kind code of ref document: A1