CN112771842A - Imaging method, imaging apparatus, computer-readable storage medium


Info

Publication number
CN112771842A
Authority
CN
China
Prior art keywords: shooting, target, pixels, target track, image
Legal status: Pending
Application number
CN202080005244.2A
Other languages
Chinese (zh)
Inventor
梁家斌
关雁铭
黄文杰
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Application filed by SZ DJI Technology Co Ltd
Publication of CN112771842A


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

An imaging method, an imaging apparatus, a computer-readable storage medium, a movable platform, and an electronic device. The imaging method is for a movable platform that includes a shooting device, and comprises: acquiring shooting control parameters, automatically generating a target track of the movable platform according to the shooting control parameters, and determining a target shooting posture of the shooting device for when the movable platform moves along the target track; controlling the movable platform to move along the target track, and controlling the shooting device to shoot according to the target shooting posture during the movement to obtain an image set; extracting the pixels to be spliced of each frame in the image set, where the pixels to be spliced comprise preset row pixels or preset column pixels, and the shooting contents corresponding to the pixels to be spliced of two adjacent frames do not overlap; and splicing the pixels to be spliced of each frame together according to the shooting order of the frames to generate a target image.

Description

Imaging method, imaging apparatus, computer-readable storage medium
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an imaging method, an imaging apparatus, a computer-readable storage medium, a movable platform, and an electronic device.
Background
Movable platforms, such as drones, typically include shooting devices that can be used to capture images. The shooting device is usually mounted on the movable platform through a gimbal, and can be rotated relative to the platform by controlling the rotation of the gimbal.
A folded image is a special-effect image commonly used in entertainment presentations. To produce one, the movable platform flies above a target area while the shooting device shoots at different attitude angles, and the user later splices the captured images together with image-processing software, obtaining an image whose viewing angle changes gradually and creating the effect of the real world folding upward. However, this manual method demands considerable image-processing skill from the user and is difficult to popularize among ordinary consumers.
Disclosure of Invention
The disclosed embodiment provides an imaging method for a movable platform, wherein the movable platform comprises a shooting device, and the method comprises the following steps:
acquiring shooting control parameters, automatically generating a target track of the movable platform according to the shooting control parameters, and determining a target shooting posture of the shooting device when the movable platform moves along the target track;
controlling the movable platform to move along the target track, and controlling the shooting device to shoot according to the target shooting posture in the moving process so as to obtain an image set;
extracting pixels to be spliced of each frame of image in the image set, wherein the pixels to be spliced comprise preset row pixels or preset column pixels, and shooting contents corresponding to the pixels to be spliced of two adjacent frames of images are not overlapped with each other;
and splicing the pixels to be spliced of each frame image together according to the shooting sequence of the image frames to generate a target image.
An embodiment of the present disclosure also provides an imaging apparatus, including:
a memory for storing executable instructions;
a processor to execute the executable instructions stored in the memory to perform the following:
acquiring shooting control parameters, automatically generating a target track of the movable platform according to the shooting control parameters, and determining a target shooting posture of the shooting device when the movable platform moves along the target track;
controlling the movable platform to move along the target track, and controlling the shooting device to shoot according to the target shooting posture in the moving process so as to obtain an image set;
extracting pixels to be spliced of each frame of image in the image set, wherein the pixels to be spliced comprise preset row pixels or preset column pixels, and shooting contents corresponding to the pixels to be spliced of two adjacent frames of images are not overlapped with each other;
and splicing the pixels to be spliced of each frame image together according to the shooting sequence of the image frames to generate a target image.
Embodiments of the present disclosure also provide a computer-readable storage medium storing executable instructions that, when executed by one or more processors, may cause the one or more processors to perform the above-described imaging method.
The disclosed embodiments also provide a movable platform, including the imaging apparatus described above.
The disclosed embodiments further provide an electronic device, including the imaging apparatus described above.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed in the embodiments are briefly described below. The drawings described below show only some embodiments of the present disclosure; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flow chart of an imaging method according to an embodiment of the disclosure.
Fig. 2 is a schematic structural diagram of a movable platform to which the imaging method of the present embodiment is applied.
FIG. 3a shows a predetermined point and the ground of a target area;
FIG. 3b shows the predetermined point and the equivalent ground;
fig. 3c shows the target trajectory and the target area.
Fig. 4 shows a warped coordinate system.
FIG. 5a shows the geometric relationship of equivalent ground points in a warped coordinate system;
fig. 5b shows the geometrical relationship of the ground points in a warped coordinate system.
FIG. 6 shows the position relationship of the drone and the target trajectory;
FIG. 7 shows another positional relationship of the drone with the target trajectory;
FIG. 8a shows a predetermined row of pixels of an image frame;
FIG. 8b shows the stitched target image;
fig. 8c shows an example of a stitched target image.
Fig. 9 shows a target trajectory.
Fig. 10 shows another target trajectory.
Fig. 11 shows yet another target trajectory.
Fig. 12 shows the target trajectory in a warped coordinate system.
Fig. 13 shows a process of determining a target attitude angle of the photographing device.
Fig. 14 shows another target trajectory.
Fig. 15 shows yet another target trajectory.
Fig. 16 is a flowchart of an imaging device according to an embodiment of the disclosure.
Fig. 17 is a schematic structural diagram of a movable platform according to an embodiment of the disclosure.
Fig. 18 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
When a user wants to generate a folded image of a target area manually, the user first needs to control the drone to fly above the target area and, at a plurality of positions during the flight, control the shooting device to shoot images at different pitch angles, obtaining a series of images of the target area. The user then needs to process this series of images with image-processing software, crop the required part of each image, and splice those parts together to obtain the folded image of the target area.
In this process, the flight track of the drone, the shooting positions, and the pitch angle of the shooting device are all determined by the user's feel and experience, as are the selection of the required part of each image and the splicing between images. This way of generating folded images is therefore difficult to operate and depends too heavily on the user's feel and experience, making the entry barrier too high and hindering popularization. Moreover, selecting and splicing the required parts of the images requires a great deal of manual work, which is time-consuming, laborious, and inefficient, and the resulting folded images usually show segmentation traces and lack smoothness.
The technical solution of the present disclosure will be clearly and completely described below with reference to the embodiments and the drawings in the embodiments. It is to be understood that the described embodiments are merely illustrative of some, and not restrictive, of the embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
An embodiment of the present disclosure provides an imaging method, as shown in fig. 1, the imaging method including:
s101: acquiring shooting control parameters, automatically generating a target track of the movable platform according to the shooting control parameters, and determining a target shooting posture of the shooting device when the movable platform moves along the target track;
s102: controlling the movable platform to move along the target track, and controlling the shooting device to shoot according to the target shooting posture in the moving process so as to obtain an image set;
s103: extracting pixels to be spliced of each frame of image in the image set, wherein the pixels to be spliced comprise preset row pixels or preset column pixels, and shooting contents corresponding to the pixels to be spliced of two adjacent frames of images are not overlapped with each other;
s104: and splicing the pixels to be spliced of each frame image together according to the shooting sequence of the image frames to generate a target image.
The imaging method of this embodiment can be applied to various devices capable of imaging a target area, such as a movable platform or an electronic device. After such a device obtains an image set of the target area using the imaging method of this embodiment, it uses the image set to obtain the folded image of the target area. The movable platform may include, but is not limited to: a robot, an unmanned aerial vehicle, or an unmanned vehicle; any carrier equipped with a shooting device will do. Electronic devices may include, but are not limited to: a remote controller, a smartphone/cellphone, a tablet, a laptop computer, a desktop computer, a media content player, a video game station/system, a virtual reality system, an augmented reality system, a wearable device, or any other electronic device capable of providing or rendering image data.
For convenience of description, this embodiment is described below taking an unmanned aerial vehicle as an example. Fig. 2 shows a movable platform 100, which includes: a movable platform 110, a gimbal 140, and a shooting device 130.
The movable platform 110 may include a fuselage 105 and one or more propulsion units 150. The propulsion units 150, which may include rotors, may be configured to generate lift for the movable platform 110. The movable platform 110 can fly in three-dimensional space and rotate about at least one of a pitch axis, a yaw axis, and a roll axis.
The movable platform 100 may include one or more shooting devices 130. In some examples, the shooting device 130 may be a camera or a video camera. The shooting device 130 may be mounted on the gimbal 140, which may allow it to rotate about at least one of a pitch axis, a yaw axis, and a roll axis. For convenience of description, this embodiment is described below taking the shooting device 130 as a camera.
The movable platform 100 may be controlled by a remote controller 120. The remote controller 120 may communicate with at least one of the movable platform 110, the gimbal 140, and the camera 130. The remote controller 120 includes a display for showing the picture captured by the camera 130, and an input device for receiving input information from the user.
In S101, shooting control parameters are acquired, the target track of the drone is automatically generated according to the shooting control parameters, and the target shooting posture of the camera for when the movable platform moves along the target track is determined.
The present embodiment aims to automatically generate a folded image of a target area. In some examples, as shown in fig. 3a, the folded image of the target area around a predetermined point is in effect an image whose viewing angle gradually increases from an oblique view to a vertical view in the direction away from the predetermined point. The viewing angle refers to the included angle between the optical axis of the camera and the horizontal plane, i.e., the pitch angle of the camera. As shown in fig. 3b, if the camera were located at the predetermined point and kept still, and the ground of the target area could be folded up toward the predetermined point to form an equivalent ground, the camera would only need to take one frame at the predetermined point to obtain a folded image of the target area. In reality, however, the ground of the target area cannot be folded, so the drone must carry the camera along a path that simulates the folding of the ground. As shown in fig. 3c, the drone carries the camera from the predetermined point, which serves as the starting point, along the target track to the end point. For each point of the target track, the distance between that point and its corresponding ground point is kept equal to the distance between the corresponding equivalent ground point and the predetermined point in fig. 3b; at the same time, the pitch angle of the camera at each point is kept equal to the angle formed, in fig. 3b, between the line from the predetermined point to the corresponding equivalent ground point and the tangent plane of the equivalent ground at that point. For example, for the target track point P1 and its corresponding ground point T1, the distance between P1 and T1 equals the distance between P and T1 in fig. 3b, and the pitch angle of the camera at P1 (i.e., the angle between the line P1-T1 and the horizontal plane) equals the angle between the line from P to T1 and the tangent plane of the equivalent ground at T1. For the target track point P2 and its corresponding ground point T2, the distance between P2 and T2 equals the distance between P and T2, and the pitch angle of the camera at P2 (i.e., the angle between the line P2-T2 and the horizontal plane) equals the angle between the line from P to T2 and the tangent plane of the equivalent ground at T2. By analogy, for the target track point P6 and its corresponding ground point T6, the distance between P6 and T6 equals the distance between P and T6, and the pitch angle of the camera at P6 equals the angle between the line from P to T6 and the tangent plane of the equivalent ground at T6. It can be seen that, in some examples, the target track of this embodiment can be characterized by the position coordinates of its track points, and the target shooting posture includes the pitch angle of the camera at each track point. This embodiment determines the position coordinates of the track points of the target track and the pitch angle of the camera at each track point according to the above rules.
First, the shooting control parameters are acquired. In some examples, the shooting control parameters may include one or more of: a shooting distance, a shooting height, a bending radius, a start position of the target track, an end position of the target track, and a direction of the target track. After the shooting control parameters are obtained, the position coordinates of the track points of the target track and the pitch angle of the camera at each track point can be determined in various ways.
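As a concrete illustration, the shooting control parameters can be gathered into a small data structure in which any subset may be supplied and the rest derived. The class and field names below are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ShootingControlParams:
    """Illustrative container for the shooting control parameters.

    Any two of (shooting_distance, shooting_height, bending_radius)
    determine the third, so each field is optional.
    """
    shooting_distance: Optional[float] = None   # d, metres
    shooting_height: Optional[float] = None     # H, metres
    bending_radius: Optional[float] = None      # r, metres
    start_position: Optional[Tuple[float, float, float]] = None  # physical coords of P1
    end_position: Optional[Tuple[float, float, float]] = None    # physical coords of P6
    direction_deg: Optional[float] = None       # heading of the track's ground projection
```

A user request might then carry, for example, only `shooting_height` and `shooting_distance`, leaving `bending_radius` to be computed on board.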
In one example, the position coordinates of the track points and the pitch angle of the camera at the track points may be determined analytically. In this approach, a warped coordinate system is constructed according to the shooting control parameters, and the position coordinates of the track points are determined in that coordinate system.
As shown in fig. 4, in some examples, the shooting distance d refers to the distance between the start point P1 and the end point P6 of the target track as measured along the ground, i.e., the distance between the projection point T0 of the start point P1 on the ground and the ground point T6 corresponding to the end point P6. The shooting height H refers to the height of the end point P6 above the ground, i.e., the distance between P6 and its corresponding ground point T6. The bending radius r refers to the height of the origin O of the warped coordinate system above the ground, i.e., the distance between O and its projection point on the ground, which is the ground point T1 corresponding to the start point P1.
The horizontal axis (x axis) of the warped coordinate system is the horizontal line passing through the start point P1 of the target track and extending toward the end point P6; the origin O of the warped coordinate system lies on this line, and the distance between O and the start point P1 is the difference H - r between the shooting height H and the bending radius r. As can be seen from the example shown in fig. 4, the shooting height H, the shooting distance d, and the bending radius r satisfy the relationship

d = (H - r) + (π/2) × r
With respect to the above three photographing control parameters, if two of them are obtained, the third parameter can be determined. The embodiment can directly acquire the shooting height H and the bending radius r sent by the control device. Considering that the shooting height H and the shooting distance d are intuitive parameters for the user, the shooting height H and the shooting distance d sent by the control device may be obtained first, and then the bending radius r may be calculated from the shooting height H and the shooting distance d.
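A minimal sketch of this two-of-three relationship, assuming the fig. 4 geometry gives d = (H - r) + (π/2)·r (a reconstruction, since the original formula image is not rendered in this text; function names are illustrative):

```python
import math

def shooting_distance_from(H, r):
    """Shooting distance d from shooting height H and bending radius r,
    assuming d = (H - r) + (pi/2) * r (reconstructed from fig. 4)."""
    return (H - r) + (math.pi / 2) * r

def bending_radius_from(H, d):
    """Invert the same relation: r = (d - H) / (pi/2 - 1)."""
    return (d - H) / (math.pi / 2 - 1)
```

This mirrors the text: the controller may send H and d (the intuitive parameters) and the drone derives r on board.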
The start and end positions of the target track refer to the physical coordinates of the start point P1 and the end point P6. In some examples, the physical coordinates may be coordinates in a geographic coordinate system, or coordinates in a body coordinate system of the drone. The direction of the target trajectory is a direction in which the target trajectory is projected on a horizontal plane, i.e., a direction between a projected point T0 of the start point P1 of the target trajectory on the ground and a ground point T6 corresponding to the end point P6. The direction may reflect in which direction the user wishes to take a shot from the starting point. For the three shooting control parameters, in some cases, the position coordinates of the track point and the pitch angle of the camera at the track point can be determined by obtaining two parameters. For example, the start point position and direction, or the start point position and end point position, or the end point position and direction of the target trajectory may be acquired.
The process of determining the position coordinates of the track points in the warped coordinate system and the pitch angle of the camera at each track point is described below with reference to fig. 5.
First, the equivalent position coordinates of the equivalent ground points in the warped coordinate system, and the equivalent pitch angle between the horizontal plane and the line connecting the start point of the target track with each equivalent ground point, are determined according to the shooting control parameters.
An equivalent ground point is a point on the equivalent ground; the equivalent ground is a quarter circular arc in the warped coordinate system, formed by folding up the ground of the target area.
Specifically, the equivalent position coordinates and the equivalent pitch angle may be determined by:
determining the initial pitch angle of the camera at the starting point of the target track according to the shooting height H and the bending radius r;
determining an equivalent pitch angle according to the initial pitch angle;
and determining the equivalent position coordinate according to the shooting height H, the bending radius r and the equivalent pitch angle.
As shown in fig. 5a, the starting pitch angle of the camera at the start point of the target track is α1. From the geometric relationship in fig. 5a,

α1 = arctan(r / (H - r))
where X denotes the position coordinate of the start point on the x axis of the warped coordinate system, X = H - r. The equivalent ground point corresponding to the ground point T2 is T2′. The equivalent pitch angle between the horizontal plane and the line connecting the start point P1 with the equivalent ground point T2′ is α′2, where α′2 = α1 - ω·t. Here t denotes the time taken to move from the ground point T1 to the ground point T2, and ω is the scanning angular velocity, which can be set by the user or preset in the drone or the camera. The equivalent position coordinates (Tx′, Ty′) of the equivalent ground point T2′ in the warped coordinate system can be determined from the following equations:

Tx′² + Ty′² = r²

-Ty′ = (X - Tx′) × tan α′2
Then, based on the equivalent position coordinates (Tx′, Ty′) and the equivalent pitch angle α′2, the pitch angle of the camera at the track point is determined.
As shown in fig. 5a, the arc angle θ of the equivalent ground point T2′ satisfies

θ = arctan(-Tx′ / (-Ty′))

and the angle β between the sight line and the vertical direction satisfies

β = π/2 - α′2 - θ

As shown in fig. 5b, the camera has a pitch angle α2 at the track point P2, where

α2 = α′2 + θ
the pitch angle of the camera at the track point P2 of the target track is thus obtained.
Next, the position coordinates of the ground point corresponding to the equivalent ground point in the warped coordinate system are determined according to the shooting height H, the bending radius r, and the equivalent position coordinates.
As can be seen from fig. 5a and 5b, the position coordinates of the ground point T2 corresponding to the equivalent ground point T2′ in the warped coordinate system are (Tx, Ty), where

Tx = -θ × r

Ty = -r
Finally, the position coordinates of the track point in the warped coordinate system are determined according to the shooting control parameters, the position coordinates of the ground point in the warped coordinate system, and the pitch angle of the camera at the track point.
Specifically, the distance between the equivalent ground point and the start point of the target trajectory is determined according to the shooting height H, the bending radius r and the equivalent position coordinates of the equivalent ground point.
As shown in fig. 5a, if the distance between the equivalent ground point T2′ and the start point P1 of the target track is L, then

L = √((X - Tx′)² + Ty′²)
The position coordinates of the track point in the warped coordinate system are then determined according to the distance between the equivalent ground point and the start point of the target track, the position coordinates of the ground point in the warped coordinate system, and the pitch angle of the camera at the track point.
As shown in fig. 5b, the position coordinates of the track point P2 in the warped coordinate system are (Px, Py), where

Px = Tx + L × sin β

Py = Ty + L × cos β
This yields the position coordinates of the track point P2 of the target track in the warped coordinate system. By analogy, processing every track point of the target track in the same way yields the position coordinates of all track points and the pitch angle of the camera at each of them, i.e., the target track and the target shooting posture of the camera as the drone moves along it.
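The per-track-point computation can be sketched as follows. Since the original formula images are not rendered in this text, the exact equation forms here are a reconstruction from the fig. 4 and fig. 5 geometry, and the function and argument names are illustrative; the sight ray from P1 is intersected with the quarter arc, the intersection is unfolded onto flat ground, and the track point is placed at distance L from the ground point along the line at pitch angle α2:

```python
import math

def trajectory_point(H, r, alpha_eq):
    """For an equivalent pitch angle alpha_eq (radians, between the initial
    pitch angle alpha1 and 0), return the camera pitch angle alpha2 and the
    track-point coordinates (Px, Py) in the warped coordinate system."""
    X = H - r                                    # x-coordinate of the start point P1
    c, s = math.cos(alpha_eq), math.sin(alpha_eq)
    # Far intersection of the sight ray from P1 = (X, 0) with the arc x^2 + y^2 = r^2;
    # L is the distance from P1 to the equivalent ground point (Tx', Ty')
    L = X * c + math.sqrt(r * r - (X * s) ** 2)
    Txp, Typ = X - L * c, -L * s                 # equivalent ground point (Tx', Ty')
    theta = math.atan2(-Txp, -Typ)               # arc angle of the equivalent ground point
    Tx, Ty = -theta * r, -r                      # unfolded (real) ground point (Tx, Ty)
    alpha2 = alpha_eq + theta                    # camera pitch angle at the track point
    beta = math.pi / 2 - alpha2                  # sight-line angle measured from vertical
    return alpha2, (Tx + L * math.sin(beta), Ty + L * math.cos(beta))
```

Two sanity checks on this reconstruction: at alpha_eq = α1 = arctan(r/(H - r)) it returns the start point (H - r, 0), and at alpha_eq = 0 it returns a point at height H above the ground with a vertical (π/2) pitch angle, matching the definition of the shooting height.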
In some examples, the user may set the shooting distance, the shooting height, and the bending radius among the shooting control parameters through the remote controller of the drone. For example, the user may select the "folded image mode" through the remote controller, and input or select the above shooting control parameter values in the remote controller's display while in that mode. In response to the user's operation, the remote controller sends a "folded image mode" instruction and the shooting control parameter values to the drone. After receiving them, the drone enters the "folded image mode" and determines the target track and the target shooting posture of the camera for when it moves along the target track.
In some examples, when the user selects "folded image mode", if the distance between the drone and the ground equals the user-set bending radius r, the current position of the drone may be taken as the predetermined point, i.e., the drone is already at the start point P1 of the target track, as shown in fig. 6. In this case, the shooting distance d is the distance between the projection point T0 of the drone's current position on the ground and the ground point T6 corresponding to the target track end point P6. After the user sets the direction of the target track, the drone can move along the target track from its current position in that direction, shooting the target area to obtain the image set. In some cases, the user may input or select the direction of the target track through the remote controller, which sends the set direction to the drone in response. In other cases, the user need not input or select the direction; instead, the heading of the drone when it enters "folded image mode" is used as the direction of the target track.
In some examples, when the user selects "folded image mode", the distance between the drone and the ground may not equal the user-set bending radius, so the current position of the drone is not the start point of the target track, as shown in fig. 7. In this case, the user also needs to set the start position of the target track, i.e., the physical coordinates of the start point P1, and the direction of the target track. The drone first moves from its current position to the start point P1, then moves along the target track from P1 in the set direction, shooting the target area to obtain the image set. In some cases, the user may input or select the start position and the direction of the target track through the remote controller; alternatively, the user may input or select the end position and the direction of the target track, from which the start position is derived. In a possible embodiment, the drone may also ascend or descend so that its distance from the ground equals the bending radius set by the user. In another possible embodiment, the user does not need to set the bending radius; instead, the distance between the drone and the ground when the user selects "folded image mode" is automatically taken as the bending radius.
After the image set is obtained through S102, the pixels to be spliced of each frame in the image set are extracted through S103. The pixels to be spliced comprise preset row pixels or preset column pixels, and the shooting contents corresponding to the pixels to be spliced of two adjacent frames do not overlap. Through S104, the pixels to be spliced of each frame are then spliced together according to the shooting order of the frames to generate the target image.
The camera may capture at a predetermined rate or frame rate in a target capture pose, resulting in a set of images. Each frame of image in the image set can correspond to each track point of the target track, and therefore the folded image of the target area can be obtained by extracting a plurality of pixels in each frame of image and splicing the extracted pixels of each frame of image according to a preset rule.
In some examples, the extracted preset row pixels include one or more rows in the middle of each frame, and the preset column pixels include one or more columns in the middle of each frame. The present embodiment is described below taking the extraction and splicing of preset row pixels as an example.
As shown in fig. 8a, the image set may include n frames. For frames 2 through n-1, the preset row pixels of each frame are its middle row or rows of pixels. This embodiment does not limit the number of preset rows; it can be set as needed, provided that the shooting contents corresponding to the preset row pixels of any two adjacent frames do not overlap. After the preset row pixels are extracted, as shown in fig. 8b, they are spliced together according to the shooting order of the frames: the preset rows of frames 2 through n-1, the preset rows of frame 1 together with all rows below them, and the preset rows of frame n together with all rows above them, are spliced into a complete target image. Fig. 8c shows an example of a stitched target image, in which the viewing angle transitions smoothly from oblique at the bottom of the image to vertical at the top, producing the folding effect.
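The row-extraction-and-stacking step can be sketched with NumPy as follows. The function name, the width of the middle band, and the bottom-to-top stacking order are illustrative assumptions based on figs. 8a-8c, not code from the patent:

```python
import numpy as np

def stitch_folded_image(frames, band=1):
    """frames: list of equal-size H x W (or H x W x C) arrays in shooting order.
    Keeps the middle `band` rows of every intermediate frame, everything from
    the middle band down for the first frame, and everything from the middle
    band up for the last frame, stacked so the last (vertical-view) frame ends
    up at the top of the target image and the first (oblique-view) frame at
    the bottom."""
    h = frames[0].shape[0]
    top = (h - band) // 2            # first row of the middle band
    bot = top + band                 # one past the last row of the middle band
    parts = [frames[-1][:bot]]       # frame n: middle band plus all rows above it
    parts += [f[top:bot] for f in reversed(frames[1:-1])]  # frames n-1 .. 2: band only
    parts.append(frames[0][top:])    # frame 1: middle band plus all rows below it
    return np.vstack(parts)
```

Because adjacent bands cover non-overlapping shooting content by construction, a plain vertical stack suffices; no feature matching or blending is needed.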
Therefore, according to the imaging method of this embodiment, the user only needs to set the shooting control parameters: the target trajectory of the movable platform and the target shooting posture of the camera as the platform moves along it are generated automatically, the image set is captured, and pixel selection and stitching are completed without intervention, so the folded image is produced automatically. Compared with the prior art, the user neither determines the target trajectory and shooting posture nor stitches the images, so dependence on the user's intuition and experience is eliminated, operation is simple and fast, the skill threshold for creating folded images is greatly reduced, and the function is much easier to popularize. Meanwhile, since pixel selection and stitching require no manual work, time and labor are saved and operating efficiency is greatly improved; the resulting folded image is smooth, and segmentation traces are greatly suppressed or even eliminated, improving both the quality of the folded image and the user experience.
The above is merely an exemplary illustration of this embodiment. The target trajectory in the figures includes six trajectory points P1-P6, but those skilled in the art will appreciate that this is for descriptive convenience only; in practice the target trajectory consists of a series of trajectory points. The shape of the target trajectory in the figures is an arc, but this is only one possible form; the target trajectory may take various other forms, such as those shown in figs. 9-11.
Another embodiment of the present disclosure provides an imaging method. For brevity, the same or similar features of this embodiment as those of the previous embodiment are not repeated, and only the differences from the previous embodiment will be described below.
According to the imaging method of this embodiment, the target trajectory of the drone and the target shooting posture of the camera can be generated automatically by fitting. Specifically:
First, the position coordinates of a group of control points in a warped coordinate system are acquired. The group of control points includes at least the start point and the end point of the target trajectory.

Then, a fitted curve is determined from the position coordinates of the group of control points, and the position coordinates of points on the fitted curve in the warped coordinate system are taken as the position coordinates of the trajectory points of the target trajectory.
The fitting approach also requires constructing the warped coordinate system. Referring to the description of the previous embodiment, and as shown in fig. 12, the position coordinates in the warped coordinate system of the start point P1 and the end point P6 of the target trajectory, used as control points, can be determined from the shooting height H and the bending radius r. The coordinates of the start point P1 in the warped coordinate system are (H - r, 0); the coordinates of the end point P6 are given by an expression that appears only as an embedded image in the original text (Figure BDA0002984646540000131).
In some examples, the fitted curve may be a spline curve, e.g. a cubic (3rd order) spline, determined from the position coordinates of the start point and the end point. In other examples, the group of control points may further include at least one intermediate trajectory point between the start point P1 and the end point P6.
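A fitted curve through the control points could, for illustration, be computed with a cubic spline. The sketch below is an assumption-laden example, not the disclosed method: the function name and the use of SciPy's `CubicSpline` with chord-length parametrisation are ours; the embodiment only requires that a spline pass through the start point, the end point, and any intermediate control points.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fit_target_trajectory(control_points, n_samples=50):
    """Fit a cubic spline through control points given as (x, y)
    coordinates in the warped coordinate system and sample n_samples
    trajectory points along it.  The first and last control points are
    the start and end of the target trajectory."""
    pts = np.asarray(control_points, dtype=float)
    # Chord-length parametrisation, so the curve need not be a function y(x).
    t = np.concatenate([[0.0], np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))])
    spline = CubicSpline(t, pts, axis=0)
    return spline(np.linspace(0.0, t[-1], n_samples))
```

The sampled points interpolate the control points exactly, so the first and last trajectory points coincide with the start and end control points.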
After the target trajectory of the drone is fitted, the target pitch angle of the camera needs to be determined. In some examples, the target pitch angle of the camera as the drone moves along the target trajectory is determined such that, when the drone moves along the target trajectory to its end point at a first predetermined speed, the intersection point of the camera's optical axis with the ground moves at a second predetermined speed to the projection of the end point on the ground; the angle between the line connecting the camera to this intersection point and the horizontal plane during the movement is taken as the target pitch angle. The first and second predetermined speeds can be set by the user through the remote controller or preset in the drone. As shown in fig. 13, as the drone moves along the target trajectory from the start point P1 to the end point P6 at the first predetermined speed, the intersection point T1 of the optical axis with the ground (i.e., the ground point corresponding to P1) moves at the second predetermined speed to the ground point T6 corresponding to P6, and the angle between the camera-to-intersection line and the horizontal plane is taken as the target pitch angle of the camera.
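The pitch-angle rule can be illustrated numerically. In the sketch below, the trajectory points are assumed to be sampled uniformly in time (so even index spacing stands in for the first predetermined speed), the ground aim point moves linearly (the second predetermined speed), and everything is reduced to a 2D vertical plane; the function name and these simplifications are ours.

```python
import math

def target_pitch_angles(track_points, aim_start, aim_end):
    """Pitch angles along a trajectory.  Each trajectory point is (x, h):
    horizontal position and height above ground.  The ground aim point
    moves linearly from aim_start to aim_end (ground x-coordinates) as
    the platform covers the trajectory.  Returns one angle per point,
    in radians, measured downward from the horizontal."""
    n = len(track_points)
    angles = []
    for i, (x, h) in enumerate(track_points):
        s = i / (n - 1)                        # fraction of the path covered
        aim = aim_start + s * (aim_end - aim_start)
        angles.append(math.atan2(h, aim - x))  # pi/2 when the aim point is directly below
    return angles
```

At the end of the trajectory the aim point coincides with the projection of the end point, so the last angle is exactly 90 degrees (a vertical, nadir view), matching the smooth oblique-to-vertical transition described above.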
Therefore, according to the imaging method of this embodiment, the user only needs to set the positions of the control points; the target trajectory of the movable platform and the target shooting posture of the camera as the platform moves along it are generated automatically. Besides simplifying user operation, improving operating efficiency, and improving the quality of the folded image, the shooting control parameters the user must input are more intuitive, no complex calculation is required, and generating the folded image becomes simpler and more efficient.
The above is merely an exemplary illustration of this embodiment. The shape of the target trajectory in the figures is an arc, but those skilled in the art will understand that this is only one possible form; the target trajectory may take various other forms, such as those shown in figs. 14 and 15.
Still another embodiment of the present disclosure provides an image forming apparatus, as shown in fig. 16, including:
a memory for storing executable instructions;
a processor to execute the executable instructions stored in the memory to perform the following:
acquiring shooting control parameters, automatically generating a target track of the movable platform according to the shooting control parameters, and determining a target shooting posture of the shooting device when the movable platform moves along the target track;
controlling the movable platform to move along the target track, and controlling the shooting device to shoot according to the target shooting posture in the moving process so as to obtain an image set;
extracting pixels to be spliced of each frame of image in the image set, wherein the pixels to be spliced comprise preset row pixels or preset column pixels, and shooting contents corresponding to the pixels to be spliced of two adjacent frames of images are not overlapped with each other;
and splicing the pixels to be spliced of each frame image together according to the shooting sequence of the image frames to generate a target image.
In the imaging apparatus of the present embodiment, the processor may perform operations corresponding to the steps in the imaging method of the above-described embodiment.
In some examples, the target trajectory is characterized by the position coordinates of its trajectory points, and the target shooting posture includes the pitch angle of the shooting device at the trajectory points.
In some examples, the target trajectory of the movable platform is automatically generated analytically and/or by fitting.
In some examples, the photographing control parameters include: a photographing height, a bending radius, a position of a start point of the target track, and a direction of the target track.
In some examples, the shooting height and the bending radius sent by the control device are acquired; or the shooting height and the shooting distance along the direction sent by the control device are acquired, and the bending radius is determined from the shooting height and the shooting distance.
In some examples, the shooting distance is a distance between the start point and an end point of the target trajectory; the shooting height is the height from the end point of the target track to the ground.
In some examples, a warped coordinate system is constructed from the shooting control parameters, and position coordinates of a trajectory point of the target trajectory in the warped coordinate system are determined.
In some examples, the horizontal axis of the warped coordinate system is a horizontal line extending through the start point of the target trajectory toward its end point, and the origin of the warped coordinate system lies on this horizontal line at a distance from the start point equal to the difference between the shooting height and the bending radius.
In some examples, the bend radius is a distance of the origin from the ground.
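As an illustration of this coordinate-system construction, the sketch below places the origin on the horizontal axis at a distance of (shooting height − bending radius) behind the start point, consistent with the start point's coordinates (H − r, 0) given earlier. The function name, the 2D ground-plane simplification, and the placement of the origin behind the start point are our assumptions.

```python
import math

def warped_coordinate_system(start, direction, shooting_height, bend_radius):
    """World position of the warped-system origin and the warped
    coordinates of the trajectory start point.  `start` is the world
    (x, y) position of the start point, `direction` a vector from the
    start point toward the end point; both lie in a horizontal plane."""
    norm = math.hypot(direction[0], direction[1])
    ux, uy = direction[0] / norm, direction[1] / norm
    offset = shooting_height - bend_radius      # origin-to-start distance
    # The origin sits on the horizontal axis, behind the start point,
    # so the start point lands at (H - r, 0) in warped coordinates.
    origin = (start[0] - offset * ux, start[1] - offset * uy)
    start_warped = (offset, 0.0)
    return origin, start_warped
```

For example, with a start point at world (10, 0), direction (1, 0), shooting height 100 and bending radius 30, the origin lies at world (-60, 0) and the start point at warped coordinates (70, 0).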
In some examples, the processor is further configured to: acquire the position coordinates of a group of control points in the warped coordinate system, the group including at least the start point and the end point of the target track; determine a fitted curve from those position coordinates; and take the position coordinates of points on the fitted curve in the warped coordinate system as the position coordinates of the trajectory points of the target track.
In some examples, the position coordinates of the set of control points in the warped coordinate system are determined by the capture height and the bend radius.
In some examples, the set of control points further comprises: at least one intermediate point between the start and the end point.
In some examples, the target capture pose comprises: a target pitch angle; the processor is further configured to perform the following operations:
and determining the target shooting posture of the shooting device when the movable platform moves along the target track, so that when the movable platform moves to the end point of the target track at a first preset speed along the target track, the intersection point of the optical axis of the shooting device and the ground moves to the projection point of the end point of the target track on the ground at a second preset speed.
In some examples, the processor is further configured to: and taking the included angle between the horizontal plane and the connecting line of the shooting device and the intersection point in the moving process as the target pitch angle.
In some examples, the preset rows of pixels include one or more rows in the middle of the image, and the preset columns of pixels include one or more columns in the middle of the image. The pixels to be spliced of the first frame image also include all rows of pixels below the preset rows (or all columns of pixels below the preset columns), and the pixels to be spliced of the last frame image also include all rows of pixels above the preset rows (or all columns of pixels above the preset columns).
Yet another embodiment of the present disclosure also provides a computer-readable storage medium storing executable instructions that, when executed by one or more processors, may cause the one or more processors to perform the imaging method of the above-described embodiment.
A computer-readable storage medium may be, for example, any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the readable storage medium include: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or wired/wireless communication links.
In addition, the computer program may be implemented as computer program code comprising, for example, computer program modules. The division and number of the modules are not fixed; those skilled in the art may use suitable program modules or combinations thereof according to the actual situation. When these program modules are executed by a computer (or processor), the computer may carry out the flow of the imaging method described in the present disclosure and its variants.
Still another embodiment of the present disclosure provides a movable platform, as shown in fig. 17, including the imaging apparatus of the above embodiment. The imaging apparatus is mounted on the movable platform directly or through a gimbal. In some examples, the movable platform is a drone.
Still another embodiment of the present disclosure provides an electronic device, as shown in fig. 18, including: the image forming apparatus of the above embodiment. In some examples, the electronic device is a remote control of the movable platform.
It is obvious to those skilled in the art that, for convenience and simplicity of description, the foregoing division into functional modules is merely an example; in practical applications, the above functions may be distributed among different functional modules as needed, i.e., the internal structure of the device may be divided into different functional modules to perform all or part of the functions described above. For the specific working process of the device, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present disclosure, not to limit them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described therein may still be modified, or some or all of their technical features may be equivalently replaced; features of the embodiments of the present disclosure may be combined arbitrarily where no conflict arises; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.

Claims (39)

1. An imaging method for a movable platform including a camera, the method comprising:
acquiring shooting control parameters, automatically generating a target track of the movable platform according to the shooting control parameters, and determining a target shooting posture of the shooting device when the movable platform moves along the target track;
controlling the movable platform to move along the target track, and controlling the shooting device to shoot according to the target shooting posture in the moving process so as to obtain an image set;
extracting pixels to be spliced of each frame of image in the image set, wherein the pixels to be spliced comprise preset row pixels or preset column pixels, and shooting contents corresponding to the pixels to be spliced of two adjacent frames of images are not overlapped with each other;
and splicing the pixels to be spliced of each frame image together according to the shooting sequence of the image frames to generate a target image.
2. The imaging method of claim 1,
the target trajectory is characterized by the following parameters: position coordinates of the track points;
the target shooting pose includes: the pitch angle of the shooting device at the track points.
3. The imaging method of claim 1, wherein said automatically generating a target trajectory of the movable platform from the capture control parameters comprises:
and automatically generating the target track of the movable platform in an analytic mode and/or a fitting mode.
4. The imaging method according to claim 1, wherein the photographing control parameter includes: a photographing height, a bending radius, a position of a start point of the target track, and a direction of the target track.
5. The imaging method according to claim 4, wherein the acquiring of the photographing control parameter includes:
acquiring the shooting height and the bending radius sent by a control device;
or
acquiring the shooting height and the shooting distance in the direction sent by the control device;
and determining the bending radius according to the shooting height and the shooting distance.
6. The imaging method as set forth in claim 5,
the shooting distance is the distance between the starting point and the end point of the target track;
the shooting height is the height from the end point of the target track to the ground.
7. The imaging method of claim 4, wherein said automatically generating a target trajectory of the movable platform from the capture control parameters comprises:
and constructing a distorted coordinate system according to the shooting control parameters, and determining the position coordinates of the track points of the target track in the distorted coordinate system.
8. The imaging method as set forth in claim 7,
the horizontal axis of the distorted coordinate system is a horizontal line extending from the starting point of the target track in the direction of the end point of the target track; the distance between the origin of the distorted coordinate system and the starting point of the target track is the difference between the photographing height and the bending radius, and the origin is on the horizontal line.
9. The imaging method of claim 8, wherein the bend radius is a distance of the origin from a ground surface.
10. The imaging method of claim 7, wherein said determining the location coordinates of the trajectory points of the target trajectory in the warped coordinate system comprises:
acquiring position coordinates of a group of control points in the distorted coordinate system; the set of control points comprises at least: the start and end points of the target track;
and determining a fitting curve according to the position coordinates of the group of control points, and taking the position coordinates of the points on the fitting curve in the distorted coordinate system as the position coordinates of the track points of the target track in the distorted coordinate system.
11. The imaging method of claim 10, wherein the position coordinates of the set of control points in the warped coordinate system are determined by the capture height and the bend radius.
12. The imaging method of claim 10, wherein the set of control points further comprises: at least one intermediate point between the start and the end point.
13. The imaging method of claim 1, wherein the target capture pose comprises: a target pitch angle;
the determining of the target shooting posture of the shooting device when the movable platform moves along the target track comprises:
and determining the target shooting posture of the shooting device when the movable platform moves along the target track, so that when the movable platform moves to the end point of the target track at a first preset speed along the target track, the intersection point of the optical axis of the shooting device and the ground moves to the projection point of the end point of the target track on the ground at a second preset speed.
14. The imaging method according to claim 13, wherein an angle between a line connecting the photographing device and the intersection point and a horizontal plane during the movement is taken as the target pitch angle.
15. The imaging method of claim 1,
the preset row of pixels comprises: a middle row or rows of pixels of the image;
the preset column of pixels comprises: one or more columns of pixels in the middle of the image.
16. The imaging method of claim 1,
the preset row pixels of the first frame image comprise: all rows of pixels below the preset row;
the preset column pixels of the first frame image comprise: all columns of pixels below the preset column.
17. The imaging method of claim 1,
the preset row pixels of the last frame image comprise: all rows of pixels above the preset row;
the preset column pixels of the last frame image comprise: all columns of pixels above the preset column.
18. An image forming apparatus, comprising:
a memory for storing executable instructions;
a processor to execute the executable instructions stored in the memory to perform the following:
acquiring shooting control parameters, automatically generating a target track of the movable platform according to the shooting control parameters, and determining a target shooting posture of the shooting device when the movable platform moves along the target track;
controlling the movable platform to move along the target track, and controlling the shooting device to shoot according to the target shooting posture in the moving process so as to obtain an image set;
extracting pixels to be spliced of each frame of image in the image set, wherein the pixels to be spliced comprise preset row pixels or preset column pixels, and shooting contents corresponding to the pixels to be spliced of two adjacent frames of images are not overlapped with each other;
and splicing the pixels to be spliced of each frame image together according to the shooting sequence of the image frames to generate a target image.
19. The imaging apparatus of claim 18,
the target trajectory is characterized by the following parameters: position coordinates of the track points;
the target shooting pose includes: the pitch angle of the shooting device at the track points.
20. The imaging apparatus of claim 18, wherein the processor is further configured to:
and automatically generating the target track of the movable platform in an analytic mode and/or a fitting mode.
21. The imaging apparatus of claim 18, wherein the photographing control parameter includes: a photographing height, a bending radius, a position of a start point of the target track, and a direction of the target track.
22. The imaging apparatus of claim 21, wherein the processor is further configured to:
acquiring the shooting height and the bending radius sent by a control device;
or
acquiring the shooting height and the shooting distance in the direction sent by the control device;
and determining the bending radius according to the shooting height and the shooting distance.
23. The imaging apparatus of claim 22,
the shooting distance is the distance between the starting point and the end point of the target track;
the shooting height is the height from the end point of the target track to the ground.
24. The imaging apparatus of claim 21, wherein the processor is further configured to:
and constructing a distorted coordinate system according to the shooting control parameters, and determining the position coordinates of the track points of the target track in the distorted coordinate system.
25. The imaging apparatus of claim 24,
the horizontal axis of the distorted coordinate system is a horizontal line extending from the starting point of the target track in the direction of the end point of the target track; the distance between the origin of the distorted coordinate system and the starting point of the target track is the difference between the photographing height and the bending radius, and the origin is on the horizontal line.
26. The imaging apparatus of claim 25, wherein the bend radius is a distance of the origin from a ground surface.
27. The imaging apparatus of claim 24, wherein the processor is further configured to:
acquiring position coordinates of a group of control points in the distorted coordinate system; the set of control points comprises at least: the start and end points of the target track;
and determining a fitting curve according to the position coordinates of the group of control points, and taking the position coordinates of the points on the fitting curve in the distorted coordinate system as the position coordinates of the track points of the target track in the distorted coordinate system.
28. The imaging apparatus of claim 27, wherein the position coordinates of the set of control points in the warped coordinate system are determined by the capture height and the bend radius.
29. The imaging apparatus of claim 27, wherein the set of control points further comprises: at least one intermediate point between the start and the end point.
30. The imaging apparatus of claim 18, wherein the target capture pose comprises: a target pitch angle;
the processor is further configured to perform the following operations:
and determining the target shooting posture of the shooting device when the movable platform moves along the target track, so that when the movable platform moves to the end point of the target track at a first preset speed along the target track, the intersection point of the optical axis of the shooting device and the ground moves to the projection point of the end point of the target track on the ground at a second preset speed.
31. The imaging apparatus of claim 30, wherein the processor is further configured to:
and taking the included angle between the horizontal plane and the connecting line of the shooting device and the intersection point in the moving process as the target pitch angle.
32. The imaging apparatus of claim 18,
the preset row of pixels comprises: a middle row or rows of pixels of the image;
the preset column of pixels comprises: one or more columns of pixels in the middle of the image.
33. The imaging apparatus of claim 18,
the preset row pixels of the first frame image comprise: all rows of pixels below the preset row;
the preset column pixels of the first frame image comprise: all columns of pixels below the preset column.
34. The imaging apparatus of claim 18,
the preset row pixels of the last frame image comprise: all rows of pixels above the preset row;
the preset column pixels of the last frame image comprise: all columns of pixels above the preset column.
35. A computer-readable storage medium having stored thereon executable instructions that, when executed by one or more processors, may cause the one or more processors to perform the imaging method of any one of claims 1 to 17.
36. A movable platform, comprising: the imaging apparatus of any one of claims 18-34.
37. The movable platform of claim 36, wherein the movable platform is a drone.
38. An electronic device, comprising: the imaging apparatus of any one of claims 18-34.
39. The electronic device of claim 38, wherein the electronic device is a remote control of a movable platform.
CN202080005244.2A 2020-06-02 2020-06-02 Imaging method, imaging apparatus, computer-readable storage medium Pending CN112771842A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/093940 WO2021243566A1 (en) 2020-06-02 2020-06-02 Imaging method, imaging apparatus, and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN112771842A (en) 2021-05-07

Family

ID=75699520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080005244.2A Pending CN112771842A (en) 2020-06-02 2020-06-02 Imaging method, imaging apparatus, computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN112771842A (en)
WO (1) WO2021243566A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546111B (en) * 2022-09-13 2023-12-05 武汉海微科技有限公司 Curved surface screen detection method, device, equipment and storage medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104168455A (en) * 2014-08-08 2014-11-26 北京航天控制仪器研究所 Air-based large-scene photographing system and method
CN105262951A (en) * 2015-10-22 2016-01-20 努比亚技术有限公司 Mobile terminal having binocular camera and photographing method
CN105988666A (en) * 2015-03-23 2016-10-05 Lg电子株式会社 Mobile terminal and control method thereof
CN106331527A (en) * 2016-10-12 2017-01-11 腾讯科技(北京)有限公司 Image splicing method and device
CN106792078A (en) * 2016-07-12 2017-05-31 乐视控股(北京)有限公司 Method for processing video frequency and device
WO2017124840A1 (en) * 2016-01-22 2017-07-27 深圳泰山体育科技股份有限公司 Optical control method and system for aircraft
CN107343153A (en) * 2017-08-31 2017-11-10 王修晖 A kind of image pickup method of unmanned machine, device and unmanned plane
CN107516294A (en) * 2017-09-30 2017-12-26 百度在线网络技术(北京)有限公司 The method and apparatus of stitching image
CN108513642A (en) * 2017-07-31 2018-09-07 深圳市大疆创新科技有限公司 A kind of image processing method, unmanned plane, ground control cabinet and its image processing system
CN109032184A (en) * 2018-09-05 2018-12-18 深圳市道通智能航空技术有限公司 Flight control method, device, terminal device and the flight control system of aircraft
CN110073403A (en) * 2017-11-21 2019-07-30 深圳市大疆创新科技有限公司 Image output generation method, equipment and unmanned plane
CN110268704A (en) * 2017-09-29 2019-09-20 深圳市大疆创新科技有限公司 Method for processing video frequency, equipment, unmanned plane and system
CN110717861A (en) * 2019-12-12 2020-01-21 成都纵横大鹏无人机科技有限公司 Image splicing method and device, electronic equipment and computer readable storage medium
CN110741626A (en) * 2018-10-31 2020-01-31 深圳市大疆创新科技有限公司 Shooting control method, movable platform, control device and storage medium
CN110751683A (en) * 2019-10-28 2020-02-04 北京地平线机器人技术研发有限公司 Trajectory prediction method and device, readable storage medium and electronic equipment
CN111192286A (en) * 2018-11-14 2020-05-22 西安中兴新软件有限责任公司 Image synthesis method, electronic device and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016015251A1 (en) * 2014-07-30 2016-02-04 SZ DJI Technology Co., Ltd. Systems and methods for target tracking
CN104754228A (en) * 2015-03-27 2015-07-01 广东欧珀移动通信有限公司 Mobile terminal and method for taking photos by using cameras of mobile terminal
CN107945112B (en) * 2017-11-17 2020-12-08 浙江大华技术股份有限公司 Panoramic image splicing method and device
CN110648283B (en) * 2019-11-27 2020-03-20 成都纵横大鹏无人机科技有限公司 Image splicing method and device, electronic equipment and computer readable storage medium


Also Published As

Publication number Publication date
WO2021243566A1 (en) 2021-12-09

Similar Documents

Publication Publication Date Title
JP7020522B2 (en) Information processing equipment, information processing methods, computer-readable media, imaging systems, and flying objects
US20200104598A1 (en) Imaging control method and device
US20200162682A1 (en) Video processing method, device, aircraft, and system
US11513511B2 (en) Techniques for image recognition-based aerial vehicle navigation
US9866752B2 (en) Systems and methods for producing a combined view from fisheye cameras
CN113038016B (en) Unmanned aerial vehicle image acquisition method and unmanned aerial vehicle
EP3182202B1 (en) Selfie-drone system and performing method thereof
US11611811B2 (en) Video processing method and device, unmanned aerial vehicle and system
CN108702444A Image processing method, unmanned aerial vehicle, and system
CN105573341A Aerial vehicle optical control method and aerial vehicle optical control system
CN107643758A Autonomous system for shooting moving images, comprising an unmanned aerial vehicle and a ground station
US11295621B2 (en) Methods and associated systems for managing 3D flight paths
CN108513642B (en) Image processing method, unmanned aerial vehicle, ground console and image processing system thereof
US20150278987A1 (en) Multi-viewpoint image capturing method and image display method
CN112639652A (en) Target tracking method and device, movable platform and imaging platform
CN108731681A Navigation method for a rotary-wing unmanned aerial vehicle, related computer program, electronic device, and unmanned aerial vehicle
US20190047706A1 (en) Flight direction display method and apparatus, and unmanned aerial vehicle
CN112771842A (en) Imaging method, imaging apparatus, computer-readable storage medium
WO2021056352A1 (en) Unmanned aerial vehicle simulation method and simulation device, and computer readable storage medium
CN110637269A Unmanned aerial vehicle control method, control device, and unmanned aerial vehicle
US10249340B2 (en) Video generation device, video generation program, and video generation method
CN108419052A Panoramic imaging method using multiple unmanned aerial vehicles
CN112334853A Course adjustment method, ground-side device, unmanned aerial vehicle, system, and storage medium
WO2023201574A1 (en) Control method for unmanned aerial vehicle, image display method, unmanned aerial vehicle, and control terminal
WO2019227352A1 (en) Flight control method and aircraft

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210507