WO2018098824A1 - A shooting control method, apparatus, and control device - Google Patents

A shooting control method, apparatus, and control device

Info

Publication number
WO2018098824A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
shooting
target object
image
angle
Prior art date
Application number
PCT/CN2016/108446
Other languages
English (en)
French (fr)
Inventor
钱杰
李昊南
赵丛
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2016/108446 priority Critical patent/WO2018098824A1/zh
Priority to CN201680030410.8A priority patent/CN107710283B/zh
Publication of WO2018098824A1 publication Critical patent/WO2018098824A1/zh
Priority to US16/426,975 priority patent/US10897569B2/en
Priority to US17/151,335 priority patent/US11575824B2/en
Priority to US18/164,811 priority patent/US11863857B2/en
Priority to US18/544,884 priority patent/US20240155219A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/617 Upgrading or updating of programs or applications for camera control
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N23/635 Region indicators; Field of view indicators
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/69 Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • the present invention relates to the field of automatic control technologies, and in particular, to a shooting control method, apparatus, and control device.
  • Various imaging devices, such as cameras and video recorders, have emerged. With these imaging devices, people can capture images of various objects. If these imaging devices are mounted on certain moving objects, such as smart flying devices like drones, monitoring of certain objects can also be achieved. Here, monitoring means that the imaging device can keep capturing the target object that needs to be continuously monitored, regardless of how the moving object carrying the imaging device moves.
  • The current implementation mainly relies on image recognition technology. Specifically, the position of the target object is determined from the grayscale, texture, and similar features of the image region where the target object appears in the image captured by the imaging device; as the moving object moves, the shooting angle of the imaging device is adjusted toward the determined position of the target object to acquire a new image and perform image recognition again, thereby achieving continuous monitoring of the target object.
  • However, image recognition based on features such as grayscale and texture is relatively complicated, and the required software and hardware costs are high. Moreover, if the target object to be monitored is occluded, the image recognition technology cannot recognize the target object, which causes operation errors.
  • The embodiments of the invention provide a shooting control method, device, and control device, which can monitor a confirmed target object in a relatively simple and accurate manner.
  • An embodiment of the present invention provides a shooting control method, including: acquiring an information set including at least two sets of shooting information, where the shooting information includes shooting position information and shooting angle information when the target object is captured; determining position estimation information of the target object based on at least two sets of shooting information selected from the information set, wherein the positions corresponding to the shooting position information in each selected set of shooting information are different; and generating a shooting adjustment instruction according to the position estimation information to adjust an imaging device, where the shooting adjustment instruction is configured to adjust a shooting angle of the imaging device such that a position corresponding to the position estimation information is within a field of view of the imaging device.
  • an embodiment of the present invention further provides a shooting control apparatus, including:
  • An acquiring module configured to acquire a set of information including at least two sets of shooting information, where the shooting information includes: shooting position information and shooting angle information when the target object is captured;
  • a determining module configured to determine position estimation information of the target object based on at least two sets of shooting information selected from the information set, wherein the location corresponding to the shooting location information in each selected group of shooting information is different;
  • a control module, configured to generate a shooting adjustment instruction according to the position estimation information to adjust an imaging device; the shooting adjustment instruction is used to adjust a shooting angle of the imaging device, so that a position corresponding to the position estimation information is within the field of view of the imaging device.
  • an embodiment of the present invention further provides a control device, including: a processor and an output interface;
  • The processor is configured to: acquire an information set including at least two sets of shooting information, where the shooting information includes shooting position information and shooting angle information when the target object is captured; determine position estimation information of the target object based on at least two sets of shooting information selected from the information set, wherein the positions corresponding to the shooting position information in each selected set of shooting information are different; and generate a shooting adjustment instruction according to the position estimation information to adjust the imaging device, where the adjustment instruction is used to adjust the shooting angle of the imaging device so that the position corresponding to the position estimation information is within the field of view of the imaging device. The output interface is configured to output the adjustment instruction to adjust the imaging device.
  • In the embodiments of the present invention, the position of the target object that requires continuous shooting is estimated from the shooting positions and shooting angles, and the shooting direction of the imaging module is then adjusted based on the position estimation information obtained from that estimate. This implementation is simple and fast, effectively avoids image recognition errors caused by occlusion of the target object, and improves the efficiency of continuous shooting of the target object.
  • FIG. 1 is a schematic diagram of position coordinates of an embodiment of the present invention
  • FIG. 2a is a schematic diagram of an image coordinate system and an angle of view according to an embodiment of the present invention;
  • FIG. 2b is a schematic diagram of a field of view according to an embodiment of the present invention;
  • FIG. 3 is a schematic flow chart of a shooting control method according to an embodiment of the present invention.
  • FIG. 4 is a schematic flow chart of another shooting control method according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart diagram of a method for adjusting an image capturing apparatus according to an embodiment of the present invention
  • FIG. 6 is a schematic flow chart of a mobile control method according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a photographing control apparatus according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of one of the control modules of FIG. 7;
  • FIG. 9 is a schematic structural diagram of a control device according to an embodiment of the present invention.
  • monitoring of a certain target object can be realized by a movable moving object carrying the imaging device.
  • the moving object may be an Unmanned Aerial Vehicle (UAV), or an unmanned vehicle, a movable robot, or the like.
  • These moving objects can carry an imaging device by means of a pan/tilt (gimbal).
  • The gimbal can be a three-axis gimbal that rotates about the yaw, pitch, and roll axes. By controlling the rotation angle of the gimbal on one or more of these axes, it can be better guaranteed that the target object is continuously captured when a moving object such as a drone moves to a certain place or position.
  • The image containing the target object captured by the imaging device can be transmitted back to a ground device through a wireless link.
  • For example, the image containing the target object captured by the drone can be transmitted to a smartphone or tablet through a wireless link.
  • Such smart terminals establish a communication link with the drone, or directly with the imaging device, before receiving the image containing the target object.
  • the target object can be an object specified by the user, such as an environmental object.
  • The image captured by the imaging device can be displayed in a user interface, and the user selects an object as the target object by a click operation on the image displayed in the user interface.
  • a user can select a tree, an animal, or an object of a certain area as a target object.
  • The user may also input only the image features of certain objects, such as a facial feature or the shape feature of an object; a corresponding processing module then performs image processing to find the person or object matching the image feature, and the person or object found is taken as the target object.
  • The target object may be a stationary object, an object that does not move during the period of continuous shooting, or an object whose speed of movement during continuous shooting is much smaller than that of the moving object such as a drone, for example, such that the speed difference between the two is less than a preset threshold.
  • During the moving shooting performed by a moving object, such as a drone, equipped with the imaging device, the captured images can be analyzed and recognized by image recognition technology.
  • Specifically, each captured image may be recognized based on features such as grayscale and texture to find the target object and continuously capture it.
  • During continuous shooting of the target object, the target object may be lost, which can happen for several reasons. Specifically, after the target object is occluded by another object, image recognition based on features such as grayscale and texture may fail to find the target object, resulting in its loss; or, if the moving object is far away from the target object, the grayscale, texture, and similar features of the target object in the captured image are insufficient for the target object to be recognized from the image, again causing it to be lost. Of course, the target object may be lost in other cases as well, for example when the lens of the imaging device is exposed to strong light so that features such as grayscale and texture in the captured image are weak, or when the module performing the image recognition processing is faulty. It should be noted that losing the target object, as used above, means that the target object cannot be determined in the image.
  • An image satisfying the condition for the target object means that, for a given captured image, the target object can be accurately recognized in that image based on image recognition technology.
  • The shooting information recorded at the time of shooting includes shooting position information and shooting angle information. The shooting position information indicates the position of the imaging device when it captures the target object and may be the positioning information of the moving object carrying the imaging device, for example, GPS coordinates. The shooting angle information in the embodiments of the present invention indicates the orientation of the target object relative to the imaging device when the imaging device captures the target object; this orientation may be determined jointly from the attitude angles of the pan/tilt (the yaw angle yaw and the pitch angle pitch) and the display position of the target object in the captured image.
  • the embodiment of the present invention detects at least two images satisfying the condition and records the corresponding shooting information.
  • The recorded shooting information constitutes an information set, so that the position estimation information of the target object can be calculated from it; this, to a certain degree, conveniently satisfies the user's continuous shooting needs when the target object is lost, or when the object needs to be shot continuously based directly on its position.
  • the positions corresponding to the shooting position information included in each group of shooting information in the information set are different.
  • The imaging device is mounted on the moving object through the pan/tilt.
  • The shooting position information includes the collected position coordinates of the moving object.
  • The shooting angle information includes an angle calculated from the attitude information of the pan/tilt and the display position of the target object in the captured image.
  • The pitch angle in the shooting angle information may be taken as the pitch angle pitch of the pan/tilt, and the yaw angle in the shooting angle information as the yaw angle yaw of the gimbal.
  • Further, the offset angle of the target object, relative to the image center, with respect to the X-axis of the image may be determined according to the pixel distance dp1 of the target object relative to the X-axis of the image physical coordinate system and the horizontal field of view angle. The pitch angle in the shooting angle information is then the pitch angle of the pan/tilt plus the offset angle relative to the X-axis of the image, and the yaw angle in the shooting angle information is the yaw angle of the pan/tilt plus the offset angle relative to the Y-axis of the image. Specifically, FIG. 2a and FIG. 2b show the physical coordinate system of the image and the horizontal and vertical field of view angles of the imaging device; based on the ratios of the pixel distances of the center point of the target object relative to the X-axis and the Y-axis, together with the corresponding field of view angles, the offset angle with respect to the X-axis of the image and the offset angle with respect to the Y-axis of the image can be obtained.
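  • As an illustration only, the following Python sketch computes the two offset angles from the pixel position of the target's center point, the image size, and the field of view angles, and adds them to the gimbal attitude to obtain the shooting angle information. It assumes the simple proportional (ratio-based) mapping described above rather than a tan()-based pinhole model, and the sign conventions are assumptions of the sketch, not taken from the patent.
        import math

        def offset_angles(px, py, image_width, image_height, hfov_deg, vfov_deg):
            """Offset angles of pixel (px, py) from the image centre.

            Assumes a proportional mapping between pixel distance and field of
            view, as suggested by the ratio-based description in the text; a
            tan()-based pinhole model could be substituted for more accuracy.
            """
            dx = px - image_width / 2.0   # horizontal pixel offset from the centre
            dy = py - image_height / 2.0  # vertical pixel offset from the centre
            yaw_offset = dx / image_width * hfov_deg      # offset w.r.t. the image Y-axis
            pitch_offset = -dy / image_height * vfov_deg  # offset w.r.t. the image X-axis
            return yaw_offset, pitch_offset

        def shooting_angles(gimbal_yaw, gimbal_pitch, px, py, w, h, hfov, vfov):
            """Shooting angle information = gimbal attitude plus the image offsets."""
            yaw_off, pitch_off = offset_angles(px, py, w, h, hfov, vfov)
            return gimbal_yaw + yaw_off, gimbal_pitch + pitch_off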
  • The selection rule used to select at least two sets of shooting information from the information set includes: selecting shooting information based on the separation distance calculated from the shooting position information in the shooting information; and/or selecting shooting information based on the interval angle calculated from the shooting angle information in the shooting information.
  • The condition for performing continuous shooting based on position may include: receiving a control instruction issued by the user to perform continuous shooting based on position, or being able to calculate the position coordinates of the target object relatively accurately from the information in the already recorded information set.
  • The calculation of the position estimation information of the target object is described below by taking only two sets of shooting information as an example.
  • Suppose the coordinates of the target object are t(tx, ty). In the first selected set of shooting information, the shooting position information is d1(d1x, d1y) and the yaw angle in the shooting angle information is yaw1; in the second set, the shooting position information is d2(d2x, d2y) and the yaw angle in the shooting angle information is yaw2.
  • The pitch angle in the shooting angle information of the first set is pitch1, and the pitch angle in the shooting angle information of the second set is pitch2.
  • The position estimation information of the target object includes the calculated coordinates t.
  • d1 and d2 may be positioning coordinates acquired by the positioning module in the moving object, for example, GPS coordinates obtained by the GPS positioning module in the drone.
  • The yaw angle and the pitch angle in the shooting angle information are calculated, respectively, from the yaw angle of the pan/tilt together with the distance of the image position of the target object from the Y-axis of the image when an image in which the target object can be recognized is captured, and from the pitch angle of the pan/tilt together with the distance of the image position of the target object from the X-axis of the image.
  • For the specific calculation method, refer to the corresponding description of FIG. 2a and FIG. 2b above.
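  • The intersection itself is not written out in the text; the following Python sketch shows one standard way to triangulate t(tx, ty) from the two selected observations by intersecting the horizontal bearing rays from d1 and d2. The yaw convention (degrees, measured counter-clockwise from the X-axis) and the handling of near-parallel bearings are assumptions of this sketch.
        import math

        def estimate_target_xy(d1, yaw1_deg, d2, yaw2_deg):
            """Intersect the two horizontal bearing lines from d1 and d2.

            d1, d2: (x, y) shooting positions. Returns the estimated target
            coordinates t(tx, ty), or None when the two bearings are nearly
            parallel and no stable intersection exists. The target height
            could be recovered analogously from pitch1/pitch2 and the range.
            """
            a1, a2 = math.radians(yaw1_deg), math.radians(yaw2_deg)
            u1 = (math.cos(a1), math.sin(a1))   # direction of the first line of sight
            u2 = (math.cos(a2), math.sin(a2))   # direction of the second line of sight
            denom = u1[0] * u2[1] - u1[1] * u2[0]
            if abs(denom) < 1e-6:               # bearings almost parallel
                return None
            dx, dy = d2[0] - d1[0], d2[1] - d1[1]
            s = (dx * u2[1] - dy * u2[0]) / denom   # range along the first ray
            return (d1[0] + s * u1[0], d1[1] + s * u1[1])

        # Example: two observations of the same target from different positions.
        t = estimate_target_xy((0.0, 0.0), 45.0, (20.0, 0.0), 135.0)   # -> (10.0, 10.0)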
  • An adjustment instruction for adjusting the shooting angle of the imaging device may further be generated according to a specific position of the moving object and the position estimation information, where the specific position is a position that the moving object passes through during position-based continuous shooting of the target object.
  • There are various methods for determining the specific position. For example, the current position of the moving object may be acquired in real time and used as the specific position, an adjustment instruction is generated according to the specific position and the position estimation information, and the shooting angle of the imaging device is adjusted according to the adjustment instruction.
  • Alternatively, if the route for continuously shooting the target object based on position is an already planned route, each position point on the route may be used as a specific position, an adjustment instruction is generated according to each specific position and the position estimation information, and the shooting angle of the imaging device is adjusted with the adjustment instruction corresponding to each position point.
  • The yaw angle and the pitch angle in the shooting angle are calculated from the three-dimensional coordinates of the specific position and the three-dimensional coordinates in the position estimation information.
  • Suppose the coordinates of the specific position d are known as d(dx, dy, dz), and the coordinates of the position t corresponding to the position estimation information are t(tx, ty, tz).
  • The adjustment angles can then be calculated from these two position coordinates, with the yaw angle denoted gyaw and the pitch angle denoted gpitch, via an intermediate distance L; one standard geometric reading of this calculation is sketched below.
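  • The following Python sketch assumes the usual geometric reading: L is the horizontal distance between d and t, gyaw is the bearing of t from d in the horizontal plane, and gpitch is the elevation angle above that plane. These definitions are assumptions of the sketch, since the formula itself is not reproduced in the text.
        import math

        def gimbal_angles_to_target(d, t):
            """Yaw/pitch from the specific position d(dx, dy, dz) toward the
            estimated target position t(tx, ty, tz)."""
            dx, dy, dz = t[0] - d[0], t[1] - d[1], t[2] - d[2]
            L = math.hypot(dx, dy)                       # horizontal distance between d and t
            gyaw = math.degrees(math.atan2(dy, dx))      # bearing in the horizontal plane
            gpitch = math.degrees(math.atan2(dz, L))     # elevation above the horizontal plane
            return gyaw, gpitch, L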
  • The adjustment instruction is used to control the gimbal, on the basis of its current yaw angle yaw and pitch angle pitch, to rotate according to the deviation of the yaw angle and the deviation of the pitch angle, so that the object at the position corresponding to the position estimation information is within the field of view of the imaging device, thereby ensuring that the imaging device can capture that object.
  • The adjustment instruction may be used to adjust the pan/tilt based on the relative position between the specific position and the position corresponding to the position estimation information. For example, if it is determined from the relative position that the position corresponding to the position estimation information is located to the lower right of the specific position, an adjustment instruction is generated to adjust the pan/tilt so that the lens of the imaging device is turned to the lower right when the moving object reaches the specific position. This can also ensure, to a certain extent, that the object at the position corresponding to the position estimation information is within the field of view of the imaging device.
  • Referring to FIG. 3, which is a schematic flowchart of a shooting control method according to an embodiment of the present invention.
  • The method in this embodiment of the present invention may be implemented by a dedicated control device, or by a movement controller of a moving object, for example by a flight controller of a drone, or by a pan/tilt controller.
  • the method of the embodiment of the present invention can be applied to a system composed of a mobile device that can move a position, a pan/tilt that can rotate in a plurality of axial directions, and an image pickup device capable of image capturing.
  • the method of the embodiment of the present invention includes the following steps.
  • S301 Acquire an information set including at least two sets of shooting information, where the shooting information includes: shooting position information and shooting angle information when the target object is captured.
  • the information set may include two sets of shooting information, and may also include multiple sets of shooting information.
  • Each set of shooting information in the information set is collected at a moment when the target object can be captured. Specifically, if the target object can be recognized, based on image recognition, in an image captured by the imaging device, the shooting position information and shooting angle information at the time that image was captured can be recorded. As the moving object moves relative to the target object, at least two sets of shooting information can be obtained as needed.
  • the photographing information may be acquired at different positions when the moving object such as a drone moves in a tangential direction with respect to the target object.
  • For example, the shooting information may be acquired at regular time intervals, at intervals of a certain distance, or whenever the central angle corresponding to two position points is greater than or equal to a preset angle threshold, thereby obtaining the information set.
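  • For illustration, a minimal Python sketch of building such an information set is given below; the ShootingInfo fields, the function names, and the 10-meter / 10-degree sampling thresholds are illustrative assumptions, not values fixed by the patent.
        import math
        from dataclasses import dataclass

        @dataclass
        class ShootingInfo:
            """One entry of the information set (field names are illustrative)."""
            position: tuple   # (x, y, z) position of the moving object at capture time
            yaw: float        # shooting yaw angle, in degrees
            pitch: float      # shooting pitch angle, in degrees

        def maybe_record(info_set, new_info, target_recognized,
                         min_distance=10.0, min_yaw_change=10.0):
            """Append new_info when the target was recognized in the current image
            and the observation differs enough from the last recorded one
            (distance- or angle-interval sampling, as described above)."""
            if not target_recognized:
                return
            if info_set:
                last = info_set[-1]
                dist = math.dist(last.position[:2], new_info.position[:2])
                yaw_change = abs((new_info.yaw - last.yaw + 180.0) % 360.0 - 180.0)
                if dist < min_distance and yaw_change < min_yaw_change:
                    return
            info_set.append(new_info)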
  • S302: Determine position estimation information of the target object based on at least two sets of shooting information selected from the information set, wherein the positions corresponding to the shooting position information in the selected sets of shooting information are different.
  • The basic principle of selecting shooting information from the information set is to ensure that relatively accurate position estimation information about the target object can be calculated.
  • Shooting information may be selected based on the separation distance calculated from the shooting position information, and/or based on the interval angle calculated from the shooting angle information. For example, if the separation distance between the positions corresponding to the shooting position information in two sets of shooting information is greater than a preset distance threshold (for example, 10 meters), and the interval angle calculated from the shooting angle information in the two sets is greater than a preset angle threshold (for example, 10 degrees), the two sets of shooting information are selected to calculate the position estimation information. In other words, as shown in FIG. 1, the distance between d1 and d2 corresponding to the two selected sets of shooting information is greater than the preset distance threshold, and the central angle calculated from yaw1 and yaw2 is greater than the preset angle threshold.
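  • A minimal Python sketch of this selection rule, using the 10-meter and 10-degree example thresholds, is shown below; the entry layout (a dict with 'position' and 'yaw' keys) is an assumption of the sketch.
        import math

        def angle_between(yaw1, yaw2):
            """Smallest absolute difference between two yaw angles, in degrees."""
            return abs((yaw2 - yaw1 + 180.0) % 360.0 - 180.0)

        def select_pair(info_set, min_distance=10.0, min_angle=10.0):
            """Return the first pair of entries whose shooting positions are far
            enough apart and whose yaw angles differ enough, or None if no pair
            satisfies the selection rule yet."""
            n = len(info_set)
            for i in range(n):
                for j in range(i + 1, n):
                    a, b = info_set[i], info_set[j]
                    if (math.dist(a['position'], b['position']) > min_distance
                            and angle_between(a['yaw'], b['yaw']) > min_angle):
                        return a, b
            return None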
  • S303: Generate a shooting adjustment instruction according to the position estimation information to adjust the imaging device; the shooting adjustment instruction is used to adjust the shooting angle of the imaging device so that the position corresponding to the position estimation information is within the field of view of the imaging device.
  • the camera device can be adjusted by controlling the rotation of the gimbal.
  • In the embodiment of the present invention, the position of the target object that requires continuous shooting is estimated from the shooting positions and shooting angles, and the shooting direction of the imaging module is then adjusted based on the resulting position estimation information. This implementation is simple and fast, effectively avoids image recognition errors caused by occlusion of the target object, and improves the efficiency of continuous shooting of the target object.
  • Referring to FIG. 4, which is a schematic flowchart of another shooting control method according to an embodiment of the present invention.
  • The method in this embodiment of the present invention may be implemented by a dedicated control device, or by a movement controller of a moving object.
  • For example, it can be implemented by the flight controller of a drone, or by a pan/tilt controller.
  • the method of the embodiment of the present invention can be applied to a system composed of a mobile device that can move a position, a pan/tilt that can rotate in a plurality of axial directions, and an image pickup device capable of image capturing.
  • the method of the embodiment of the present invention includes the following steps.
  • S401 Acquire an information set including at least two sets of shooting information, where the shooting information includes: shooting position information and shooting angle information when the target object is captured;
  • S402 Perform image recognition on the captured image to identify the target object; the target object may be found from the image based on features such as grayscale and texture of the image.
  • When the target object is recognized, S403 described below is executed; when the target object is not recognized, S404 described below is executed.
  • S403 Perform continuous shooting on the target object.
  • the image recognition technology can be used to find an image of the target object in the image, and then the imaging device is adjusted based on the position of the image of the found target object, so as to include the target object in the next captured image.
  • For example, the shooting angle may be controlled for the current shot, rotating the imaging device upward, so that the display position of the target object remains approximately the same as its display position in the previous image.
  • S404 Determine position estimation information of the target object based on at least two sets of shooting information selected from the information set, wherein the location corresponding to the shooting location information in each selected group of shooting information is different.
  • S404 may specifically include: determining at least two pieces of initial position estimation information of the target object based on at least three sets of shooting information; detecting whether each piece of determined initial position estimation information meets a preset stable condition; and, if the stable condition is satisfied, determining the position estimation information of the target object from the pieces of initial position estimation information.
  • One piece of initial position estimation information may be determined from any two sets of shooting information among the at least three sets, where the calculation of the initial position estimation information may refer to the calculation method of the position estimation information described in the above embodiment.
  • The position estimation information determined in S404 may be one piece randomly selected from the multiple pieces of initial position estimation information, or an average value obtained by averaging the position coordinates corresponding to the multiple pieces of initial position estimation information. It may also be position estimation information determined according to other rules; for example, the initial position estimation information calculated from the two sets of shooting information with the longest separation distance and/or the largest separation angle may be determined as the position estimation information.
  • If the position change amplitude between the positions corresponding to at least two pieces of the determined initial position estimation information meets the preset change amplitude requirement, it is determined that the stable condition is satisfied.
  • The position change amplitude mainly refers to the separation distance between the positions, and meeting the position change amplitude requirement mainly means that the separation distances are all within a preset numerical range.
  • There can be several reasons why the position change between the multiple pieces of initial position estimation information is large. For example, although the target object is stationary, the shooting position information or shooting angle information in one or more sets of shooting information collected for the information set may be inaccurate, which in turn makes the calculated initial position estimates inaccurate. Therefore, when determining the position estimation information of the target object, the calculation is performed on the basis of the multiple pieces of initial position estimation information, for example by averaging them as described above, and the average value is taken as the position estimation information of the target object.
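  • The following Python sketch illustrates one way to implement the stability check and the averaging; the 5-meter spread threshold is an illustrative value, not one given in the patent.
        import math
        from itertools import combinations

        def is_stable(estimates, max_spread=5.0):
            """estimates: list of (x, y) initial position estimates, each computed
            from a different pair of shooting-information sets. They are treated
            as stable when every pairwise separation distance stays within a
            preset numerical range."""
            return all(math.dist(a, b) <= max_spread
                       for a, b in combinations(estimates, 2))

        def fuse_estimates(estimates):
            """One simple fusion rule mentioned above: average the coordinates."""
            n = len(estimates)
            return (sum(p[0] for p in estimates) / n,
                    sum(p[1] for p in estimates) / n)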
  • If the stable condition is not satisfied, target tracking based on image recognition may be used to find the target object for continuous shooting, for example by further identifying the target in the captured image based on a more complex image recognition technology.
  • a prompt message indicating that the target is lost is automatically issued to notify the end user.
  • The method may further include: determining, according to the pieces of initial position estimation information, whether the target object has moved; if it has moved, continuous shooting of the target object is performed based on image recognition technology, that is, the image is first recognized based on image recognition technology and the target object is then continuously photographed according to the recognition result. If the result of the judgment is that no movement has occurred, the following S405 is executed.
  • S405: Generate a shooting adjustment instruction according to the position estimation information to adjust the imaging device; the shooting adjustment instruction is used to adjust the shooting angle of the imaging device so that the position corresponding to the position estimation information is within the field of view of the imaging device.
  • FIG. 5 is a schematic flowchart of a method for adjusting an image pickup apparatus according to an embodiment of the present invention.
  • The method of this embodiment of the present invention corresponds to the shooting adjustment step in the embodiments described above.
  • the method may specifically include the following steps.
  • S501: Determine a target display position of the target object in the image. The target display position may be a fixed display position specified by the user for displaying the target object in the image, or the position at which the target object is displayed in the image at the moment of switching to continuous shooting of the target object based on the position estimation information.
  • The target display position may be determined by taking a preset designated position in the image as the target display position, where the preset designated position is determined by receiving a position selected by the user on the interactive interface.
  • S502: Generate a shooting adjustment instruction according to the target display position, the position estimation information, and the specific position; the shooting adjustment instruction is used to adjust the shooting angle of the imaging device so that, when the imaging device is at the specific position, the position corresponding to the position estimation information is within the field of view of the imaging device, and the object at the position corresponding to the position estimation information can be imaged to the target display position of the captured image.
  • The determining of a target display position of the target object in the image may include determining a specified position in the image as the target display position; after the shooting adjustment instruction adjusts the shooting angle of the imaging device, the object at the position corresponding to the position estimation information can be fixedly imaged to the target display position of the captured image.
  • the image captured by the camera device may be displayed in a user interface.
  • The user may specify a target display position, and when the imaging device is adjusted, it is ensured that the object at the position corresponding to the position estimation information calculated for the target object is fixedly imaged to the target display position.
  • The target display position can be specified by clicking to select it on the user interface displaying the image, or by dragging a preset selection box.
  • a user interface can be configured that can capture user operations and simultaneously display images captured by the camera device.
  • The determining of a target display position of the target object in the image may also include taking position points on a trajectory drawn in the image as target display positions, where the position points taken as target display positions include at least a first position point and a second position point.
  • Correspondingly, the generating of a shooting adjustment instruction according to the target display position, the position estimation information, and the specific position includes: generating, according to a preset generation strategy and according to the position estimation information and the specific position, at least a first shooting adjustment instruction corresponding to the first position point and a second shooting adjustment instruction corresponding to the second position point, where the first shooting adjustment instruction and the second shooting adjustment instruction are used to adjust the shooting angle of the imaging device so that the object at the position corresponding to the position estimation information can be imaged sequentially to the first position point and the second position point of the captured image.
  • the preset generation strategy is preset according to any one or more of a moving speed of the imaging device, a moving position of the imaging device, and a position of each target display position on the target display track.
  • the moving speed and the moving position of the imaging apparatus mainly refer to a moving speed and a moving position of a moving object (for example, a drone) on which the imaging apparatus is mounted.
  • The purposes that can be achieved with the multiple adjustment instructions obtained by the generation strategy include: controlling the rate at which adjustment instructions are generated based on the moving speed and moving position of the imaging device, so that the object at the position corresponding to the position estimation information moves at a corresponding speed in the captured image along the trajectory containing the first position point and the second position point; and/or generating adjustment instructions based on the position of each target display position on the target display trajectory, so that the object at the position corresponding to the position estimation information is imaged on the trajectory containing the first position point and the second position point in a preset order, for example imaged sequentially at the position points on the trajectory, or imaged at N of the position points, and the like.
  • a user interface can be configured that can capture user operations and simultaneously display images captured by the camera device.
  • the image captured by the camera device may be displayed in a user interface, and the user may slide on the user interface to obtain a sliding track, and then further determine a plurality of position points from the sliding track.
  • The position points taken as target display positions are sorted according to the chronological order of the sliding, so that when the adjustment instructions are subsequently used to adjust the imaging device, the object at the position corresponding to the position estimation information generated for the target object is imaged sequentially at the multiple determined position points on the trajectory.
  • the calculation of the angle required for generating the control command is as follows.
  • The specific position is a position specified by the user or the position of the moving object after it has moved, and its coordinates are known. Therefore, according to the calculation method above, the yaw-angle deviation del_yaw and the pitch-angle deviation del_pitch can be obtained. If the rotation of the gimbal is controlled by the yaw-angle deviation del_yaw and the pitch-angle deviation del_pitch, the position corresponding to the position estimation information is imaged to the center position of the captured image.
  • To image it to the target display position instead, the angles used to generate the control command are: the yaw-angle deviation del_yaw plus the angle of the target display position with respect to the X-axis, and the pitch-angle deviation del_pitch plus the angle of the target display position with respect to the Y-axis.
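  • A combined Python sketch is given below: it computes the bearing toward the estimated target position from the specific position, turns it into the deviations del_yaw and del_pitch relative to the current gimbal attitude, and then adds the offset angles of the chosen target display position. The parameter names and angle conventions are assumptions of the sketch, not taken from the patent.
        import math

        def gimbal_command(specific_pos, target_pos, current_yaw, current_pitch,
                           disp_offset_yaw=0.0, disp_offset_pitch=0.0):
            """Deviations del_yaw / del_pitch that rotate the gimbal so that the
            estimated target position is imaged at the desired display position.

            specific_pos, target_pos: (x, y, z) coordinates of the specific
            position and of the position estimation information. disp_offset_*
            are the offset angles of the target display position relative to
            the image centre (zero means "image the target at the centre")."""
            dx = target_pos[0] - specific_pos[0]
            dy = target_pos[1] - specific_pos[1]
            dz = target_pos[2] - specific_pos[2]
            gyaw = math.degrees(math.atan2(dy, dx))                 # bearing toward the target
            gpitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
            del_yaw = gyaw - current_yaw + disp_offset_yaw          # yaw deviation to apply
            del_pitch = gpitch - current_pitch + disp_offset_pitch  # pitch deviation to apply
            return del_yaw, del_pitch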
  • The position designated by the user, or the one or more position points on the sliding trajectory, can be sent by means of information interaction from the smart terminal displaying the user interface to a terminal that can control the imaging device. For example, the user interface that displays the image and acquires the designated position may run on a smartphone, while the terminal that can control the imaging device is the movement controller of the moving object.
  • In that case, the smartphone only needs to send the corresponding target display position to the movement controller, and the movement controller performs the other corresponding calculations and adjustment control based on the target display position.
  • Referring to FIG. 6, which is a schematic flowchart of a movement control method according to an embodiment of the present invention.
  • In this embodiment of the present invention, after the position estimation information is obtained through the shooting control method of the embodiment corresponding to FIG. 3 or FIG. 4, movement control may further be performed based on the position estimation information.
  • the method of the embodiment of the present invention may be implemented by a dedicated control device, or may be implemented by a mobile controller that obtains position estimation information about the target object, such as a flight controller of the drone.
  • the method includes the following steps.
  • S601 Acquire location estimation information.
  • the related descriptions in the foregoing embodiments may be referred to, and are not described herein.
  • S602: Generate a movement control instruction according to the acquired position estimation information to control movement of the moving object, where the movement control instruction is used to control the moving object carrying the imaging device to move around the position corresponding to the position estimation information, so as to change the shooting position of the moving object.
  • the moving object may move circumferentially around the position corresponding to the position estimation information, or may move in a polygon such as a square or a rectangle.
  • the surrounding flight rules may be set by default or may be user-defined.
  • a target object that needs to be moved around can be specified.
  • the target object can be specified through a user interface in an intelligent terminal.
  • The captured image can be displayed at the same time, and the user can specify the target object by clicking a point and/or dragging a selection box.
  • a set of information including at least two sets of shooting information is obtained.
  • the manner of calculating the location estimation information of the target object may refer to the description in the foregoing embodiments.
  • The position corresponding to the position estimation information can thus be determined directly, and the moving object can be controlled to move around that position along surrounding flight paths of various shapes.
  • In the embodiment of the present invention, the position estimation information of a target object can be determined relatively quickly, making the surrounding movement more automated and intelligent.
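  • As a simple illustration, the Python sketch below generates waypoints for a circular surrounding flight around the position corresponding to the position estimation information; the radius, altitude, and number of points are illustrative parameters, and a square or rectangular path would simply use different vertices.
        import math

        def orbit_waypoints(center, radius=30.0, altitude=20.0, n_points=12):
            """Waypoints (x, y, z, yaw) for circling the estimated target position,
            keeping the camera pointed at the centre of the circle."""
            waypoints = []
            for k in range(n_points):
                theta = 2.0 * math.pi * k / n_points
                x = center[0] + radius * math.cos(theta)
                y = center[1] + radius * math.sin(theta)
                yaw_to_target = math.degrees(math.atan2(center[1] - y, center[0] - x))
                waypoints.append((x, y, altitude, yaw_to_target))
            return waypoints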
  • The above embodiments describe in detail position estimation, and the continuous shooting scheme, for a target object that is stationary or moving at a relatively small speed.
  • When a moving object such as a drone performs fast tangential motion with respect to the target object, it can be assumed that the target object does not undergo sharp acceleration or deceleration; an observation taken at every interval of distance or angle then reveals the change in the target's position.
  • However, because the speed and direction of the target's motion are not actually known, the observed target position is very noisy.
  • At this point, a state estimation equation can be used to recover the true motion equation of the target from the noisy observations.
  • A common and reasonable state estimation method is the Kalman filter.
  • A motion model can be designed in which the target's acceleration is assumed to be Gaussian noise; under this motion model, the target object does not maneuver sharply. After a period of iteration, the state equation of the target will eventually converge to the real equation of motion.
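  • For concreteness, the following Python sketch runs a constant-velocity Kalman filter for one horizontal axis, with the target's acceleration modelled as zero-mean Gaussian process noise and the triangulated positions used as noisy measurements; the matrices and noise values are illustrative, not taken from the patent.
        import numpy as np

        def make_cv_kalman(dt=1.0, accel_std=0.5, meas_std=5.0):
            """State x = [position, velocity] for one axis; acceleration is
            modelled as zero-mean Gaussian noise."""
            F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition
            H = np.array([[1.0, 0.0]])                  # only the position is observed
            G = np.array([[0.5 * dt * dt], [dt]])
            Q = G @ G.T * accel_std ** 2                # process noise from acceleration
            R = np.array([[meas_std ** 2]])             # measurement noise
            x = np.zeros((2, 1))
            P = np.eye(2) * 1e3                         # large initial uncertainty
            return F, H, Q, R, x, P

        def kf_step(F, H, Q, R, x, P, z):
            """One predict/update cycle with the observed target position z."""
            x = F @ x                                   # predict
            P = F @ P @ F.T + Q
            y = np.array([[z]]) - H @ x                 # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
            x = x + K @ y                               # update
            P = (np.eye(2) - K @ H) @ P
            return x, P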
  • In the embodiments of the present invention, the position of the target object that requires continuous shooting is estimated from the shooting positions and shooting angles, and the shooting direction of the imaging module is then adjusted based on the resulting position estimation information. The implementation is simple and fast, effectively avoids image recognition errors caused by occlusion of the target object, and improves the shooting efficiency of the target object.
  • Corresponding movement functions, such as flying around the target, can also be implemented based on the position.
  • the embodiment of the invention further provides a computer storage medium in which program instructions are stored, and when the program instructions are executed, the methods of the above embodiments are executed.
  • FIG. 7 is a schematic structural diagram of a shooting control apparatus according to an embodiment of the present invention.
  • The apparatus of this embodiment of the present invention may be disposed in a dedicated control device, or may be set in a movement controller of a moving object, or in a device such as a pan/tilt controller.
  • the device in the embodiment of the present invention includes the following modules.
  • The obtaining module 701 is configured to acquire an information set including at least two sets of shooting information, where the shooting information includes shooting position information and shooting angle information when the target object is captured.
  • The determining module 702 is configured to determine the position estimation information of the target object based on at least two sets of shooting information selected from the information set, wherein the positions corresponding to the shooting position information in each selected set of shooting information are different.
  • The control module 703 is configured to generate a shooting adjustment instruction according to the position estimation information to adjust the imaging device; the shooting adjustment instruction is used to adjust the shooting angle of the imaging device so that the position corresponding to the position estimation information is within the field of view of the imaging device.
  • The determining module 702 is specifically configured to: determine at least two pieces of initial position estimation information of the target object based on at least three sets of shooting information; detect whether each piece of determined initial position estimation information meets a preset stable condition; and, if the stable condition is satisfied, determine the position estimation information of the target object according to the pieces of initial position estimation information.
  • The apparatus of this embodiment of the present invention may further include: a second identification module 704, configured to identify the target object in the captured image based on image recognition technology if the stable condition is not satisfied, so as to continuously shoot the target object.
  • The determining module 702 is specifically configured to determine that the stable condition is met when the position change amplitude between the positions corresponding to at least two pieces of the determined initial position estimation information meets the preset change amplitude requirement.
  • The apparatus of this embodiment of the present invention may further include: a first identification module 705, configured to perform image recognition on the captured image to identify the target object; when the target object is identified, the target object is continuously photographed; when the target object is not identified, the determining module 702 is notified.
  • The shooting adjustment instruction is specifically used to adjust the shooting angle of the imaging device when the imaging device is located at the specific position. As shown in FIG. 8, the control module 703 includes: a determining unit 7031, configured to determine a target display position of the target object in the image; and a control unit 7032, configured to generate a shooting adjustment instruction according to the target display position, the position estimation information, and the specific position. The shooting adjustment instruction is used to adjust the shooting angle of the imaging device so that, when the imaging device is at the specific position, the position corresponding to the position estimation information is within the field of view of the imaging device, and the object at the position corresponding to the position estimation information can be imaged to the target display position of the captured image.
  • The determining unit 7031 is specifically configured to determine a specified position in the image as the target display position; after the shooting adjustment instruction adjusts the shooting angle of the imaging device, the object at the position corresponding to the position estimation information can be fixedly imaged to the target display position of the captured image.
  • the determining unit 7031 is specifically configured to use a position point on the trajectory drawn in the image as the target display position, and the position point as the target display position includes at least the first position point and the second position point.
  • The control unit 7032 is configured to generate, according to the preset generation strategy and according to the position estimation information and the specific position, at least a first shooting adjustment instruction corresponding to the first position point and a second shooting adjustment instruction corresponding to the second position point, where the first shooting adjustment instruction and the second shooting adjustment instruction are used to adjust the shooting angle of the imaging device so that the object at the position corresponding to the position estimation information can be imaged sequentially to the first position point and the second position point of the captured image.
  • the preset generation strategy is preset according to any one or more of a moving speed of the imaging device, a moving position of the imaging device, and a position of each target display position on the target display track.
  • The apparatus of this embodiment of the present invention may further include: a movement control module 706, configured to generate, according to the position estimation information, a movement control instruction to control movement of the moving object carrying the imaging device; the movement control instruction is used to control the moving object to move around the position corresponding to the position estimation information so as to change the shooting position.
  • The imaging device is mounted on the moving object through the pan/tilt.
  • The shooting position information includes the collected position coordinate information of the moving object.
  • The shooting angle information includes an angle calculated from the attitude information of the pan/tilt and the display position of the target object in the captured image.
  • The selection rule used by the determining module 702 to select at least two sets of shooting information from the information set includes: selecting shooting information based on the separation distance calculated from the shooting position information in the shooting information; and/or selecting shooting information based on the interval angle calculated from the shooting angle information in the shooting information.
  • In the embodiments of the present invention, the position of the target object that requires continuous shooting is estimated from the shooting position and shooting angle, and the shooting direction of the imaging module is then adjusted based on the position estimation information obtained from that estimate. The implementation is simple and fast, and the problem of image recognition errors caused by occlusion of the target object can be effectively avoided.
  • Corresponding movement functions, such as flying around the target, can also be implemented based on the position.
  • The control device in this embodiment of the present invention may be a dedicated device that establishes data connections with a moving object such as a drone and with an intelligent gimbal to complete control of the shooting angle of the imaging device.
  • The control device may also be a movement controller of the moving object, such as a flight controller of the drone, and the movement controller establishes data connections with devices such as the pan/tilt to complete control of the shooting angle of the imaging device.
  • The control device may also be a controller of a device such as the pan/tilt, which establishes a data connection with the moving object and completes control of the shooting angle of the imaging device.
  • the control device may include a power module, various interface modules, and the like.
  • The control device further includes a processor 901, an output interface 902, and a memory 903; the processor 901, the output interface 902, and the memory 903 can be connected to each other by a bus or the like.
  • The memory 903 may include a volatile memory, such as a random access memory (RAM); the memory 903 may also include a non-volatile memory, such as a flash memory.
  • The processor 901 can be a central processing unit (CPU).
  • the processor 901 may further include a hardware chip.
  • the hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a general array logic (GAL), or any combination thereof.
  • the memory 903 is further configured to store program instructions.
  • the processor 901 can invoke the program instructions to implement related methods as shown in the embodiments corresponding to FIGS. 3, 4, 5, and 6 of the present application.
  • The processor 901 calls the program instructions stored in the memory 903 and is configured to: acquire an information set including at least two sets of shooting information, where the shooting information includes shooting position information and shooting angle information when the target object is captured; determine the position estimation information of the target object based on at least two sets of shooting information selected from the information set, wherein the positions corresponding to the shooting position information in each selected set of shooting information are different; and generate a shooting adjustment instruction according to the position estimation information to adjust the imaging device, where the adjustment instruction is used to adjust the shooting angle of the imaging device so that the position corresponding to the position estimation information is within the field of view of the imaging device. The output interface 902 is configured to output the adjustment instruction to adjust the imaging device.
  • The processor 901 is specifically configured to: determine at least two pieces of initial position estimation information of the target object based on at least three sets of shooting information; detect whether each piece of determined initial position estimation information meets a preset stable condition; and, if the stable condition is satisfied, determine the position estimation information of the target object according to the pieces of initial position estimation information.
  • the processor 901 is further configured to, if the stability condition is not satisfied, identify the target object in the captured image based on image recognition technology, so that the target object can be continuously captured.
  • the processor 901 is specifically configured to determine that the stability condition is satisfied when the position change amplitude between the positions corresponding to at least two of the determined pieces of initial position estimation information meets a preset change-amplitude requirement.
  • the processor 901 is further configured to perform image recognition on the captured image to identify the target object; when the target object is recognized, the target object is continuously captured; when the target object is not recognized, the determination of the position estimation information of the target object is performed.
  • the shooting adjustment instruction is specifically used to adjust the shooting angle of the imaging device when the imaging device is located at a specific position;
  • the processor 901 is specifically configured to determine a target display position of the target object in the image, and to generate the shooting adjustment instruction according to the target display position, the position estimation information, and the specific position; the shooting adjustment instruction is used to adjust the shooting angle of the imaging device so that, when the imaging device is at the specific position, the position corresponding to the position estimation information is within the field of view of the imaging device and the object at the position corresponding to the position estimation information can be imaged at the target display position of the captured image.
  • the processor 901 is specifically configured to determine a specified position in the image as the target display position; after the shooting adjustment instruction adjusts the shooting angle of the imaging device, the object at the position corresponding to the position estimation information can be imaged in a fixed manner at the target display position of the captured image.
  • the processor 901 is specifically configured to take position points on a trajectory drawn in the image as target display positions, the position points used as target display positions including at least a first position point and a second position point; and to generate, according to a preset generation strategy and according to the position estimation information and the specific position, at least a first shooting adjustment instruction corresponding to the first position point and a second shooting adjustment instruction corresponding to the second position point, where the first shooting adjustment instruction and the second shooting adjustment instruction are used to adjust the shooting angle of the imaging device so that the object at the position corresponding to the position estimation information is imaged in sequence at the first position point and the second position point of the captured image.
  • the preset generation strategy is preset according to any one or more of a moving speed of the imaging device, a moving position of the imaging device, and a position of each target display position on the target display track.
  • the processor 901 is further configured to generate, according to the position estimation information, a movement control instruction to control the movement of the moving object carrying the imaging device; the movement control instruction is used to control the moving object to move around the position corresponding to the position estimation information so as to change the shooting position.
  • the imaging device is mounted on the moving object through a gimbal (pan/tilt head); the shooting position information includes the collected position coordinate information of the moving object, and the shooting angle information includes the attitude information of the gimbal and the position information of the target object in the captured image.
  • the selection rule used by the processor 901 to select at least two sets of shooting information from the information set includes: selecting shooting information based on a separation distance calculated from the shooting position information in the shooting information; and/or selecting shooting information based on a separation angle calculated from the shooting angle information in the shooting information.
  • for a specific implementation of the processor 901 in the embodiment of the present invention, reference may be made to the specific description of the related steps and functions in the embodiments corresponding to FIG. 1 to FIG. 6, which is not repeated here.
  • in the embodiment of the present invention, the position of the target object that needs to be continuously captured is estimated from the shooting positions and shooting angles, and the shooting direction of the imaging module is then adjusted based on the resulting position estimation information; the implementation is simple and fast, effectively avoids image-recognition errors caused by occlusion of the target object, and improves the efficiency of continuous shooting of the target object; moreover, after the corresponding position of the target object has been determined, movement functions such as fly-around flight can also be implemented based on that position.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

A photographing control method, apparatus, and control device. The method includes: acquiring an information set including at least two sets of shooting information, the shooting information including shooting position information and shooting angle information recorded when a target object is captured (S301); determining position estimation information of the target object based on at least two sets of shooting information selected from the information set, the positions corresponding to the shooting position information in the selected sets of shooting information being different from one another (S302); and generating a shooting adjustment instruction according to the position estimation information to adjust an imaging device, the shooting adjustment instruction being used to adjust the shooting angle of the imaging device so that the position corresponding to the position estimation information is within the field of view of the imaging device (S303). The method can effectively avoid image-recognition errors caused by occlusion of the target object.

Description

一种拍摄控制方法、装置以及控制设备
本专利文件披露的内容包含受版权保护的材料。该版权为版权所有人所有。版权所有人不反对任何人复制专利与商标局的官方记录和档案中所存在的该专利文件或该专利披露。
技术领域
本发明涉及自动化控制技术领域,尤其涉及一种拍摄控制方法、装置以及控制设备。
背景技术
随着光学以及电子技术的发展,例如照相机、录影机等各种各样的摄像设备应运而生。通过这些摄像设备,人们可以捕捉各种对象的影像。如果将这些摄像设备设置到某些移动物体上,例如无人机等智能飞行设备上,还可以实现对某些对象的监视。监视可以是指:无论搭载摄像设备的移动物体如何移动,摄像设备均能够拍摄得到的需要持续监视的目标对象。
对于上述的监视,目前的实现方案主要是通过图像识别技术来实现。具体可以基于图像识别技术,根据在摄像设备采集到的影像中所述目标对象所在的影像区域的灰度、纹理等特征,确定目标对象的位置,在搭载摄像设备的移动物体移动过程中,根据确定的目标对象的位置来调整摄像设备的拍摄角度拍摄获取新的图像并进行图像识别,从而实现对目标对象的持续监视。
但是,基于灰度、纹理等特征的图像识别技术相对比较复杂,实现所需的软硬件成本较高,而且,如果需要监视的目标对象如果出现被遮挡等情况,则图像识别技术无法识别出目标对象,从而导致运算出错。
发明内容
本发明实施例提供了一种拍摄控制方法、装置以及控制设备,可较为简捷 地、准确地对已确认的目标对象的监视。
一方面,本发明实施例提供了一种拍摄控制方法,包括:
获取包括至少两组拍摄信息的信息集合,所述拍摄信息包括:拍摄到目标对象时的拍摄位置信息和拍摄角度信息;
基于从所述信息集合选取的至少两组拍摄信息,确定所述目标对象的位置估计信息,其中,选取的各组拍摄信息中的拍摄位置信息所对应位置不相同;
根据所述位置估计信息生成拍摄调整指令以调整摄像设备;
所述拍摄调整指令用于调整所述摄像设备的拍摄角度,使所述位置估计信息所对应的位置在所述摄像设备的视场内。
相应地,本发明实施例还提供了一种拍摄控制装置,包括:
获取模块,用于获取包括至少两组拍摄信息的信息集合,所述拍摄信息包括:拍摄到目标对象时的拍摄位置信息和拍摄角度信息;
确定模块,用于基于从所述信息集合选取的至少两组拍摄信息,确定所述目标对象的位置估计信息,其中,选取的各组拍摄信息中的拍摄位置信息所对应位置不相同;
控制模块,用于根据所述位置估计信息生成拍摄调整指令以调整摄像设备;所述拍摄调整指令用于调整所述摄像设备的拍摄角度,使所述位置估计信息所对应的位置在所述摄像设备的视场内。
相应地,本发明实施例还提供了一种控制设备,包括:处理器和输出接口;
所述处理器,用于获取包括至少两组拍摄信息的信息集合,所述拍摄信息包括:拍摄到目标对象时的拍摄位置信息和拍摄角度信息;基于从所述信息集合选取的至少两组拍摄信息,确定所述目标对象的位置估计信息,其中,选取的各组拍摄信息中的拍摄位置信息所对应位置不相同;根据所述位置估计信息生成拍摄调整指令以调整摄像设备;所述拍摄调整指令用于调整所述摄像设备的拍摄角度,使所述位置估计信息所对应的位置在所述摄像设备的视场内;所述输出接口,用于输出所述调整指令以调整摄像设备。
本发明实施例通过拍摄位置和拍摄角度来对需要持续拍摄的目标对象进行位置估计,再基于位置估计得到的位置估计信息来调整拍摄模块的拍摄方向,实现方式简便快捷,并且可以有效避免因为目标对象被遮挡所导致的图像 识别运算出错的问题。提高了对目标对象的持续拍摄效率。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明实施例的位置坐标示意图;
图2a是本发明实施例的影像坐标系和视场角的示意图;
图2b是本发明实施例的视场角的示意图;
图3是本发明实施例的一种拍摄控制方法的流程示意图;
图4是本发明实施例的另一种拍摄控制方法的流程示意图;
图5是本发明实施例的一种调整摄像设备的方法的流程示意图;
图6是本发明实施例的一种移动控制方法的流程示意图;
图7是本发明实施例的一种拍摄控制装置的结构示意图;
图8是图7中的控制模块的其中一种结构示意图;
图9是本发明实施例的一种控制设备的结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
在本发明实施例中,对某个目标对象的监视可以通过一个搭载摄像设备的可移动的移动物体来实现。该移动物体可为无人机(Unmanned Aerial Vehicle,UAV),或无人汽车、可移动机器人等。这些移动物体可以通过设置云台来搭载摄像设备。为了实现更好的可以在多个角度的持续拍摄,该云台可以是一个三轴云台,该云台能够在偏航yaw、俯仰pitch以及横滚roll三个转动轴上转动。通过控制云台在一个或者多个转动轴上的转动角度,可以较好地保证无人 机等移动物体向某些地点或者方位移动的过程中,能够持续拍摄到目标对象。摄像设备拍摄到的包括目标对象的影像可以通过无线链路传回到某个地面端设备,例如,对于无人机拍摄得到的包括目标对象的影像可以通过无线链路传输给智能手机、平板电脑等智能终端,这些智能终端在接收到包括目标对象的影像之前,已经与无人机或者直接与摄像设备建立了通信链路。
目标对象可以是用户指定的某个物体,例如某个环境物体。可以将摄像设备拍摄得到的影像在一个用户界面中显示,用户通过针对该用户界面中显示的影像的点击操作,来选择一个物体作为目标对象。例如,用户可以选择某棵树、某个动物、或者某一片区域的物体作为目标对象。当然,用户也可以仅输入某些物体的影像特征,例如输入一张人脸特征、或者某种物体的外形特征,由相应的处理模块进行影像处理,找到影像特征对应的人物或者物体,进而将找到的人物或者物体作为目标对象进行拍摄。
在本发明实施例中,目标对象可以是一个静止的物体,或者在持续拍摄的一段时间内该物体是不移动的,或者在持续拍摄的过程中移动的速度相对于无人机等移动物体的移动速度小很多,例如两者的速度差值小于预设的阈值。
本发明的一种实施例中,在确定了拍摄到的影像中的目标对象后,在搭载摄像设备的无人机等移动物体的移动拍摄过程中,可以通过图像识别技术对影像进行分析识别,具体可以基于灰度、纹理等特征对拍摄得到的每一张影像进行图片识别,以找到目标对象并对该目标对象进行持续拍摄。
在对目标对象进行持续拍摄的过程中,可能存在目标对象丢失的情况,导致丢失原因包括多种,具体的,在目标对象被某个物体遮挡后,基于灰度、纹理等特征的图像识别可能无法找到目标对象,导致丢失该目标对象;或者,移动物体移动后如果与目标对象的距离较远,使得目标对象在拍摄到的影像中的灰度、纹理等特征已经不足以从影像中识别出该目标对象,导致丢失该目标对象。当然还可能存在其他丢失目标对象的情况,例如摄像设备的镜头受到强光的照射,使得拍摄的影像中灰度、纹理等特征很弱,或者进行图像识别处理的模块出现故障等因素。需要说明的是,上述的丢失目标对象是指无法在影像中确定目标对象。
本发明实施例中,在检测到对目标对象的影像满足条件时,会记录在拍摄 该满足条件的影像时的拍摄信息。具体的,对目标对象的影像满足条件是指:针对某次拍摄到的影像,若基于图像识别技术在该影像中能够准确地识别出目标对象。记录的此次拍摄时的拍摄信息包括:拍摄位置信息和拍摄角度信息,其中拍摄位置信息用于指示在摄像设备拍摄到目标对象时摄像设备的位置信息,该拍摄位置信息可以是用于搭载摄像设备的移动物体的定位信息,例如GPS坐标;本发明实施例的所述拍摄角度信息用于指示在摄像设备拍摄到目标对象时,目标对象相对摄像设备的方位,该方位可以基于云台的姿态角度(云台偏航角度yaw,俯仰角度pitch)和目标对象在拍摄到的影像中的显示位置综合进行计算确定的。
在搭载摄像设备的移动物体移动过程中,本发明实施例至少要检测出两次满足条件的影像,并记录对应的拍摄信息。记录的拍摄信息构成一个信息集合,以便于能够基于这些拍摄信息计算出目标对象的位置估计信息,方便在目标对象丢失时,或者在需要直接基于位置进行对象拍摄时,也能够在一定程度上满足用户的持续拍摄需求。在优选实施例中,所述信息集合中每组拍摄信息中包括的拍摄位置信息所对应的位置均不相同。
优选地,所述摄像设备是通过云台搭载在移动物体之上,所述拍摄位置信息包括采集到的所述移动物体的位置坐标,所述拍摄角度信息包括根据所述云台的姿态信息和所述目标对象在拍摄得到的影像中的位置信息计算得到的角度。具体的,针对其中的拍摄角度信息,如果拍摄到目标对象时目标对象是位于拍摄到的影像的中心区域,则对于拍摄角度信息中的俯仰角,可以是由云台的俯仰角pitch,而拍摄角度信息中的偏航角则为云台的偏航角yaw。如果不在中心区域,则可以根据目标对象的中心点相对于影像物理坐标系的X轴的像素距离dp1和水平视场角的大小,确定目标对象相对于影像中心的相对于影像X轴的偏移角度,并根据目标对象的中心点相对于影像物理坐标系的Y轴的像素距离dp2和垂直视场角的大小确定目标对象相对于影像Y轴的偏移角度,对于拍摄角度信息中的俯仰角,可以是由云台的俯仰角pitch加上所述的相对于影像X轴的偏移角度,而拍摄角度信息中的偏航角则为云台的偏航角yaw加上相对于影像Y轴的偏移角度。具体的,如图2a和图2b所示,示出了影像的物理坐标系,摄像设备的水平视场角和垂直视场角,基于目标对象的中 心点相对于X轴和Y轴的像素距离所占的像素距离比例和对应的视场角,可以得到关于影像X轴的偏移角度和影像Y轴的偏移角度。
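As an illustration of how the gimbal attitude and the pixel offset are combined into shooting-angle information, the following minimal sketch assumes the proportional mapping described above (pixel-distance ratio multiplied by the corresponding field of view); all function and parameter names are illustrative and not taken from the original disclosure, and sign conventions depend on the image and gimbal axes.

```python
def shooting_angle(gimbal_yaw_deg, gimbal_pitch_deg,
                   target_px, target_py,
                   image_width, image_height,
                   hfov_deg, vfov_deg):
    """Combine the gimbal attitude with the target's pixel offset from the
    image centre to obtain the shooting-angle information (yaw, pitch).

    target_px / target_py: pixel coordinates of the target centre with the
    origin at the image centre (x to the right, y downwards).  The pixel
    ratio is mapped linearly to an angle through the field of view."""
    # horizontal offset contributes to yaw, vertical offset to pitch
    yaw_offset = (target_px / image_width) * hfov_deg
    pitch_offset = (target_py / image_height) * vfov_deg
    return gimbal_yaw_deg + yaw_offset, gimbal_pitch_deg + pitch_offset
```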
在得到了信息集合后,如果需要基于位置来实现对目标对象的持续拍摄时,例如图像识别无法识别出目标对象,或者满足基于位置进行持续拍摄的条件,则从信息集合中选取至少两组拍摄信息,从所述信息集合选取至少两组拍摄信息所采用的选取规则包括:基于拍摄信息中的拍摄位置信息计算得到的间隔距离来选取拍摄信息;和/或,基于拍摄信息中的拍摄角度信息计算得到的间隔角度来选取拍摄信息。其中,满足基于位置进行持续拍摄的条件可以包括:接收到用户发出的基于位置进行持续拍摄的控制指令,或者基于已经记录的信息集合中的信息能够较为准确地计算出目标对象的位置坐标。
本发明实施例以仅选取两组拍摄信息为例,来对计算目标对象的位置估计信息进行说明。具体的,如图1所示,在北东地坐标系上,目标对象的坐标为t(tx,ty),选取的第一组拍摄信息中的拍摄位置信息d1(d1x,d1y),拍摄角度信息中的偏航角为yaw1,第二组拍摄信息中的拍摄位置信息d2(d2x,d2y),拍摄角度信息中的偏航角为yaw2。基于两个拍摄位置的拍摄角度信息,计算得到k1=1/tan(yaw1),k2=1/tan(yaw2),进而得到d1到目标对象所在平面的距离为L1=d1x-k1*d1y,d2到目标对象所在平面的距离为L2=d2x-k2*d2y。进一步可以计算得到,所述目标对象t的坐标为:tx=k1*ty+L1,ty=(L1-L2)/(k2-k1)。同时,第一组拍摄信息的拍摄角度信息的俯仰角为pitch1,第二组拍摄信息的拍摄角度信息的俯仰角为pitch2。估计目标对象的高度为e1z,e2z,其中,e1z=d1z-L1*tan(pitch1),e2z=d1z-L2*tan(pitch2),基于估计的高度,可以计算得到目标对象的高度tz=(e1z+e2z)/2。因此,最终得到的目标对象的三维坐标为t(tx,ty,tz)。
在本发明实施例中,目标对象的位置估计信息包括所述计算得到的坐标t。其中,d1和d2可以是移动物体中的定位模块采集到的定位坐标,例如,无人机中的GPS定位模块得到的GPS坐标。而拍摄角度信息中的偏航角和俯仰角则是基于在拍摄到能够识别出目标对象的影像时,云台的偏航角和目标对象的影像位置相对于影像Y轴的距离、云台的俯仰角和目标对象的影像位置相对于影像X轴的距离分别计算得到,具体的计算方式可参考上述针对图2的对 应描述。
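The two-observation position estimation above can be transcribed directly into a short sketch. It follows the relations in the preceding paragraphs (North-East-Down frame, angles in radians); the one deliberate deviation, flagged in the docstring, is that the second height estimate uses d2z rather than d1z, which appears to be the intended symmetric form. Names are illustrative.

```python
import math

def estimate_target_position(d1, yaw1, pitch1, d2, yaw2, pitch2):
    """Triangulate the target position t(tx, ty, tz) from two shooting
    records, following the relations given in the text.

    d1, d2: (x, y, z) shooting positions with different (x, y).
    yaw / pitch: shooting-angle information recorded with each position.
    The text writes the second height estimate with d1z; the symmetric
    form with d2z is assumed here."""
    d1x, d1y, d1z = d1
    d2x, d2y, d2z = d2

    k1 = 1.0 / math.tan(yaw1)          # requires tan(yaw) != 0
    k2 = 1.0 / math.tan(yaw2)
    L1 = d1x - k1 * d1y
    L2 = d2x - k2 * d2y

    ty = (L1 - L2) / (k2 - k1)         # requires k1 != k2 (different bearings)
    tx = k1 * ty + L1

    e1z = d1z - L1 * math.tan(pitch1)  # two independent height estimates
    e2z = d2z - L2 * math.tan(pitch2)
    tz = (e1z + e2z) / 2.0
    return tx, ty, tz
```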
在确定了目标对象的位置估计信息后,即可进一步根据移动物体的特定位置和所述位置估计信息,生成用于调整摄像设备的拍摄角度的调整指令,其中,该特定位置为移动物体在基于位置对目标对象进行持续拍摄的过程中所经过的任意一个位置。确定特定位置的方法有多种,例如,通过实时获取移动物体当前所在位置,将该当前所在位置作为特定位置,根据该特定位置和位置估计信息生成调整指令,并根据该调整指令对摄像设备的拍摄角度进行调整。或者,根据移动物体当前所在位置、当前姿态和当前速度预测移动物体即将要移动到的位置,并将该位置作为特定位置,根据该特定位置和位置估计信息生成调整指令,当移动物体移动到该特定位置时,根据该调整指令对摄像设备的拍摄就得进行调整。又或者,移动在基于位置对目标对象进行持续拍摄的航线为已经规划好的航线,那么可将该航线上的各位置点分别作为特定位置,根据每个特定位置和位置估计信息生成该特定位置对应的调整指令,但移动物体移动到该航线上的每个位置点时,采用该位置点对应的调整指令对摄像设备的拍摄角度进行调整。
具体的，基于特定位置的三维坐标和位置估计信息中的三维坐标来计算拍摄角度中的偏航角和俯仰角。具体的，特定位置d的坐标已知，为d(dx,dy,dz)，位置估计信息所对应位置t的坐标为t(tx,ty,tz)。根据两个位置坐标，可进行调整角度的计算。其中，首先计算坐标差：delx=dx-tx,dely=dy-ty,delz=dz-tz,进一步计算特定位置所在平面与位置估计信息所对应位置的距离L，进而得到位置估计信息所对应位置相对于特定位置的俯仰角：t2d_pitch=arctan(delz/L),位置估计信息所对应位置相对于特定位置的偏航角：t2d_yaw=arctan(dely/delx),此时检测到的云台的偏航角为gyaw,俯仰角gpitch,则位置估计信息所对应位置相对于特定位置的偏差角为：偏航角的偏差del_yaw=gyaw-t2d_yaw,俯仰角的偏差del_pitch=gpitch-t2d_pitch。其中L的计算公式如下：
L=√(delx²+dely²)
根据计算得到的偏航角的偏差和俯仰角的偏差生成调整指令,所述调整指 令用于控制云台在当前偏航角yaw和俯仰角pitch的基础上,根据所述的偏航角的偏差和俯仰角的偏差进行转动,使所述位置估计信息所对应位置处的对象在摄像设备的视场内,以便于确保摄像设备能够拍摄到所述位置估计信息所对应位置处的对象。
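A minimal sketch of the deviation-angle computation used to build the adjustment instruction, following the relations above; the only deviations are that atan2 is used instead of a plain arctangent so the bearing keeps its quadrant, and the pitch deviation is taken relative to the gimbal pitch gpitch. Names are illustrative.

```python
import math

def gimbal_adjustment(d, t, gimbal_yaw, gimbal_pitch):
    """Compute the yaw / pitch deviations that bring the estimated target
    position t into the field of view when the imaging device is at the
    specific position d (all angles in radians)."""
    dx, dy, dz = d
    tx, ty, tz = t
    delx, dely, delz = dx - tx, dy - ty, dz - tz

    L = math.hypot(delx, dely)          # horizontal distance to the target
    t2d_pitch = math.atan2(delz, L)     # pitch of the target as seen from d
    t2d_yaw = math.atan2(dely, delx)    # yaw (bearing) of the target as seen from d

    del_yaw = gimbal_yaw - t2d_yaw      # rotate the gimbal by these deviations
    del_pitch = gimbal_pitch - t2d_pitch
    return del_yaw, del_pitch
```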
上述的计算相应角度并最终生成调整指令的方式为本发明实施例的优选方式,能够实现偏航角、俯仰角的精确调整。在其他一些实施方式中,可以基于特定位置和位置估计信息所对应位置之间的相对方位,生成调整指令对云台进行调整。例如,基于相对位置,确定位置估计信息所对应位置位于特定位置的右下方,则生成调整指令调整云台,使移动物体到达特定位置时调整摄像设备的镜头朝向右下方。也可以在一定程度上保证位置估计信息所对应位置处的对象处于摄像设备的视场内。
具体的,再请参见图3,是本发明实施例的一种拍摄控制方法的流程示意图,本发明实施例的所述方法可以由一个专用的控制设备实现,也可以由移动物体的移动控制器来实现,例如无人机的飞行控制器来实现,也可以由一个云台控制器来实现。本发明实施例的所述方法可以应用在由可以移动位置的移动设备、能够在多个轴向上转动的云台、以及能够进行影像拍摄的摄像设备所组成的***中。具体的,本发明实施例的所述方法包括如下步骤。
S301:获取包括至少两组拍摄信息的信息集合,所述拍摄信息包括:拍摄到目标对象时的拍摄位置信息和拍摄角度信息。所述信息集合中可以包括两组拍摄信息,也可以包括多组拍摄信息。信息集合中的各组拍摄信息是在能够拍摄到目标对象时拍摄到的,具体的,对于拍摄设备拍摄到的影像,如果基于图像识别能够识别出目标对象,则可以记录拍摄该影像时的拍摄位置信息和拍摄角度信息。移动物体在移动拍摄目标对象的时候,可以根据需要得到至少两组拍摄信息。
优选地,拍摄信息可以是在无人机等移动物体在相对于目标对象作切向运动时,在不同的位置处获取到的。具体的,在沿目标对象作圆周运动时,每隔一定时间间隔,或者每隔一定距离间隔、或者移动后两个位置点所对应的圆心角大于或等于预设的角度阈值时,获取拍摄信息得到信息集合。
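One possible way to decide when a new set of shooting information should be added to the information set during the tangential (orbiting) motion is sketched below; the time, distance, and central-angle thresholds and the data layout are illustrative defaults, not values from the original text.

```python
def should_record(sample_prev, sample_now,
                  min_dt=2.0, min_dist=5.0, min_arc_deg=10.0):
    """Return True when a new shooting-information record should be taken.

    sample_*: dicts with 't' (timestamp, s), 'pos' ((x, y) coordinates) and
    'arc_deg' (central angle travelled on the orbit, degrees).  Any one of
    the three interval criteria is enough to trigger a new record."""
    if sample_now['t'] - sample_prev['t'] >= min_dt:
        return True
    dx = sample_now['pos'][0] - sample_prev['pos'][0]
    dy = sample_now['pos'][1] - sample_prev['pos'][1]
    if (dx * dx + dy * dy) ** 0.5 >= min_dist:
        return True
    return abs(sample_now['arc_deg'] - sample_prev['arc_deg']) >= min_arc_deg
```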
S302:基于从所述信息集合选取的至少两组拍摄信息,确定所述目标对象 的位置估计信息,其中,选取的各组拍摄信息中的拍摄位置信息所对应位置不相同。
从信息集合中选取拍摄信息的基本原则是保证能够计算到较为准确的关于目标对象的位置估计信息。具体可以基于拍摄信息中的拍摄位置信息计算得到的间隔距离来选取,和/或根据拍摄角度信息计算得到的间隔角度来选取对应的拍摄信息。例如,如果两组拍摄信息中拍摄位置信息所对应位置之间的间隔距离大于预设的距离阈值(10米)、且基于该两组拍摄信息中拍摄角度信息计算得到的间隔角度大于预设的角度阈值(10度),则选取该两组拍摄信息来计算位置估计信息。例如,如图1所示,选取的两组拍摄信息对应的d1和d2之间的距离大于预设的距离阈值,且根据yaw1和yaw2计算得到的圆心角大于预设的角度阈值。具体的,计算位置估计信息的方式可参考上述实施例中相关计算方式的描述,在此不赘述。
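The selection rule can be sketched as a search for a pair of records that satisfies both the distance and the angle thresholds; the 10 m and 10 degree defaults mirror the example values above, while the record layout and the use of the yaw difference as the separation angle are assumptions for illustration.

```python
import itertools
import math

def select_pair(info_set, min_dist=10.0, min_angle_deg=10.0):
    """Pick two sets of shooting information whose positions are far enough
    apart and whose yaw angles differ enough to give a well-conditioned
    position estimate.

    info_set: list of dicts with 'pos' = (x, y) and 'yaw_deg'."""
    for a, b in itertools.combinations(info_set, 2):
        dist = math.hypot(a['pos'][0] - b['pos'][0], a['pos'][1] - b['pos'][1])
        angle = abs(a['yaw_deg'] - b['yaw_deg']) % 360.0
        angle = min(angle, 360.0 - angle)      # wrap the difference to [0, 180]
        if dist >= min_dist and angle >= min_angle_deg:
            return a, b
    return None                                # no suitable pair yet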
S303:根据所述位置估计信息生成拍摄调整指令以调整摄像设备;所述拍摄调整指令用于调整所述摄像设备的拍摄角度,使所述位置估计信息所对应的位置在所述摄像设备的视场内,具体的可以是通过控制云台的转动来调整摄像设备。所述调整指令的具体生成方式可参考上述实施例中关于调整指令的生成过程的相关描述,在此不赘述。
本发明实施例通过拍摄位置和拍摄角度来对需要持续拍摄的目标对象进行位置估计,再基于位置估计得到的位置估计信息来调整拍摄模块的拍摄方向,实现方式简便快捷,并且可以有效避免因为目标对象被遮挡所导致的图像识别运算出错的问题。提高了对目标对象的持续拍摄效率。
再请参见图4,是本发明实施例的另一种拍摄控制方法的流程示意图,本发明实施例的所述方法可以由一个专用的控制设备实现,也可以由移动物体的移动控制器来实现,例如无人机的飞行控制器来实现,也可以由一个云台控制器来实现。本发明实施例的所述方法可以应用在由可以移动位置的移动设备、能够在多个轴向上转动的云台、以及能够进行影像拍摄的摄像设备所组成的***中。具体的,本发明实施例的所述方法包括如下步骤。
S401:获取包括至少两组拍摄信息的信息集合,所述拍摄信息包括:拍摄到目标对象时的拍摄位置信息和拍摄角度信息;
S402:对拍摄到的影像进行图像识别,以识别所述目标对象;可以基于影像的灰度、纹理等特征来从影像中找到目标对象。当识别出所述目标对象时,执行下述的S403,当未识别出所述目标对象时,执行下述的S404。
S403:对所述目标对象进行持续拍摄。具体可以继续基于图像识别技术找到影像中关于目标对象的影像,然后基于找到的目标对象的影像的位置,调整摄像设备,以便于在下一次拍摄的影像中也包括目标对象。具体的,如果本次拍摄到的影像中目标对象的影像位置相对于上一张影像的位置向下移动了一段像素距离,则本次会控制拍摄角度,是摄像设备往上方转动,以便于使得目标对象的显示位置仍然与上一张影像中的目标对象的显示位置大致相同。
S404:基于从所述信息集合选取的至少两组拍摄信息,确定所述目标对象的位置估计信息,其中,选取的各组拍摄信息中的拍摄位置信息所对应位置不相同。
其中,所述S404具体可以包括:基于至少三组拍摄信息,确定出至少两个所述目标对象的位置初始估计信息;检测已经确定的各个位置初始估计信息是否满足预设的稳定条件;若满足所述稳定条件,则从各个位置初始估计信息中确定出所述目标对象的位置估计信息。具体的,基于至少三组拍摄信息确定位置初始估计信息时,根据该至少三组拍摄信息中的任意两组拍摄信息可以确定一个位置初始估计信息,其中位置初始估计信息的计算可参考上述实施例中关于位置估计信息的计算方式。在本发明实施例中,所述S404确定的位置估计信息可以是从多个位置初始估计信息中随机选择的一个信息,或者是对多个位置初始估计信息所对应的位置坐标进行平均计算后的一个平均值。也可以是按照其他一些规则确定的位置估计信息,例如,将间隔距离最远、和/或间隔角度最大的两组拍摄信息计算得到的位置初始估计信息确定为位置估计信息。
其中可选地,当已经确定的各个位置初始估计信息中至少两个位置初始估计信息所对应位置之间的位置变化幅度满足预置变化幅度要求时,确定满足所述稳定条件。所述位置变化幅度主要是指位置之间的间隔距离,满足位置变化幅度要求主要包括:多个间隔距离均在一个预设的数值范围内。基于两个或者多个位置初始估计信息之间的位置变化幅度,可以确定计算得到的关于目标对象的位置估计是否稳定,位置变化幅度越小,说明计算得到的位置初始估计信 息较为准确,反之,则表明选取的拍摄信息存在不准确的情况,得到的位置初始估计信息存在不准确的量,无法确定出准确的位置估计信息,进而不能基于该位置估计信息对拍摄角度进行调整,不能基于位置估计信息对目标对象进行持续拍摄。
进一步地,导致多个位置初始估计信息之间的位置变化幅度较大的情况包括多种,例如,目标对象处于静止状态,在获取上述的信息集合时,其中的一个或多组拍摄信息的拍摄位置信息或拍摄角度信息不准确,进而导致计算得到的位置估计信息不准确。因此,在确定所述目标对象的位置估计信息时,基于计算得到的多个位置初始估计信息进行计算,例如上述的可以对多个位置初始估计信息进行平均计算后,得到的一个平均值作为所述目标对象的位置估计信息。
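A compact sketch of the stability check over the initial estimates and of the averaging described above; the spread threshold and the choice of averaging (rather than, for example, keeping the estimate from the farthest-apart pair of records) are assumptions for illustration.

```python
import itertools
import math

def fuse_initial_estimates(estimates, max_spread=3.0):
    """Check the preset stability condition over several initial position
    estimates and, if it holds, fuse them into one position estimate.

    estimates: list of (x, y, z) initial estimates computed from different
    pairs of shooting information.  max_spread is an illustrative bound on
    the pairwise distance between estimates (the 'change amplitude')."""
    for a, b in itertools.combinations(estimates, 2):
        if math.dist(a, b) > max_spread:
            return None                 # unstable: fall back to image recognition
    n = len(estimates)
    return tuple(sum(c) / n for c in zip(*estimates))   # average as the estimate
```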
可选地,若不满足所述稳定条件,则可以再采用其他对象跟踪技术找到所述目标对象进行持续拍摄,例如,进一步再基于更为复杂的图像识别技术在拍摄到的影像中识别出目标对象,以便于对所述目标对象进行持续拍摄。或者在无法找到目标对象后,自动发出目标丢失的提示消息,以通知终端用户。
进一步地,即使已经确定的各个位置初始估计信息满足预设的稳定条件,但存在特殊的情况,即:目标对象初始一段时间处于静止状态,在对该目标对象进行持续拍摄的过程中,目标对象移动了一段距离到达新的位置点。那么,最终确定的位置估计信息不是目标对象当前所在位置,如果基于计算得到的位置估计信息进行后续的摄像设备的调整操作,并不能够对目标对象进行持续拍摄。因此,在本发明实施例中,所述方法还包括:根据各个位置初始估计信息判断目标对象是否发生移动,若发生移动,则基于图像识别技术对所述目标对象进行持续拍摄处理。首先基于图像识别技术对影像进行识别,并根据识别结果对所述目标对象进行持续拍摄处理。如果判断结果为没有发生移动,则执行下述的S405。
S405:根据所述位置估计信息生成拍摄调整指令以调整摄像设备;所述拍摄调整指令用于调整所述摄像设备的拍摄角度,使所述位置估计信息所对应的位置在所述摄像设备的视场内。
其中可选地,所述拍摄调整指令具体用于调整所述摄像设备在位于特定位 置时的拍摄角度;如图5所示,是本发明实施例的一种调整摄像设备的方法的流程示意图,本发明实施例的所述方法对应于上述的S305。该方法具体可以包括如下步骤。
S501:确定所述目标对象在影像中的目标显示位置。所述目标显示位置可以是用户指定的在影像中用于显示目标对象的一个固定显示位置,也可以是指在切换到基于位置估计信息对目标对象进行持续拍摄时,目标对象在影像中的显示位置。
其中,所述目标显示位置可以是指:将预置的影像指定位置确定为目标显示位置,其中,所述预置的影像指定位置为通过接收用户在交互界面上选择的位置确定的。
S502:根据所述目标显示位置、所述位置估计信息以及所述特定位置,生成拍摄调整指令;所述拍摄调整指令用于调整所述摄像设备的拍摄角度,使得所述摄像设备在特定位置时,所述位置估计信息对应的位置在摄像设备的视场角内、且所述位置估计信息对应位置处的对象能够成像到拍摄影像的所述目标显示位置处。
具体的,所述确定所述目标对象在影像中的目标显示位置包括:将在影像中的指定位置确定为目标显示位置,而所述拍摄调整指令调整所述摄像设备的拍摄角度后,所述位置估计信息对应位置处的对象能够固定成像到拍摄影像的所述目标显示位置处。本发明实施例中,所述摄像设备拍摄的影像可以在一个用户界面中显示,在该用户界面上,用户可以指定一个目标显示位置,在对摄像设备进行调整时,同时要确保为目标对象计算的位置估计信息所对应位置的对象固定成像到该目标显示位置处。指定该目标显示位置的方式可以是在显示影像的用户界面上的点击选择进行指定,或者是拖动一个预置的选择框进行指定。可以配置一个用户界面,该用户界面能够获取用户操作,还可以同时显示摄像设备采集到的影像。
进一步可选地,所述确定所述目标对象在影像中的目标显示位置包括:将在影像中绘制的轨迹上的位置点作为目标显示位置,作为目标显示位置的位置点至少包括第一位置点和第二位置点。而所述根据所述目标显示位置、所述位置估计信息以及所述特定位置生成拍摄调整指令,则对应包括:根据预置的生 成策略,并根据所述位置估计信息和所述特定位置,至少生成与所第一位置点对应的第一拍摄调整指令和与所述第二位置点对应的第二拍摄调整指令,所述第一拍摄调整指令和第二拍摄调整指令用于调整所述摄像设备的拍摄角度,使得所述位置估计信息对应位置处的对象能够依次成像到拍摄影像的所述第一位置点和第二位置点。其中优选地,所述预置的生成策略是根据摄像设备的移动速度、摄像设备的移动位置、各个目标显示位置在所述目标显示轨迹上的位置中的任意一个或者多个预先设置的。其中,摄像设备的移动速度和移动位置主要是指搭载所述摄像设备的移动物体(例如无人机)的移动速度和移动位置。根据生成策略得到的多个调整指令所能够达到的目的包括:基于摄像设备的移动速度和移动位置,控制生成调整指令的生成速度,以便于位置估计信息所对应位置的对象在包括所述第一位置点和第二位置点的轨迹上以相应速度成像在拍摄到的影像中,和/或,基于各个目标显示位置在所述目标显示轨迹上的位置,生成调整指令,以便于位置估计信息所对应位置的对象按照预设的顺序成像在包括所述第一位置点和第二位置点的轨迹上,例如顺序依次成像在轨迹上的位置点上,或者间隔N个位置点成像等。可以配置一个用户界面,该用户界面能够获取用户操作,还可以同时显示摄像设备采集到的影像。
本发明实施例中,所述摄像设备拍摄的影像可以在一个用户界面中显示,用户可以在该用户界面上滑动,以得到一条滑动的轨迹,然后进一步从滑动的轨迹中确定了多个位置点,这些位置点作为目标显示位置会根据滑动的时间先后顺序进行排序,以便于后续生成调整指令调整摄像设备时,使得为目标对象生成的位置估计信息所对应位置处的对象先后成像在从轨迹上确认的多个位置点处。
为了确保位置估计信息所对应位置处的对象成像到对应的一个或者多个目标显示位置,用于生成控制指令所需的角度的计算方式如下所述。
所述特定位置是一个由用户指定的位置或者是移动物体移动后的位置,该位置坐标为已知。因此,根据上述计算方式可以得到偏航角的偏差del_yaw、俯仰角的偏差del_pitch,如果根据偏航角的偏差del_yaw、俯仰角的偏差del_pitch控制云台转动,可以使位置估计信息所对应位置成像在拍摄到的影像的中心位置。为了保证所述位置估计信息所对应位置成像在所述目标显示位 置,进一步地,基于所述目标显示位置的像素坐标(中心位置为坐标原点),得到目标显示位置相对于X轴(偏航角yaw)、Y轴(俯仰角pitch)的角度,那么,用于生成控制指令所需的角度即为:偏航角的偏差del_yaw加上目标显示位置相对于X轴的角度,俯仰角的偏差del_pitch加上目标显示位置相对于Y轴的角度。
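The extra angular offsets that move the imaged object from the image centre to a chosen target display position can be sketched as follows, reusing the same proportional pixel-to-angle mapping; sign conventions again depend on the image and gimbal axes, and all names are illustrative.

```python
def adjustment_for_display_position(del_yaw_deg, del_pitch_deg,
                                    disp_px, disp_py,
                                    image_width, image_height,
                                    hfov_deg, vfov_deg):
    """Extend the centre-aiming deviations so that the object at the
    estimated position is imaged at the target display position rather
    than at the image centre.

    disp_px / disp_py: pixel coordinates of the target display position,
    with the origin at the image centre."""
    yaw_extra = (disp_px / image_width) * hfov_deg
    pitch_extra = (disp_py / image_height) * vfov_deg
    return del_yaw_deg + yaw_extra, del_pitch_deg + pitch_extra
```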
需要说明的是,如果显示所述能够包括影像的用户界面的智能终端不是可以对摄像设备进行控制的终端,则可以通过信息交互的方式,由显示所述用户界面的智能终端将用户的指定位置或者滑动轨迹上的一个或者多个位置点发送给可以对摄像设备进行控制的终端,例如,显示包括影像的用户界面并获取指定位置的是智能手机,可以对摄像设备进行控制的终端为移动物体的移动控制器,此时,智能手机只需要将对应的目标显示位置发送给移动控制器即可,由移动控制器基于该目标显示位置进行其他相应的计算以及调整控制。
再请参见图6,是本发明实施例的一种移动控制方法的流程示意图,本发明实施例的所述方法可以在上述图3或图4所对应实施例的拍摄控制方法中,在得到关于某个目标对象的位置估计信息之后,还可以进一步地基于该位置估计信息进行移动控制。本发明实施例的所述方法可以由一个专用的控制设备实现,也可以由得到关于目标对象的位置估计信息的移动控制器,例如无人机的飞行控制器来实现。具体的,所述方法包括如下步骤。
S601:获取位置估计信息。所述位置估计信息可以所述位置估计信息的具体计算方式可参考上述各实施例中的相关描述,在此不赘述。
S602:根据获取的所述位置估计信息,生成移动控制指令以控制移动物体移动;所述移动控制指令用于控制搭载摄像设备的移动物体围绕所述位置估计信息对应的位置移动以改变移动物体的拍摄位置。
具体的,根据围绕飞行规则,移动物体可以绕所述位置估计信息所对应位置进行圆周移动,也可以做正方形、长方形等多边形移动。其中,该围绕飞行规则可以是默认设置的,也可以是用户自定义设置的。
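A circular fly-around path about the estimated position might be generated as in the sketch below; the radius, altitude, and number of waypoints stand in for whatever the orbit rule (default or user-defined) specifies, and are not values from the original text.

```python
import math

def orbit_waypoints(center, radius, altitude, n_points=36):
    """Generate circular waypoints around the position given by the
    position estimation information, which can back a fly-around movement
    control instruction.  center is the (x, y) of the estimated position."""
    cx, cy = center
    return [(cx + radius * math.cos(2 * math.pi * i / n_points),
             cy + radius * math.sin(2 * math.pi * i / n_points),
             altitude)
            for i in range(n_points)]
```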
在本发明实施例中,可以指定一个需要环绕移动的目标对象,具体可以在一个智能终端中通过用户界面来指定目标对象,在该用户界面上,可以同时显示拍摄到的影像,用户可以通过点击打点和/或拖动选择框的形式指定出目标 对象。在确定了目标对象后,会获取包括至少两组拍摄信息的信息集合,各组拍摄信息的具体组成和获取方式可参考上述各实施例中的描述。并进一步地基于该信息集合得到关于所述目标对象的位置估计信息,同样,该目标对象的位置估计信息的计算方式可参考上述各实施例中的描述。
在获取到位置估计信息后,可以直接基于该位置估计信息所对应的位置,并基于各种围绕飞行航线形状来控制移动物体绕该位置估计信息所对应的位置移动。在本发明实施例中,可以较为快捷地确定某个目标对象的位置估计信息,使得绕点移动更加自动化、智能化。
另外,上述各实施例详细描述了针对处于静止状态或者移动速度相对很小的目标对象的位置估计、以及持续拍摄的方案。对于运动的目标对象的位置估算,如果当无人机等移动物体相对于目标对象做快速的切向运动的时候,可以假设目标对象并没有剧烈的加减速动作,那么通过每隔一段距离或者角度的观测,就能够观测到目标的位置变化。但是毫无疑问,因为实际上并没有真实得知目标的运动速度和运动方向。观测得到的目标位置噪音是非常大的。此时,可以使用状态估计的方法从观测噪音中恢复出真实得目标运动方程。
常用并且合理的状态估计方法可以使用卡尔曼滤波器。针对这种情况,可以设计一个假设目标加速度为高斯噪音的运动模型,在这个运动模型下,目标对象的运动是不会发生突变运动的。那么经过一段时间的迭代,目标的状态方程最终将收敛到真实的运动方程。
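For a moving target, the constant-velocity Kalman filter suggested above can be set up per axis as in the following sketch, with the acceleration treated as Gaussian process noise; the matrix shapes, parameter names, and the per-axis decomposition are illustrative assumptions rather than details from the original disclosure.

```python
import numpy as np

def make_cv_kalman(dt, accel_std, meas_std):
    """State-transition (F), process-noise (Q), measurement (H) and
    measurement-noise (R) matrices for a constant-velocity model on one
    axis, with acceleration modelled as Gaussian noise.
    State x = [position, velocity]."""
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    G = np.array([[0.5 * dt * dt],
                  [dt]])
    Q = accel_std ** 2 * (G @ G.T)
    H = np.array([[1.0, 0.0]])
    R = np.array([[meas_std ** 2]])
    return F, Q, H, R

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle.  x is a length-2 array [pos, vel], P the
    2x2 covariance, z a length-1 array with the noisy observed position."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the new observation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Iterating this filter over successive noisy observations of the target position lets the state converge toward the target's true motion, as the paragraph above describes.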
本发明实施例通过拍摄位置和拍摄角度来对需要持续拍摄的目标对象进行位置估计,再基于位置估计得到的位置估计信息来调整拍摄模块的拍摄方向,实现方式简便快捷,可以有效避免因为目标对象被遮挡所导致的图像识别运算出错的问题。提高了对目标对象的拍摄效率。并且,在确定了目标对象的相应位置后,还可以基于位置实现相应的环绕飞行等移动功能。
本发明实施例还提供了一种计算机存储介质,该计算机存储介质中存储中程序指令,所述程序指令被运行时,执行上述各个实施例的方法。
下面对本发明实施例的拍摄控制装置以及控制设备进行描述。
请参见图7,是本发明实施例的一种拍摄控制装置的结构示意图,本发明实施例的所述装置可以设置在一个单独的控制设备中,也可以设置在移动控制 器或者是云台控制器,具体的,本发明实施例的所述装置包括如下模块。
获取模块701,用于获取包括至少两组拍摄信息的信息集合,所述拍摄信息包括:拍摄到目标对象时的拍摄位置信息和拍摄角度信息;确定模块702,基于从所述信息集合选取的至少两组拍摄信息,确定所述目标对象的位置估计信息,其中,选取的各组拍摄信息中的拍摄位置信息所对应位置不相同;控制模块703,用于根据所述位置估计信息生成拍摄调整指令以调整摄像设备;所述拍摄调整指令用于调整所述摄像设备的拍摄角度,使所述位置估计信息所对应的位置在所述摄像设备的视场内。
进一步可选地,所述确定模块702,具体用于基于至少三组拍摄信息,确定出至少两个所述目标对象的位置初始估计信息;检测已经确定的各个位置初始估计信息是否满足预设的稳定条件;若满足所述稳定条件,则根据各个位置初始估计信息确定出所述目标对象的位置估计信息。
进一步可选地,本发明实施例的所述装置还可以包括:第二识别模块704,用于若不满足所述稳定条件,则基于图像识别技术在拍摄到的影像中识别目标对象,以便于对所述目标对象进行持续拍摄。
进一步可选地,所述确定模块702,具体用于当已经确定的各个位置初始估计信息中至少两个位置初始估计信息所对应的位置之间的位置变化幅度满足预置变化幅度要求时,确定满足所述稳定条件。
进一步可选地,本发明实施例的所述装置还可以包括:第一识别模块705,用于对拍摄到的影像进行图像识别,以识别所述目标对象;当识别出所述目标对象时,对所述目标对象进行持续拍摄;当未识别出所述目标对象时,则通知所述确定模块702。
进一步可选地,所述拍摄调整指令具体用于调整所述摄像设备在位于特定位置时的拍摄角度;如图8所示,所述控制模块703包括:确定单元7031,用于确定所述目标对象在影像中的目标显示位置;控制单元7032,用于根据所述目标显示位置、所述位置估计信息以及所述特定位置生成拍摄调整指令;所述拍摄调整指令用于调整所述摄像设备的拍摄角度,使得所述摄像设备在特定位置时,所述位置估计信息对应的位置在摄像设备的视场角内、且所述位置估计信息对应位置处的对象能够成像到拍摄影像的所述目标显示位置处。
进一步可选地,所述确定单元7031,具体用于将在影像中的指定位置确定为目标显示位置;所述拍摄调整指令调整所述摄像设备的拍摄角度后,所述位置估计信息对应位置处的对象能够固定成像到拍摄影像的所述目标显示位置处。
进一步可选地,所述确定单元7031,具体用于将在影像中绘制的轨迹上的位置点作为目标显示位置,作为目标显示位置的位置点至少包括第一位置点和第二位置点。
所述控制单元7032,具体用于根据预置的生成策略,并根据所述位置估计信息和所述特定位置,至少生成与所第一位置点对应的第一拍摄调整指令和与所述第二位置点对应的第二拍摄调整指令,所述第一拍摄调整指令和第二拍摄调整指令用于调整所述摄像设备的拍摄角度,使得所述位置估计信息对应位置处的对象能够依次成像到拍摄影像的所述第一位置点和第二位置点。
进一步可选地,所述预置的生成策略是根据摄像设备的移动速度、摄像设备的移动位置、各个目标显示位置在所述目标显示轨迹上的位置中的任意一个或者多个预先设置的。
进一步可选地,本发明实施例的所述装置还可以包括:移动控制模块706,用于根据所述位置估计信息,生成移动控制指令以控制搭载所述摄像设备的移动物体移动;所述移动控制指令用于控制所述移动物体围绕所述位置估计信息对应的位置移动以改变拍摄位置。
进一步可选地,所述摄像设备是通过云台搭载在移动物体上,所述拍摄位置信息包括采集到的所述移动物体的位置坐标信息,所述拍摄角度信息包括所述云台的姿态信息和所述目标对象在拍摄得到的影像中的位置信息。
进一步可选地,所述确定模块702从所述信息集合选取至少两组拍摄信息所采用的选取规则包括:基于拍摄信息中的拍摄位置信息计算得到的间隔距离来选取拍摄信息;和/或,基于拍摄信息中的拍摄角度信息计算得到的间隔角度来选取拍摄信息。
具体的,本发明实施例中所述装置的各个模块、单元的具体实现可参考图1至图6所对应实施例中相关步骤、功能的具体描述,在此不赘述。
本发明实施例通过拍摄位置和拍摄角度来对需要持续拍摄的目标对象进 行位置估计,再基于位置估计得到的位置估计信息来调整拍摄模块的拍摄方向,实现方式简便快捷,可以有效避免因为目标对象被遮挡所导致的图像识别运算出错的问题。提高了对目标对象的持续拍摄效率。并且,在确定了目标对象的相应位置后,还可以基于位置实现相应的环绕飞行等移动功能。
再请参见图9,是本发明实施例的一种控制设备的结构示意图,本发明实施例的所述控制设备可以为一个专用设备,通过与无人机等移动物体和智能的云台等设备数据相连,完成对摄像设备的拍摄角度的控制。所述控制设备还可以为移动物体的移动控制器,例如无人机的飞行控制器,移动控制器与云台等设备数据相连,完成对摄像设备的拍摄角度的控制。所述控制设备还可以为云台等设备的控制器,与移动物体数据相连,完成对摄像设备的拍摄角度的控制。
所述控制设备可以包括电源模块,各种接口模块等,在本发明实施例中,所述控制设备还包括:处理器901、输出接口902以及存储器903,所述处理器901、输出接口902以及存储器903之间可以通过总线等方式数据相连。
所述存储器903可以包括易失性存储器(volatile memory),例如随机存取存储器903(random-access memory,RAM);存储器903也可以包括非易失性存储器(non-volatile memory),例如快闪存储器(flash memory),硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD);存储器903还可以包括上述种类的存储器的组合。
所述处理器901可以是中央处理器(central processing unit,缩写:CPU)。所述处理器901还可以进一步包括硬件芯片。上述硬件芯片可以是专用集成电路(application-specific integrated circuit,ASIC),可编程逻辑器件(programmable logic device,PLD)或其组合。上述PLD可以是复杂可编程逻辑器件(complex programmable logic device,CPLD),现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)或其任意组合。
可选地,所述存储器903还用于存储程序指令。所述处理器901可以调用所述程序指令,实现如本申请图3、4、5以及6所对应实施例中所示相关方法。
具体的,所述处理器901,调用所述存储器903中存储的程序指令,用于获取包括至少两组拍摄信息的信息集合,所述拍摄信息包括:拍摄到目标对象时的拍摄位置信息和拍摄角度信息;基于从所述信息集合选取的至少两组拍摄 信息,确定所述目标对象的位置估计信息,其中,选取的各组拍摄信息中的拍摄位置信息所对应位置不相同;根据所述位置估计信息生成拍摄调整指令以调整摄像设备;所述拍摄调整指令用于调整所述摄像设备的拍摄角度,使所述位置估计信息所对应的位置在所述摄像设备的视场内;所述输出接口902,用于输出所述调整指令以调整摄像设备。
进一步可选地,所述处理器901,具体用于基于至少三组拍摄信息,确定出至少两个所述目标对象的位置初始估计信息;检测已经确定的各个位置初始估计信息是否满足预设的稳定条件;若满足所述稳定条件,则根据各个位置初始估计信息确定出所述目标对象的位置估计信息。
进一步可选地,所述处理器901,还用于若不满足所述稳定条件,则基于图像识别技术在拍摄到的影像中识别目标对象,以便于对所述目标对象进行持续拍摄。
进一步可选地,所述处理器901,具体用于当已经确定的各个位置初始估计信息中至少两个位置初始估计信息所对应的位置之间的位置变化幅度满足预置变化幅度要求时,确定满足所述稳定条件。
进一步可选地,所述处理器901,还用于对拍摄到的影像进行图像识别,以识别所述目标对象;当识别出所述目标对象时,对所述目标对象进行持续拍摄;当未识别出所述目标对象时,执行所述确定所述目标对象的位置估计信息。
进一步可选地,所述拍摄调整指令具体用于调整所述摄像设备在位于特定位置时的拍摄角度;所述处理器901,具体用于确定所述目标对象在影像中的目标显示位置;根据所述目标显示位置、所述位置估计信息以及所述特定位置生成拍摄调整指令;所述拍摄调整指令用于调整所述摄像设备的拍摄角度,使得所述摄像设备在特定位置时,所述位置估计信息对应的位置在摄像设备的视场角内、且所述位置估计信息对应位置处的对象能够成像到拍摄影像的所述目标显示位置处。
进一步可选地,所述处理器901,具体用于将在影像中的指定位置确定为目标显示位置;所述拍摄调整指令调整所述摄像设备的拍摄角度后,所述位置估计信息对应位置处的对象能够固定成像到拍摄影像的所述目标显示位置处。
进一步可选地,所述处理器901,具体用于将在影像中绘制的轨迹上的位 置点作为目标显示位置,作为目标显示位置的位置点至少包括第一位置点和第二位置点;并用于根据预置的生成策略,并根据所述位置估计信息和所述特定位置,至少生成与所第一位置点对应的第一拍摄调整指令和与所述第二位置点对应的第二拍摄调整指令,所述第一拍摄调整指令和第二拍摄调整指令用于调整所述摄像设备的拍摄角度,使得所述位置估计信息对应位置处的对象能够依次成像到拍摄影像的所述第一位置点和第二位置点。
其中可选地,所述预置的生成策略是根据摄像设备的移动速度、摄像设备的移动位置、各个目标显示位置在所述目标显示轨迹上的位置中的任意一个或者多个预先设置的。
进一步可选地,所述处理器901,还用于根据所述位置估计信息,生成移动控制指令以控制搭载所述摄像设备的移动物体移动;所述移动控制指令用于控制所述移动物体围绕所述位置估计信息对应的位置移动以改变拍摄位置。
进一步可选地,所述摄像设备是通过云台搭载在移动物体上,所述拍摄位置信息包括采集到的所述移动物体的位置坐标信息,所述拍摄角度信息包括所述云台的姿态信息和所述目标对象在拍摄得到的影像中的位置信息。
进一步可选地,所述处理器901从所述信息集合选取至少两组拍摄信息所采用的选取规则包括:基于拍摄信息中的拍摄位置信息计算得到的间隔距离来选取拍摄信息;和/或,基于拍摄信息中的拍摄角度信息计算得到的间隔角度来选取拍摄信息。
具体的,本发明实施例中所述处理器901的具体实现可参考图1至图6所对应实施例中相关步骤、功能的具体描述,在此不赘述。
本发明实施例通过拍摄位置和拍摄角度来对需要持续拍摄的目标对象进行位置估计,再基于位置估计得到的位置估计信息来调整拍摄模块的拍摄方向,实现方式简便快捷,可以有效避免因为目标对象被遮挡所导致的图像识别运算出错的问题。提高了对目标对象的持续拍摄效率。并且,在确定了目标对象的相应位置后,还可以基于位置实现相应的环绕飞行等移动功能。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。 其中,所述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)或随机存储记忆体(Random Access Memory,RAM)等。
以上所揭露的仅为本发明较佳实施例而已,当然不能以此来限定本发明之权利范围,因此依本发明权利要求所作的等同变化,仍属本发明所涵盖的范围。

Claims (36)

  1. 一种拍摄控制方法,其特征在于,包括:
    获取包括至少两组拍摄信息的信息集合,所述拍摄信息包括:拍摄到目标对象时的拍摄位置信息和拍摄角度信息;
    基于从所述信息集合选取的至少两组拍摄信息,确定所述目标对象的位置估计信息,其中,选取的各组拍摄信息中的拍摄位置信息所对应位置不相同;
    根据所述位置估计信息生成拍摄调整指令以调整摄像设备;
    所述拍摄调整指令用于调整所述摄像设备的拍摄角度,使所述位置估计信息所对应的位置在所述摄像设备的视场内。
  2. 如权利要求1所述的方法,其特征在于,所述基于从所述信息集合选取的至少两组拍摄信息,确定所述目标对象的位置估计信息,包括:
    基于至少三组拍摄信息,确定出至少两个所述目标对象的位置初始估计信息;
    检测已经确定的各个位置初始估计信息是否满足预设的稳定条件;
    若满足所述稳定条件,则根据各个位置初始估计信息确定出所述目标对象的位置估计信息。
  3. 如权利要求2所述的方法,其特征在于,所述方法还包括:
    若不满足所述稳定条件,则基于图像识别技术在拍摄到的影像中识别目标对象,以便于对所述目标对象进行持续拍摄。
  4. 如权利要求2或3所述方法,其特征在于,当已经确定的各个位置初始估计信息中至少两个位置初始估计信息所对应的位置之间的位置变化幅度满足预置变化幅度要求时,确定满足所述稳定条件。
  5. 如权利要求1-4任一项所述的方法,其特征在于,在所述确定所述目标对象的位置估计信息之前,还包括:
    对拍摄到的影像进行图像识别,以识别所述目标对象;
    当识别出所述目标对象时,对所述目标对象进行持续拍摄;
    当未识别出所述目标对象时,执行所述确定所述目标对象的位置估计信息。
  6. 如权利要求1-5任一项所述的方法,其特征在于,所述拍摄调整指令具体用于调整所述摄像设备在位于特定位置时的拍摄角度;
    所述根据所述位置估计信息生成拍摄调整指令以调整摄像设备,包括:
    确定所述目标对象在影像中的目标显示位置;
    根据所述目标显示位置、所述位置估计信息以及所述特定位置生成拍摄调整指令;
    所述拍摄调整指令用于调整所述摄像设备的拍摄角度,使得所述摄像设备在特定位置时,所述位置估计信息对应的位置在摄像设备的视场角内、且所述位置估计信息对应位置处的对象能够成像到拍摄影像的所述目标显示位置处。
  7. 如权利要求6所述的方法,其特征在于,所述确定所述目标对象在影像中的目标显示位置,包括:
    将在影像中的指定位置确定为目标显示位置;
    所述拍摄调整指令调整所述摄像设备的拍摄角度后,所述位置估计信息对应位置处的对象能够固定成像到拍摄影像的所述目标显示位置处。
  8. 如权利要求6所述的方法,其特征在于,
    将在影像中绘制的轨迹上的位置点作为目标显示位置,作为目标显示位置的位置点至少包括第一位置点和第二位置点;
    所述根据所述目标显示位置、所述位置估计信息以及所述特定位置,生成拍摄调整指令,包括:
    根据预置的生成策略,并根据所述位置估计信息和所述特定位置,至少生成与所第一位置点对应的第一拍摄调整指令和与所述第二位置点对应的第二拍摄调整指令,所述第一拍摄调整指令和第二拍摄调整指令用于调整所述摄像设备的拍摄角度,使得所述位置估计信息对应位置处的对象能够依次成像到拍摄影像的所述第一位置点和第二位置点。
  9. 如权利要求8所述的方法,其特征在于,所述预置的生成策略是根据摄像设备的移动速度、摄像设备的移动位置、各个目标显示位置在所述目标显示轨迹上的位置中的任意一个或者多个预先设置的。
  10. 如权利要求1-9任一项所述的方法,其特征在于,还包括:
    根据所述位置估计信息,生成移动控制指令以控制搭载所述摄像设备的移动物体移动;
    所述移动控制指令用于控制所述移动物体围绕所述位置估计信息对应的位置移动以改变拍摄位置。
  11. 如权利要求1-10任一项所述的方法,其特征在于,所述摄像设备是通过云台搭载在移动物体上,所述拍摄位置信息包括采集到的所述移动物体的位置坐标信息,所述拍摄角度信息包括所述云台的姿态信息和所述目标对象在拍摄得到的影像中的位置信息。
  12. 如权利要求1-11任一项所述的方法,其特征在于,
    从所述信息集合选取至少两组拍摄信息所采用的选取规则包括:基于拍摄信息中的拍摄位置信息计算得到的间隔距离来选取拍摄信息;和/或,基于拍摄信息中的拍摄角度信息计算得到的间隔角度来选取拍摄信息。
  13. 一种拍摄控制装置,其特征在于,包括:
    获取模块,用于获取包括至少两组拍摄信息的信息集合,所述拍摄信息包括:拍摄到目标对象时的拍摄位置信息和拍摄角度信息;
    确定模块,用于基于从所述信息集合选取的至少两组拍摄信息,确定所述目标对象的位置估计信息,其中,选取的各组拍摄信息中的拍摄位置信息所对应位置不相同;
    控制模块,用于根据所述位置估计信息生成拍摄调整指令以调整摄像设备;所述拍摄调整指令用于调整所述摄像设备的拍摄角度,使所述位置估计信息所对应的位置在所述摄像设备的视场内。
  14. 如权利要求13所述的装置,其特征在于,
    所述确定模块,具体用于基于至少三组拍摄信息,确定出至少两个所述目标对象的位置初始估计信息;检测已经确定的各个位置初始估计信息是否满足预设的稳定条件;若满足所述稳定条件,则根据各个位置初始估计信息确定出所述目标对象的位置估计信息。
  15. 如权利要求14所述的装置,其特征在于,还包括:
    第二识别模块,用于若不满足所述稳定条件,则基于图像识别技术在拍摄到的影像中识别目标对象,以便于对所述目标对象进行持续拍摄。
  16. 如权利要求14或15所述装置,其特征在于,
    所述确定模块,具体用于当已经确定的各个位置初始估计信息中至少两个位置初始估计信息所对应的位置之间的位置变化幅度满足预置变化幅度要求时,确定满足所述稳定条件。
  17. 如权利要求13-16任一项所述的装置,其特征在于,还包括:
    第一识别模块,用于对拍摄到的影像进行图像识别,以识别所述目标对象;当识别出所述目标对象时,对所述目标对象进行持续拍摄;当未识别出所述目标对象时,则通知所述确定模块。
  18. 如权利要求13-17任一项所述的装置,其特征在于,所述拍摄调整指令具体用于调整所述摄像设备在位于特定位置时的拍摄角度;所述控制模块包括:
    确定单元,用于确定所述目标对象在影像中的目标显示位置;
    控制单元,用于根据所述目标显示位置、所述位置估计信息以及所述特定位置生成拍摄调整指令;所述拍摄调整指令用于调整所述摄像设备的拍摄角度,使得所述摄像设备在特定位置时,所述位置估计信息对应的位置在摄像设备的视场角内、且所述位置估计信息对应位置处的对象能够成像到拍摄影像的所述目标显示位置处。
  19. 如权利要求18所述的装置,其特征在于,
    所述确定单元,具体用于将在影像中的指定位置确定为目标显示位置;所述拍摄调整指令调整所述摄像设备的拍摄角度后,所述位置估计信息对应位置处的对象能够固定成像到拍摄影像的所述目标显示位置处。
  20. 如权利要求19所述的装置,其特征在于,
    所述确定单元,具体用于将在影像中绘制的轨迹上的位置点作为目标显示位置,作为目标显示位置的位置点至少包括第一位置点和第二位置点;
    所述控制单元,具体用于根据预置的生成策略,并根据所述位置估计信息和所述特定位置,至少生成与所第一位置点对应的第一拍摄调整指令和与所述第二位置点对应的第二拍摄调整指令,所述第一拍摄调整指令和第二拍摄调整指令用于调整所述摄像设备的拍摄角度,使得所述位置估计信息对应位置处的对象能够依次成像到拍摄影像的所述第一位置点和第二位置点。
  21. 如权利要求20所述的装置,其特征在于,所述预置的生成策略是根据摄像设备的移动速度、摄像设备的移动位置、各个目标显示位置在所述目标显示轨迹上的位置中的任意一个或者多个预先设置的。
  22. 如权利要求13-21任一项所述的装置,其特征在于,还包括:
    移动控制模块,用于根据所述位置估计信息,生成移动控制指令以控制搭载所述摄像设备的移动物体移动;所述移动控制指令用于控制所述移动物体围绕所述位置估计信息对应的位置移动以改变拍摄位置。
  23. 如权利要求13-22任一项所述的装置,其特征在于,所述摄像设备是通过云台搭载在移动物体上,所述拍摄位置信息包括采集到的所述移动物体的位置坐标信息,所述拍摄角度信息包括所述云台的姿态信息和所述目标对象在拍摄得到的影像中的位置信息。
  24. 如权利要求13-23任一项所述的装置,其特征在于,从所述信息集合 选取至少两组拍摄信息所采用的选取规则包括:基于拍摄信息中的拍摄位置信息计算得到的间隔距离来选取拍摄信息;和/或,基于拍摄信息中的拍摄角度信息计算得到的间隔角度来选取拍摄信息。
  25. 一种控制设备,其特征在于,包括:处理器和输出接口;
    所述处理器,用于获取包括至少两组拍摄信息的信息集合,所述拍摄信息包括:拍摄到目标对象时的拍摄位置信息和拍摄角度信息;基于从所述信息集合选取的至少两组拍摄信息,确定所述目标对象的位置估计信息,其中,选取的各组拍摄信息中的拍摄位置信息所对应位置不相同;根据所述位置估计信息生成拍摄调整指令以调整摄像设备;所述拍摄调整指令用于调整所述摄像设备的拍摄角度,使所述位置估计信息所对应的位置在所述摄像设备的视场内;
    所述输出接口,用于输出所述调整指令以调整摄像设备。
  26. 如权利要求25所述的控制设备,其特征在于,
    所述处理器,具体用于基于至少三组拍摄信息,确定出至少两个所述目标对象的位置初始估计信息;检测已经确定的各个位置初始估计信息是否满足预设的稳定条件;若满足所述稳定条件,则根据各个位置初始估计信息确定出所述目标对象的位置估计信息。
  27. 如权利要求26所述的控制设备,其特征在于,
    所述处理器,还用于若不满足所述稳定条件,则基于图像识别技术在拍摄到的影像中识别目标对象,以便于对所述目标对象进行持续拍摄。
  28. 如权利要求26或27所述的控制设备,其特征在于,
    所述处理器,具体用于当已经确定的各个位置初始估计信息中至少两个位置初始估计信息所对应的位置之间的位置变化幅度满足预置变化幅度要求时,确定满足所述稳定条件。
  29. 如权利要求25-28任一项所述的控制设备,其特征在于,
    所述处理器,还用于对拍摄到的影像进行图像识别,以识别所述目标对象; 当识别出所述目标对象时,对所述目标对象进行持续拍摄;当未识别出所述目标对象时,执行所述确定所述目标对象的位置估计信息。
  30. 如权利要求25-29任一项所述的控制设备,其特征在于,所述拍摄调整指令具体用于调整所述摄像设备在位于特定位置时的拍摄角度;
    所述处理器,具体用于确定所述目标对象在影像中的目标显示位置;根据所述目标显示位置、所述位置估计信息以及所述特定位置,生成拍摄调整指令;所述拍摄调整指令用于调整所述摄像设备的拍摄角度,使得所述摄像设备在特定位置时,所述位置估计信息对应的位置在摄像设备的视场角内、且所述位置估计信息对应位置处的对象能够成像到拍摄影像的所述目标显示位置处。
  31. 如权利要求30所述的控制设备,其特征在于,
    所述处理器,具体用于将在影像中的指定位置确定为目标显示位置;所述拍摄调整指令调整所述摄像设备的拍摄角度后,所述位置估计信息对应位置处的对象能够固定成像到拍摄影像的所述目标显示位置处。
  32. 如权利要求31所述的控制设备,其特征在于,
    所述处理器,具体用于将在影像中绘制的轨迹上的位置点作为目标显示位置,作为目标显示位置的位置点至少包括第一位置点和第二位置点;并用于根据预置的生成策略,并根据所述位置估计信息和所述特定位置,至少生成与所第一位置点对应的第一拍摄调整指令和与所述第二位置点对应的第二拍摄调整指令,所述第一拍摄调整指令和第二拍摄调整指令用于调整所述摄像设备的拍摄角度,使得所述位置估计信息对应位置处的对象能够依次成像到拍摄影像的所述第一位置点和第二位置点。
  33. 如权利要求32所述的控制设备,其特征在于,
    所述预置的生成策略是根据摄像设备的移动速度、摄像设备的移动位置、各个目标显示位置在所述目标显示轨迹上的位置中的任意一个或者多个预先设置的。
  34. 如权利要求25-33任一项所述的控制设备,其特征在于,
    所述处理器,还用于根据所述位置估计信息,生成移动控制指令以控制搭载所述摄像设备的移动物体移动;所述移动控制指令用于控制所述移动物体围绕所述位置估计信息对应的位置移动以改变拍摄位置。
  35. 如权利要求25-34任一项所述的控制设备,其特征在于,所述摄像设备是通过云台搭载在移动物体上,所述拍摄位置信息包括采集到的所述移动物体的位置坐标信息,所述拍摄角度信息包括所述云台的姿态信息和所述目标对象在拍摄得到的影像中的位置信息。
  36. 如权利要求25-35任一项所述的控制设备,其特征在于,
    所述处理器从所述信息集合选取至少两组拍摄信息所采用的选取规则包括:基于拍摄信息中的拍摄位置信息计算得到的间隔距离来选取拍摄信息;和/或,基于拍摄信息中的拍摄角度信息计算得到的间隔角度来选取拍摄信息。
PCT/CN2016/108446 2016-12-02 2016-12-02 一种拍摄控制方法、装置以及控制设备 WO2018098824A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
PCT/CN2016/108446 WO2018098824A1 (zh) 2016-12-02 2016-12-02 一种拍摄控制方法、装置以及控制设备
CN201680030410.8A CN107710283B (zh) 2016-12-02 2016-12-02 一种拍摄控制方法、装置以及控制设备
US16/426,975 US10897569B2 (en) 2016-12-02 2019-05-30 Photographing control method, apparatus, and control device
US17/151,335 US11575824B2 (en) 2016-12-02 2021-01-18 Photographing control method, apparatus, and control device
US18/164,811 US11863857B2 (en) 2016-12-02 2023-02-06 Photographing control method, apparatus, and control device
US18/544,884 US20240155219A1 (en) 2016-12-02 2023-12-19 Photographing control method, apparatus, and control device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/108446 WO2018098824A1 (zh) 2016-12-02 2016-12-02 一种拍摄控制方法、装置以及控制设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/426,975 Continuation US10897569B2 (en) 2016-12-02 2019-05-30 Photographing control method, apparatus, and control device

Publications (1)

Publication Number Publication Date
WO2018098824A1 true WO2018098824A1 (zh) 2018-06-07

Family

ID=61169419

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/108446 WO2018098824A1 (zh) 2016-12-02 2016-12-02 一种拍摄控制方法、装置以及控制设备

Country Status (3)

Country Link
US (4) US10897569B2 (zh)
CN (1) CN107710283B (zh)
WO (1) WO2018098824A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021026780A1 (zh) * 2019-08-13 2021-02-18 深圳市大疆创新科技有限公司 拍摄控制方法、终端、云台、***及存储介质
CN114187349A (zh) * 2021-11-03 2022-03-15 深圳市正运动技术有限公司 产品加工方法、装置、终端设备以及存储介质

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110873563B (zh) * 2018-08-30 2022-03-08 杭州海康机器人技术有限公司 一种云台姿态估计方法及装置
CN109062220B (zh) * 2018-08-31 2021-06-29 创新先进技术有限公司 控制终端运动的方法和装置
CN109976533B (zh) * 2019-04-15 2022-06-03 珠海天燕科技有限公司 显示控制方法及装置
CN110086988A (zh) * 2019-04-24 2019-08-02 薄涛 拍摄角度调整方法、装置、设备及其存储介质
CN110083180A (zh) * 2019-05-22 2019-08-02 深圳市道通智能航空技术有限公司 云台控制方法、装置、控制终端及飞行器***
CN110225250A (zh) * 2019-05-31 2019-09-10 维沃移动通信(杭州)有限公司 一种拍照方法及终端设备
CN110191288B (zh) * 2019-07-17 2021-05-18 图普科技(广州)有限公司 一种摄像机位置调整方法及装置
CN112640422A (zh) * 2020-04-24 2021-04-09 深圳市大疆创新科技有限公司 拍摄方法、可移动平台、控制设备和存储介质
WO2021258251A1 (zh) * 2020-06-22 2021-12-30 深圳市大疆创新科技有限公司 用于可移动平台的测绘方法、可移动平台和存储介质
CN112913221A (zh) * 2020-07-20 2021-06-04 深圳市大疆创新科技有限公司 图像处理方法、装置、穿越机、图像优化***及存储介质
CN111932623A (zh) * 2020-08-11 2020-11-13 北京洛必德科技有限公司 一种基于移动机器人的人脸数据自动采集标注方法、***及其电子设备
CN112261281B (zh) * 2020-09-03 2022-08-02 科大讯飞股份有限公司 视野调整方法及电子设备、存储装置
CN112843739B (zh) * 2020-12-31 2023-04-28 上海米哈游天命科技有限公司 拍摄方法、装置、电子设备及存储介质
CN113179371B (zh) * 2021-04-21 2023-04-07 新疆爱华盈通信息技术有限公司 一种拍摄方法、装置及抓拍***
CN113206958B (zh) * 2021-04-30 2023-06-09 成都睿铂科技有限责任公司 一种航线拍摄方法
CN114758208B (zh) * 2022-06-14 2022-09-06 深圳市海清视讯科技有限公司 考勤设备调整方法、装置、电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104243796A (zh) * 2013-06-11 2014-12-24 索尼公司 摄影装置、摄影方法、模板创建装置和模板创建方法
CN104881650A (zh) * 2015-05-29 2015-09-02 成都通甲优博科技有限责任公司 一种基于无人机动平台的车辆跟踪方法
CN105353772A (zh) * 2015-11-16 2016-02-24 中国航天时代电子公司 一种无人机机动目标定位跟踪中的视觉伺服控制方法
CN105979133A (zh) * 2015-10-22 2016-09-28 乐视移动智能信息技术(北京)有限公司 一种跟踪拍摄的方法、移动终端和***
WO2016162973A1 (ja) * 2015-04-08 2016-10-13 オリンパス株式会社 細胞追跡修正方法、細胞追跡修正装置及びコンピュータにより読み取り可能な細胞追跡修正プログラムを一時的に記憶する記録媒体

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE05858317T1 (de) * 2005-11-15 2009-04-30 Bell Helicopter Textron, Inc., Fort Worth Flugsteuersystem für automatische Kreisflüge
US7724188B2 (en) * 2008-05-23 2010-05-25 The Boeing Company Gimbal system angle compensation
KR20120119144A (ko) * 2011-04-20 2012-10-30 주식회사 레이스전자 카메라 기반 지능형 관리 장치 및 방법
CN104145474A (zh) * 2011-12-07 2014-11-12 英特尔公司 引导式图像拍摄
JP6518069B2 (ja) * 2015-01-09 2019-05-22 キヤノン株式会社 表示装置、撮像システム、表示装置の制御方法、プログラム、及び記録媒体
KR20160114434A (ko) * 2015-03-24 2016-10-05 삼성전자주식회사 전자 장치 및 전자 장치의 이미지 촬영 방법
CA2929254C (en) * 2016-05-06 2018-12-11 SKyX Limited Unmanned aerial vehicle (uav) having vertical takeoff and landing (vtol) capability
CA3030349A1 (en) * 2016-09-28 2018-05-05 Federal Express Corporation Systems and methods for monitoring the internal storage contents of a shipment storage using one or more internal monitor drones
US20180295335A1 (en) * 2017-04-10 2018-10-11 Red Hen Systems Llc Stereographic Imaging System Employing A Wide Field, Low Resolution Camera And A Narrow Field, High Resolution Camera
CN114397903A (zh) * 2017-05-24 2022-04-26 深圳市大疆创新科技有限公司 一种导航处理方法及控制设备
CN113163119A (zh) * 2017-05-24 2021-07-23 深圳市大疆创新科技有限公司 拍摄控制方法及装置
US10479243B2 (en) * 2017-12-05 2019-11-19 Ford Global Technologies, Llc Air channel thermocomfort foam pad
US20210061465A1 (en) * 2018-01-15 2021-03-04 Hongo Aerospace Inc. Information processing system
US10574881B2 (en) * 2018-02-15 2020-02-25 Adobe Inc. Smart guide to capture digital images that align with a target image model
US11687869B2 (en) * 2018-02-22 2023-06-27 Flytrex Aviation Ltd. System and method for securing delivery using an autonomous vehicle
EP3889928B1 (en) * 2018-11-28 2023-08-16 Panasonic Intellectual Property Management Co., Ltd. Unmanned aerial vehicle, control method, and program
JP2021179718A (ja) * 2020-05-12 2021-11-18 トヨタ自動車株式会社 システム、移動体、及び、情報処理装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104243796A (zh) * 2013-06-11 2014-12-24 索尼公司 摄影装置、摄影方法、模板创建装置和模板创建方法
WO2016162973A1 (ja) * 2015-04-08 2016-10-13 オリンパス株式会社 細胞追跡修正方法、細胞追跡修正装置及びコンピュータにより読み取り可能な細胞追跡修正プログラムを一時的に記憶する記録媒体
CN104881650A (zh) * 2015-05-29 2015-09-02 成都通甲优博科技有限责任公司 一种基于无人机动平台的车辆跟踪方法
CN105979133A (zh) * 2015-10-22 2016-09-28 乐视移动智能信息技术(北京)有限公司 一种跟踪拍摄的方法、移动终端和***
CN105353772A (zh) * 2015-11-16 2016-02-24 中国航天时代电子公司 一种无人机机动目标定位跟踪中的视觉伺服控制方法

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021026780A1 (zh) * 2019-08-13 2021-02-18 深圳市大疆创新科技有限公司 拍摄控制方法、终端、云台、***及存储介质
CN114187349A (zh) * 2021-11-03 2022-03-15 深圳市正运动技术有限公司 产品加工方法、装置、终端设备以及存储介质
CN114187349B (zh) * 2021-11-03 2022-11-08 深圳市正运动技术有限公司 产品加工方法、装置、终端设备以及存储介质

Also Published As

Publication number Publication date
US20210144296A1 (en) 2021-05-13
US20240155219A1 (en) 2024-05-09
US20230188825A1 (en) 2023-06-15
US20190281209A1 (en) 2019-09-12
CN107710283B (zh) 2022-01-28
US11575824B2 (en) 2023-02-07
CN107710283A (zh) 2018-02-16
US11863857B2 (en) 2024-01-02
US10897569B2 (en) 2021-01-19

Similar Documents

Publication Publication Date Title
WO2018098824A1 (zh) 一种拍摄控制方法、装置以及控制设备
CN112567201B (zh) 距离测量方法以及设备
CN108476288B (zh) 拍摄控制方法及装置
CN113038016B (zh) 无人机图像采集方法及无人机
WO2020014909A1 (zh) 拍摄方法、装置和无人机
WO2020107372A1 (zh) 拍摄设备的控制方法、装置、设备及存储介质
US8897543B1 (en) Bundle adjustment based on image capture intervals
US11983898B2 (en) Monitoring method, electronic device and storage medium
US20200267309A1 (en) Focusing method and device, and readable storage medium
WO2018072063A1 (zh) 一种对飞行器的飞行控制方法、装置及飞行器
CN109035330A (zh) 箱体拟合方法、设备和计算机可读存储介质
WO2021168804A1 (zh) 图像处理方法、图像处理装置和图像处理***
CN110602376B (zh) 抓拍方法及装置、摄像机
WO2020181506A1 (zh) 一种图像处理方法、装置及***
CN109814588A (zh) 飞行器以及应用于飞行器的目标物追踪***和方法
CN103581562A (zh) 全景拍摄的方法和装置
JP2023010769A (ja) 情報処理装置、制御方法、及びプログラム
WO2021217403A1 (zh) 可移动平台的控制方法、装置、设备及存储介质
WO2018121794A1 (zh) 一种控制方法、电子设备及存储介质
CN111935389B (zh) 拍摄对象切换方法、装置、拍摄设备及可读存储介质
WO2022000211A1 (zh) 拍摄***的控制方法、设备、及可移动平台、存储介质
WO2022040988A1 (zh) 图像处理方法、装置及可移动平台
CN106845363A (zh) 巡航拍摄跟踪的方法及装置
JP2021103410A (ja) 移動体及び撮像システム
JP6950273B2 (ja) 飛行物***置検知装置、飛行物***置検知システム、飛行物***置検知方法及びプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16922761

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16922761

Country of ref document: EP

Kind code of ref document: A1