WO2019127395A1 - Drone photographing method, image processing method and apparatus - Google Patents

Drone photographing method, image processing method and apparatus

Info

Publication number
WO2019127395A1
WO2019127395A1 (PCT/CN2017/119916)
Authority
WO
WIPO (PCT)
Prior art keywords
target
image
photographing
drone
user
Prior art date
Application number
PCT/CN2017/119916
Other languages
English (en)
French (fr)
Inventor
韩峰
杨康
谷骞
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2017/119916 priority Critical patent/WO2019127395A1/zh
Priority to CN201780031839.3A priority patent/CN110192168B/zh
Publication of WO2019127395A1 publication Critical patent/WO2019127395A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the present application relates to the field of drone technology, and more particularly to a drone photographing method, an image processing method and apparatus using an aerial drone for photographing.
  • Aerial photography, also known as aerial shooting or aviation photography, refers to photographing the earth's landscape from the air to obtain a top-down view; such a picture is an aerial image.
  • An aerial drone is a drone that carries an image acquisition device and can provide aerial photography services. With the development of aerial drone technology, many drones can now be controlled by user gestures or body postures to take selfies. Meanwhile, terminals such as smartphones or tablets connected to the drone can share the photos taken by the drone to social networking sites.
  • In the above process of drone selfies and photo sharing, the user can only use a preset gesture or body posture to control the drone to take the selfie, and cannot choose which gesture or body posture controls the shot.
  • the present invention provides a UAV photographing method, an image processing method and a device, so that a user can select a target photographing posture to trigger a photographing action of the drone according to his own needs.
  • A drone photographing method, comprising: acquiring a target photographing posture selected by the user from a gesture list; controlling the flight path of the drone according to a target shooting location; acquiring image information pre-acquired by the image acquisition device; extracting the user's posture information from the pre-acquired image information; and comparing the posture information with the target photographing posture and, when the similarity between the posture information and the target photographing posture is greater than a preset value, controlling the image acquisition device to perform a recording or photographing action.
  • An image data processing method: acquiring and storing image information uploaded by a user together with the shooting location and target photographing posture that match the image information.
  • A drone photographing apparatus, comprising:
  • a processor, configured to provide a pre-stored gesture list to the user and obtain the target photographing posture selected by the user from the gesture list;
  • a flight trajectory control unit, configured to control the flight path of the drone according to the target shooting location; and
  • an image processing unit, configured to acquire the image information pre-acquired by the image acquisition device, extract the user's posture information from the pre-acquired image information, compare the posture information with the target photographing posture and, when the similarity between the posture information and the target photographing posture is greater than a preset value, control the image acquisition device to perform a recording or photographing action.
  • An image data processing apparatus, comprising:
  • a data collection unit, configured to acquire and store image information uploaded by a user together with the shooting location and target photographing posture that match the image information.
  • A storage medium, comprising a first memory and a drone processor; the memory is configured to store program code, and the drone processor is configured to invoke the program code and, when the program code is executed, to perform the following operations: acquiring a target photographing posture selected by the user from a gesture list; controlling the flight path of the drone according to the acquired user-specified target shooting location; acquiring image information pre-acquired by the image acquisition device; extracting the user's posture information from the pre-acquired image information; and comparing the posture information with the target photographing posture and, when the similarity is greater than a preset value, controlling the image acquisition device to perform a recording or photographing action.
  • A storage medium, comprising a second memory and a server processor; the memory is configured to store program code, and the server processor is configured to invoke the program code and, when the program code is executed, to perform the following operations: acquiring and storing image information uploaded by a user together with the target photographing posture that matches the image information; and calculating and storing completion-degree information of the uploaded image information according to the user posture in the image information and the target photographing posture.
  • Through the above technical solutions, compared with the prior art, the embodiments of the present invention provide the above drone photographing method and apparatus: before taking a selfie, the user can select from the selfie list the target photographing posture used this time to control the drone's image acquisition device to perform image acquisition. Once the target photographing posture is selected, the flight path of the drone is controlled according to the position coordinates of the target shooting location; the drone automatically recognizes the pre-acquired image information, extracts the user's posture information from the pre-acquired image information, compares the posture information with the target photographing posture, and determines whether the similarity between the two is greater than a set value; if so, the drone is controlled to perform a photographing or recording action. Therefore, the present application allows an arbitrary target photographing posture to be selected according to the user's needs to control the drone action, enhancing the user experience.
  • FIG. 1 is a schematic flowchart of a method for photographing a drone according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of determining a photographing position of a drone according to an embodiment of the present application
  • FIG. 3 is a schematic flow chart of a method for controlling a flight path of a drone during self-photographing according to an embodiment of the present application
  • FIG. 4 is a schematic structural diagram of a camera device for an unmanned aerial vehicle according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a storage medium according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of a storage medium according to another embodiment of the present disclosure.
  • the present application discloses a drone photographing method and apparatus.
  • the specific implementation process of the above-mentioned UAV photographing method may include:
  • Step S101 Acquire a target photographing gesture selected by the user from the gesture list
  • In this step, a number of photographing postures may be stored in advance in the gesture list. When the user wants to use the drone to take a selfie, the user may select one photographing posture from the gesture list as the target photographing posture according to his or her own needs; that is, the target photographing posture is used to trigger the drone's image acquisition action.
  • The photographing postures in the gesture list can be downloaded directly from the network, or obtained by recognizing existing photos. Specifically, the drone is controlled to perform image recognition on an existing target image, the user posture in the target image is identified, and that user posture is imported into the gesture list as a photographing posture.
  • Step S102 controlling the flight path of the drone according to the target shooting location
  • In this step, the target shooting location is the place where the user takes the selfie; when the user strikes the target photographing posture at the target shooting location, the drone can be triggered to perform the image acquisition action. To ensure that the target shooting location lies within the drone's image acquisition range, the user may manually control the drone to fly to the photographing position, or the drone may fly there automatically and control its image acquisition device to face the user; the photographing position refers to the position of the drone when it photographs the user.
  • Step S103 Acquire image information pre-acquired by the image collection device.
  • Before the formal image acquisition, pre-acquisition is performed. The image information collected during pre-acquisition is not necessarily stored: it may be discarded, stored after downsampling or compression, or stored in full.
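  • As an illustration of the pre-acquisition stage described above, the following minimal Python sketch (assuming OpenCV is available and the drone camera is exposed as an ordinary video source; the device index, scale factor, and buffer size are hypothetical choices, not values from the patent) grabs frames continuously and keeps only downsampled copies in a small rolling buffer instead of persisting everything:

      import collections
      import cv2  # OpenCV; assumed available purely for illustration

      def preacquire(source=0, scale=0.25, buffer_size=30):
          """Continuously grab frames, keeping only downsampled copies in a rolling buffer."""
          cap = cv2.VideoCapture(source)          # hypothetical video source for the drone camera
          buffer = collections.deque(maxlen=buffer_size)
          try:
              while cap.isOpened():
                  ok, frame = cap.read()
                  if not ok:
                      break
                  # Downsample before buffering so pre-acquisition stays cheap; full frames are not stored.
                  small = cv2.resize(frame, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
                  buffer.append(small)
                  yield small                      # hand the reduced frame to the posture-recognition step
          finally:
              cap.release()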
  • Step S104 extracting posture information of the user from pre-acquired image information
  • In this step, the image information collected in the pre-acquisition stage is processed, the user in the image information is detected, and the user posture in the pre-acquired image information is determined according to the detected user figure;
  • Step S106 determining whether the similarity between the posture information and the target posture is greater than a preset value, and if yes, executing step S107;
  • In this solution, a preset value may be set in advance; when the similarity between the target posture and the pre-acquired user posture is greater than the preset value, step S107 is performed to control the image acquisition device to operate.
  • Step S107 Control the image capturing device to perform a recording or photographing action.
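  • The patent does not fix a particular similarity measure for steps S104–S107. As one possible reading, the sketch below (hypothetical keypoint arrays, NumPy only; the threshold value is illustrative) compares a detected body pose against the selected target photographing posture and triggers the camera when the score exceeds the preset value:

      import numpy as np

      def pose_similarity(user_keypoints, target_keypoints):
          """Cosine similarity between two flattened keypoint arrays of shape [N, 2]."""
          u = np.asarray(user_keypoints, dtype=float).ravel()
          t = np.asarray(target_keypoints, dtype=float).ravel()
          u = (u - u.mean()) / (u.std() + 1e-8)   # crude normalization for translation/scale
          t = (t - t.mean()) / (t.std() + 1e-8)
          return float(np.dot(u, t) / (np.linalg.norm(u) * np.linalg.norm(t) + 1e-8))

      def maybe_trigger_capture(user_keypoints, target_keypoints, threshold=0.9, capture=None):
          """Trigger the recording/photographing action when similarity exceeds the preset value."""
          if pose_similarity(user_keypoints, target_keypoints) > threshold:
              if capture is not None:
                  capture()                        # e.g. a callback that asks the camera to shoot
              return True
          return False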
  • In the technical solution disclosed in the above embodiments of the present application, before taking a selfie the user can select, from the selfie list, the target photographing posture used to control the drone's image acquisition device to perform image acquisition. Once the target photographing posture is selected, the flight path of the drone is controlled according to the position coordinates of the target shooting location; the drone automatically recognizes the pre-acquired image information, extracts the user's posture information from it, compares the posture information with the target photographing posture, and determines whether the similarity between the two is greater than a set value; if so, the drone is controlled to perform a photographing or recording action. Therefore, the present application allows an arbitrary target photographing posture to be selected according to the user's needs to control the drone action, enhancing the user experience.
  • In the technical solution disclosed in the embodiments of the present application, the user may select the target shooting location according to his or her own needs. For example, when the user is at the target shooting location, the drone acquires the user's coordinate position and uses it as the target shooting location; this coordinate position may be collected by a smart device such as a mobile phone and sent to the drone, or collected automatically by the drone's positioning system.
  • When the drone has acquired the target shooting location, the user can select a photographing mode according to his or her own needs, such as a half-length shot, a full-body shot, or a long shot, and the drone determines the photographing position according to the selected photographing mode; the photographing position is where the drone is located when photographing the user. The drone control system automatically controls the drone to fly to the photographing position and adjusts the acquisition direction of the image acquisition device so that the image acquisition range of the drone's image acquisition device covers the target shooting location. After the drone reaches the photographing position and the image acquisition device is aimed at the user, the image acquisition device performs image pre-acquisition. When aiming the image acquisition device at the user, the image acquisition direction may be controlled through communication between the drone and the drone's remote controller, or a smart device such as a mobile phone bound to the drone, so that the image acquisition direction faces the user.
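  • The patent only states that the acquisition direction is steered via communication with the remote controller or bound phone. As one illustrative geometric reading (a local east-north-up frame and metre-valued position tuples are assumptions, not from the patent), the gimbal yaw and pitch that point the camera from the drone at the user can be computed as:

      import math

      def aim_angles(drone_pos, user_pos):
          """Yaw and pitch (degrees) that point the camera from drone_pos at user_pos.
          Positions are (east, north, up) triples in metres in a hypothetical local frame."""
          dx = user_pos[0] - drone_pos[0]
          dy = user_pos[1] - drone_pos[1]
          dz = user_pos[2] - drone_pos[2]
          yaw = math.degrees(math.atan2(dx, dy))                    # heading from north, clockwise
          pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # negative when looking down
          return yaw, pitch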
  • Of course, in addition to the above way of selecting the target shooting location, in the technical solution disclosed in the embodiments of the present application the target shooting location may also be selected from image information collected by the drone. Specifically, the process is as follows: the drone is controlled to acquire images of a target area, obtaining a target-area captured image bound with depth information; the user sends a target-shooting-location selection instruction to the drone by clicking on the target-area captured image, the drone takes the clicked position in the image as the target shooting location, and the coordinate information of the target shooting location is then calculated from the depth information bound to the target-area captured image.
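  • The patent only says that the clicked position plus the bound depth information yields the coordinates of the target shooting location. Under the common pinhole-camera assumption (the intrinsics fx, fy, cx, cy are hypothetical inputs, not specified by the patent), the conversion could look like the sketch below; mapping the result into a world frame would additionally require the drone's pose at capture time.

      import numpy as np

      def pixel_to_local_coords(u, v, depth, fx, fy, cx, cy):
          """Back-project a clicked pixel (u, v) with depth (metres) into camera-frame
          coordinates using a pinhole model; fx, fy, cx, cy are assumed-known intrinsics."""
          x = (u - cx) * depth / fx
          y = (v - cy) * depth / fy
          z = depth
          return np.array([x, y, z])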
  • Of course, when taking a photo the user may also want a representative feature of the shooting location to appear in the image, so in the above scheme a target shooting area may also be selected. In this case, the user inputs a shooting-area selection instruction to the drone and selects the extent of the target shooting area in the depth-bound target-area captured image. Once the target shooting area is selected, the drone first roughly controls its flight path according to the target shooting location so that it reaches the photographing position; after the drone reaches the photographing position, the pre-acquired captured images are analyzed to determine whether the scenery of the target shooting area is already present in the drone's pre-acquired image information. If not, the drone's position information and/or the shooting direction of the image acquisition device continue to be adjusted until the background of the pre-acquired image information contains the scenery of the target shooting area, and the drone's position at that moment is taken as the finally determined photographing position.
  • In the technical solution disclosed in the embodiments of the present application, different shooting environments permit different photographing postures; for example, the photographing postures selectable at a target shooting location with a horizontal bar differ from those at a target shooting location on flat ground. Therefore, in the above embodiments, when the user selects the target shooting location from image information, the drone may automatically perform image recognition on the background of the target shooting location to obtain environmental background information, retrieve the selectable photographing postures according to the environmental background information, and load the retrieved photographing postures into the gesture list.
  • In actual shooting, besides the requirement on the shooting location, the user may also have special requirements on the shooting angle; for example, the drone may need to shoot from a top-down angle, or be controlled to shoot from a low, upward-looking angle. Therefore, in addition to the target shooting location and the target shooting area, a target shooting angle may also be selected. To this end, before controlling the drone's flight path according to the target shooting location, the above method may further include:
  • acquiring the shooting angle input by the user, where the shooting angle may be, for example, a top-down shot, an upward (low-angle) shot, a left-side shot, a right-side shot, and so on. Because the same posture looks different from different angles, a change in shooting angle changes the captured result; even if the user strikes the target photographing posture, a different drone shooting angle may prevent the drone from recognizing the posture. Therefore, in the above solution, to enable the drone to respond to the user's posture information at different shooting angles, after the user inputs the shooting angle the target photographing posture is also adjusted according to the input shooting angle, and the adjusted target photographing posture is used as the target photographing posture in the subsequent comparison;
  • After the user inputs the shooting angle, the photographing position of the drone also needs to be adjusted accordingly. In the technical solution disclosed in the above embodiments, controlling the drone's flight path according to the target shooting location so that the image acquisition range of the drone's image acquisition device covers the target shooting location specifically becomes: controlling the drone's flight path according to the target shooting location and the shooting angle, so that the image acquisition range of the drone's image acquisition device covers the target shooting location and images of the target shooting location can be acquired at the specified shooting angle.
  • In this solution, after the drone reaches the photographing position determined by the target shooting location, the drone can communicate with the drone's remote controller or a pre-bound mobile phone and compute the direction angle between the drone and the remote controller or mobile phone. The drone's position is then adjusted according to the shooting angle so that the direction angle between the drone and the remote controller or phone matches the shooting angle, bringing the drone to the photographing position required for that shooting angle. The user can also obtain, through the remote controller or the mobile phone bound to the drone, the image information pre-acquired by the drone's image acquisition device, and can use the remote controller or phone to fine-tune the drone's photographing position, the shooting angle, and the focal length of the image acquisition device, so that the photographing effect is optimized.
  • the process of adjusting the photographing position of the unmanned aerial vehicle may be:
  • Step S201 controlling the flight path of the drone according to the coordinate position of the target shooting location, so that the drone is at the same level as the target shooting location;
  • The flight path of the drone is controlled according to the coordinate position of the target shooting location so that the drone is at the same horizontal level as the target shooting location, the image acquisition device faces the user, and the image acquisition range of the drone's image acquisition device covers the target shooting location. When adjusting the drone to the same level as the target shooting location, the drone can be controlled by comparing the drone's coordinates with the coordinates of the target shooting location, so that the drone and the target shooting location are at the same level and the image acquisition device faces the target shooting location. Alternatively, the user may first reach the target shooting location, and the drone then detects and adjusts the direction angle between itself and the drone's remote controller or a pre-bound mobile phone, so that the drone is at the same level as the target shooting location and faces it;
  • Step S202: the drone automatically adjusts the distance between itself and the target shooting location so that the distance between the two equals a preset distance value. Once the drone is at the same level as the target shooting location and facing the target shooting position, the drone automatically adjusts the distance between itself and the target shooting location to the preset distance value. The size of the preset distance value may be chosen by the user, for example determined by the photographing mode selected above; when calculating the distance between the two, the coordinate distance between the drone and the target shooting location, the remote controller, or the pre-bound mobile phone may be used;
  • Step S203 adjusting the coordinate position of the drone in the vertical plane according to the shooting angle
  • After the distance between the drone and the target shooting location has been adjusted, the coordinate position of the drone in the vertical plane is adjusted according to the shooting angle so that the drone reaches the coordinate position corresponding to the shooting angle; this position is recorded as the drone's photographing position.
  • When adjusting for the shooting angle, the vector through which the drone must move in the vertical plane may first be calculated, including the drone's moving direction and moving distance in the vertical plane; this vector can be calculated from the distance between the drone and the target shooting point and the shooting angle input by the user. For example, when the distance between the drone and the target shooting location equals the preset distance and the user inputs an elevation angle of 45 degrees, the angle between the drone and the target shooting location in the vertical plane must be controlled to 45 degrees with the drone lower than the target shooting location; the required moving direction of the drone is vertically downward and the moving distance equals the preset distance. Alternatively, when adjusting the shooting angle, the drone's moving direction may be calculated from the shooting angle input by the user, the drone is controlled to move in the vertical plane along the calculated direction, and the direction angle between the drone and the target shooting location is computed in real time during the movement; the direction angle is compared with the shooting angle in real time, and when the two angles are equal the drone has reached the desired shooting angle and its position is held.
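  • The 45-degree example above reduces to elementary trigonometry. The sketch below reproduces it and generalizes to other angles; the numeric distance and the sign convention (negative means descend) are illustrative assumptions:

      import math

      def vertical_move(horizontal_distance, elevation_deg):
          """Vertical offset the drone must fly (negative = downward) so that the line from
          the drone up to the subject makes the requested elevation angle, starting level."""
          return -horizontal_distance * math.tan(math.radians(elevation_deg))

      # Worked example from the text: preset horizontal distance d and a 45-degree elevation
      # angle give a vertically downward move of exactly d (tan 45 degrees = 1).
      d = 5.0                      # hypothetical preset distance in metres
      print(vertical_move(d, 45))  # -5.0, i.e. descend by the preset distance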
  • In the technical solution disclosed in the embodiments of the present application, to determine the drone's photographing position in a simpler way, when the user takes a selfie with the drone at a certain target shooting location, the user may first manually control the drone's flight path with the drone's remote controller, switch the drone's image acquisition device on and off, and have the drone acquire images of the target shooting location from a roughly estimated photographing position. After the image acquisition device is switched on, the drone's shooting attitude is adjusted through the remote controller, and the drone records in real time the binding relationship between its own coordinate position, its shooting attitude, and each image frame collected by the image acquisition device. When image acquisition finishes, the user can select an ideal frame from the captured video images and choose the target shooting location from that image.
  • the method for determining the photographing position of the drone at this time may be:
  • Step S301 Acquire a target area captured image for selecting a target shooting location
  • the target area captured image is a captured image used when the user selects a target shooting location. For example, if the user selects a target shooting location from the image A, the image A is the target captured image;
  • Step S302 acquiring a UAV coordinate position corresponding to the target area captured image and a photographing posture
  • In this step, the drone matches each image captured by the image acquisition device with a drone coordinate position and a shooting attitude. The drone coordinate position is the spatial coordinate position of the drone at the moment the image acquisition device collected that image, and the shooting attitude is the attitude information of the image acquisition device when it acquired that image;
  • Step S303 Control the drone to fly to the coordinate position of the drone, and adjust the shooting angle of the drone according to the shooting attitude.
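  • Steps S301–S303 amount to keeping a per-frame log of drone position and camera attitude and replaying the entry bound to the frame the user picks. A minimal data-structure sketch (class and field names are ours for illustration, not the patent's):

      from dataclasses import dataclass, field
      from typing import List, Tuple

      @dataclass
      class FrameRecord:
          """Binding between one captured frame and the drone state at capture time."""
          frame_id: int
          drone_position: Tuple[float, float, float]   # recorded spatial coordinates
          camera_attitude: Tuple[float, float, float]  # e.g. gimbal roll, pitch, yaw in degrees

      @dataclass
      class FlightLog:
          records: List[FrameRecord] = field(default_factory=list)

          def bind(self, frame_id, position, attitude):
              self.records.append(FrameRecord(frame_id, position, attitude))

          def photographing_position_for(self, frame_id):
              """Steps S302-S303: look up the position/attitude bound to the chosen frame."""
              for r in self.records:
                  if r.frame_id == frame_id:
                      return r.drone_position, r.camera_attitude
              raise KeyError(frame_id)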
  • To make it easier for the system to suggest shooting postures suitable for the target shooting location, in the technical solution disclosed in the embodiments of the present application, once the target shooting location is determined the user may select the type of the target shooting location, and the system offers, according to that type, the various photographing postures the user can perform and imports them into the gesture list. The method can also create the gesture list from the support object selected by the user for the shot: the user can configure different shooting postures for different supports in advance and establish a binding relationship between the shooting postures and these supports. In that case, the method for generating the gesture list includes:
  • selecting the support object corresponding to the target shooting location, where the support object is the object that bears the user's point of support while the drone performs image acquisition; when selecting the support, the user may choose, from a preset support list, the object that provides the point of support when photographing at the target shooting location. Because different points of support allow different postures, once the user selects a support the photographing postures matching that support need to be retrieved from the posture database list according to the support, and the retrieved photographing postures are imported into the gesture list.
  • When selecting the support, the user can enter the support type manually. Alternatively, if the target shooting location was determined from a picture, the support type can be determined by performing image recognition on the scenery near the target shooting location in that image. In that case, selecting the support corresponding to the target shooting location includes: acquiring the target-area captured image corresponding to the target shooting location; performing image recognition on the target shooting location based on that image to obtain shape information and/or material information of the objects at the location; and determining the corresponding support from the shape information and/or material information. For example, the support corresponding to the location may be a water surface, a horizontal bar, a vertical bar, and so on. The image recognition here may specifically mean: performing image processing on the captured image using a convolutional-neural-network classification method or a cluster-analysis method to obtain the background information of the target shooting location, and performing image recognition on that background information.
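  • However the support type is obtained (manual entry or background classification), the retrieval step is essentially a lookup from support type to the postures pre-bound to it. A toy sketch follows; the support categories and posture names are hypothetical examples, not a list taken from the patent:

      # Illustrative only: hypothetical support-to-posture bindings.
      POSTURES_BY_SUPPORT = {
          "water_surface": ["floating star", "back float"],
          "horizontal_bar": ["pull-up hold", "bar lean"],
          "vertical_bar":   ["pole lean", "one-arm hang"],
          "flat_ground":    ["jump shot", "standing wave"],
      }

      def postures_for_support(support_type, database=POSTURES_BY_SUPPORT):
          """Retrieve the photographing postures bound to a recognized (or user-entered)
          support type and return them as the candidate gesture list."""
          return list(database.get(support_type, database["flat_ground"]))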
  • In the technical solution disclosed in the embodiments of the present application, to let the user understand the difficulty of each posture in the gesture list and choose a posture of suitable difficulty according to his or her own coordination, a difficulty coefficient is preset for each posture; before the pre-stored gesture list is provided to the user, the method may further include: sorting the photographing postures in the gesture list according to the difficulty coefficient corresponding to each photographing posture.
  • To make it easier for the user to organize the captured image information, the foregoing solution may further include: establishing and storing a mapping relationship among the shooting location, the target photographing posture, and the image information obtained after the photographing action is completed. The shooting location may be the geographic position of the drone, located on a map, at the time of the photographing action.
  • In addition, the present application also discloses an image data processing method for managing image data. The method comprises: acquiring and storing image information uploaded by a user together with the target photographing posture, or the shooting location and target photographing posture, that match the image information; the image information here refers to a picture. The definitions of the shooting location and target photographing posture are as described above, and the data handled by this method may be collected using the method described above; that is, the image information uploaded by the user may be image information obtained by any of the drone photographing methods of the present application.
  • To make it convenient to show one's achievements to other users, the above scheme may also calculate the completion degree of the image information uploaded by the user. The completion degree refers to the similarity between the user posture in the uploaded image information and the corresponding target photographing posture; the higher the similarity, the more standard the action performed by the user during the selfie. Therefore, calculating and storing the completion-degree information of the uploaded image information may specifically be: performing image processing on the image information to extract the user's posture information; and calculating the similarity between the posture information and the target photographing posture, taking the similarity as the completion degree of the image information. Specifically, a computer-vision or machine-learning algorithm may be used to identify the user's body posture during the selfie in the uploaded image information, the body posture is compared with the target photographing posture to calculate the similarity between the two, and the degree of completion of the user's body posture during the selfie is evaluated according to the similarity result.
  • In addition to the completion degree, the above method can also calculate a difficulty-completion coefficient for the posture in the image information. The specific process for calculating the difficulty-completion coefficient of the posture is: extracting preset parameters, and calculating the difficulty-completion coefficient of the posture of the image information from the preset parameters. The preset parameters include at least: the action difficulty coefficient corresponding to the target photographing posture, the background factor corresponding to the scene information of the image information, and the reciprocal of the total number of people, as counted on the server, who have completed the target photographing posture under each kind of scene information. The calculation process is: the image information uploaded by the user is analyzed to obtain the scene information corresponding to the image information, and the difficulty-completion coefficient S of the posture is calculated from the formula S = A × B × C + D, where A is the action difficulty coefficient corresponding to the target photographing posture; B is the background factor corresponding to the scene information, obtained by analyzing the uploaded image and combining factors such as the angle between the selfie-taker's body and the ground, the area and position of the point of support, and its angle with the body, so that a variety of different selfie scenes are obtained, each corresponding to one background factor; C is the reciprocal of the total number of people, as counted on the server, who have completed the target photographing posture under that scene information; and D is a preset constant that shifts the value range of the body-posture difficulty coefficient to around some fixed value.
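  • The formula S = A × B × C + D transcribes directly into code; the numeric values below are hypothetical placeholders used only to show the arithmetic:

      def difficulty_completion_coefficient(a, b, total_completed, d=1.0):
          """S = A * B * C + D, with C the reciprocal of the number of people who have
          completed the target posture under this scene (per the formula in the text)."""
          c = 1.0 / max(total_completed, 1)   # guard against division by zero for a new scene
          return a * b * c + d

      # Example: action difficulty 3.0, background factor 1.2, 40 prior completions, offset 1.0
      print(difficulty_completion_coefficient(3.0, 1.2, 40, 1.0))  # 1.09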
  • To show one's selfie results to other users, or to make it easier to organize selfie data, the method may further include: generating achievement data of the uploading user at the shooting location. The content of the achievement data may be set according to user requirements; for example, the achievement data includes, but is not limited to, data characterizing: the uploading user's position, ordered by shooting time, among all users who have photographed at the shooting location using the target photographing posture;
  • the total number of photographing postures the uploading user has used when photographing at the shooting location (see the bookkeeping sketch after this list of achievement types). Specifically, after the image information uploaded by the user is obtained, it is determined whether the target shooting location has already been recorded in the system to which this method is applied; if not, a posture-count achievement for the target shooting location is recorded in the system, where the posture-count achievement includes a used-posture count and a target-photographing-posture set. The used-posture count refers to how many different target photographing postures have already been used for selfies at the target shooting location, and the target-photographing-posture set records the target photographing postures the user has used for selfies there. If the target shooting location is already recorded, the posture-count achievement corresponding to the target shooting location is obtained and it is determined whether the target photographing posture corresponding to the uploaded image information is already stored in the target-photographing-posture set of the posture-count achievement; if it is, the posture count is kept unchanged, and if it is not, the posture count is incremented by one;
  • the number of times the uploading user has completed a photograph with the target photographing posture. Specifically, this is the posture-usage-count achievement, which mainly records the total number of times the user has completed a selfie using each target photographing posture; for example, the user has performed N selfies in total using target photographing posture A.
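  • The posture-count bookkeeping referenced above is essentially "add to a per-location set and report its size". A minimal sketch (class and method names are ours, not the patent's):

      from collections import defaultdict

      class LocationAchievements:
          """Posture-count achievement per shooting location, as described above."""
          def __init__(self):
              self.postures_by_location = defaultdict(set)   # location -> set of postures used

          def record_upload(self, location, target_posture):
              """Add the posture if unseen at this location; the count is the set size."""
              postures = self.postures_by_location[location]
              before = len(postures)
              postures.add(target_posture)
              return len(postures) > before                   # True if the count was incremented

          def used_posture_count(self, location):
              return len(self.postures_by_location[location])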
  • Further, to present one's selfie results more intuitively to other users, the above scheme may also include: marking the achievement data on the image information. In this step, the achievement data associated with the image information uploaded by the user may be annotated on the image information; for example, it may be labeled as the Nth selfie taken using target photographing posture A, as the (N+1)th user to photograph at the target shooting location with the target photographing posture, and/or with the posture-count achievement corresponding to the uploaded image, and so on.
  • the foregoing method may further include:
  • generating an achievement list corresponding to the photographing postures; the achievement list records the number of times the uploading user has completed a photograph with each target photographing posture, that is, the number of selfies taken with each posture, for example: target photographing posture A and the total number of selfies the user has taken with posture A, target photographing posture B and the total number taken with posture B, and so on.
  • the foregoing method may further include:
  • generating an achievement map corresponding to the shooting locations and marking the shooting locations on it. Specifically, in this solution the achievement content may be provided to the user in the form of an achievement map, which may be provided as an electronic map marked with all the shooting locations corresponding to the image information the user has uploaded; the user can click a marked location to obtain all the achievement information associated with it. For example, when the user clicks the shooting location S marked on the achievement map, the user may be shown the following achievements: the user is the (N+1)th user to photograph at target shooting location S with the target photographing posture, the user has taken a total of X selfies at target shooting location S, the user has already used Y photographing postures for selfies at that shooting location, and so on.
  • the present application also discloses a drone photographing device.
  • For the specific work of each unit of the apparatus in this embodiment, refer to the content of the drone photographing method embodiment above; the drone photographing apparatus described below and the drone photographing method described above may be cross-referenced with each other.
  • the apparatus for photographing a drone may include:
  • the processor 100 which corresponds to step S101 in the above method, is configured to provide a pre-stored gesture list to the user, and acquire a target photographing gesture selected by the user from the gesture list;
  • a flight trajectory control unit 200 which corresponds to step S102 in the above method, for controlling the flight path of the drone according to the target shooting location;
  • the image processing unit 300 which corresponds to the steps S103-S107 in the above method, is configured to acquire image information pre-acquired by the image collection device, and extract the posture information of the user from the pre-acquired image information, The posture information is compared with the target photographing posture. When the similarity between the posture information and the target photographing posture is greater than a preset value, the image capturing device is controlled to perform a recording or photographing action.
  • the foregoing apparatus may further include:
  • a shooting-location selection unit, configured to retrieve the target-area captured image obtained when the drone photographs the target area, and to select the target shooting location from the target-area captured image according to a user instruction.
  • the processor is further configured to: perform image analysis on the target shooting location, and retrieve and provide the user with a pre-stored gesture list corresponding to the target shooting location.
  • the flight trajectory control unit is specifically configured to: control the flight path of the drone according to the coordinate position of the target shooting location, so that the image capturing range of the image capturing device of the drone covers the target shooting location.
  • the processor is further configured to: acquire a shooting angle input by a user;
  • Corresponding to the above method, the flight trajectory control unit may be specifically configured to: control the drone's flight path according to the target shooting location and the shooting angle, so that the image acquisition range of the drone's image acquisition device covers the target shooting location and images can be acquired at that shooting angle; adjust the drone's position so that the horizontal distance between the user and the drone equals the preset distance value, and adjust the drone's position in the vertical plane according to the shooting angle; or acquire the target-area captured image used to select the target shooting location, acquire the drone coordinate position corresponding to that image, and control the drone to fly to that coordinate position.
  • The processor may be specifically configured to: select the support object corresponding to the target shooting location, the support being the object that bears the user's point of support while the drone performs image acquisition; retrieve from the posture database list the photographing postures matching the support; import the retrieved photographing postures into the gesture list; provide the pre-stored gesture list to the user; and acquire the target photographing posture selected by the user from the gesture list.
  • When the processor performs image recognition on the target shooting location according to the target-area captured image, the processor is specifically configured to: perform image processing on the captured image to obtain the background information of the target shooting location, and perform image recognition on the background information.
  • the processor further includes:
  • a sorting unit configured to sort each photographing gesture in the gesture list according to a difficulty coefficient corresponding to each photographing gesture in the gesture list.
  • the above device further includes:
  • the mapping relationship storage unit is configured to establish and store a mapping relationship between the shooting location, the target photographing posture, and the image information obtained after the photographing action is completed.
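  • The mapping-relationship storage unit only needs to persist a three-way association between shooting location, target photographing posture, and the resulting image. A minimal sketch using SQLite (the storage backend and the table/column names are illustrative assumptions, not specified by the patent):

      import sqlite3

      def store_mapping(db_path, shooting_location, target_posture, image_path):
          """Persist one (shooting location, target posture, resulting image) record."""
          conn = sqlite3.connect(db_path)
          conn.execute(
              "CREATE TABLE IF NOT EXISTS shots "
              "(location TEXT, posture TEXT, image_path TEXT)"
          )
          conn.execute(
              "INSERT INTO shots (location, posture, image_path) VALUES (?, ?, ?)",
              (shooting_location, target_posture, image_path),
          )
          conn.commit()
          conn.close()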
  • the present application further discloses an image data processing apparatus.
  • the apparatus may include:
  • the data collection unit 400 is configured to acquire and store image information uploaded by the user and a target photographing posture that matches the image information.
  • the image information uploaded by the user may be image information collected by the unmanned aerial camera device described in any of the above embodiments.
  • the completion degree calculation unit 500 is configured to calculate and store the degree of completion information of the image information uploaded by the user according to the user gesture and the target photographing posture in the image information.
  • the completion degree calculation unit is specifically configured to:
  • the image data processing apparatus further includes:
  • the difficulty coefficient calculation unit is configured to calculate a difficulty completion coefficient of the posture of the image information.
  • the difficulty coefficient calculation unit is specifically configured to:
  • the preset parameters include at least: the action difficulty coefficient corresponding to the target photographing posture, the background factor corresponding to the scene information of the image information, and the reciprocal of the total number of people, as counted on the server, who have completed the target photographing posture under each kind of scene information.
  • the data collection unit is further configured to acquire a shooting location that matches the image information uploaded by the user;
  • the image data processing device further includes:
  • An achievement unit configured to generate achievement data of the uploading user at the shooting location
  • the achievement data includes, but is not limited to, characterization:
  • the achievement unit is further configured to mark the achievement data on the image information.
  • the achievement unit is further configured to generate an achievement list corresponding to the photographing posture, and the achievement list corresponding to the photographing posture records the number of times the uploading user makes each of the target photographing gestures complete the photographing.
  • the achievement unit is further configured to: determine whether the shooting location has already been marked on the achievement map, and if not, set an annotation at that shooting point on the achievement map and associate the achievement data related to the shooting location with the annotation.
  • the present application also discloses a storage medium.
  • the storage medium may include:
  • the storage medium further includes a communication interface 13 and a communication bus 14, wherein the first memory 11, the drone processor 12, and the communication interface 13 communicate with each other via the communication bus 14.
  • the first memory 11 is for storing program code; the program code includes computer operation instructions.
  • the first memory 11 may include a high speed RAM memory and may also include a non-volatile memory such as at least one disk memory.
  • the UAV processor 12 can be a central processing unit CPU, or an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
  • the UAV processor 12 is configured to invoke the program code, and when the program code is executed, is used to execute any of the above-described drone photographing methods of the present application.
  • the drone processor can also perform any of the above-described drone photographing methods of the present application when the program code is executed.
  • the present application further discloses a storage medium.
  • the storage medium may include:
  • the storage medium further includes a communication interface 23 and a communication bus 24, wherein the second memory 21, the server processor 22, and the communication interface 23 communicate with each other via the communication bus 24.
  • the second memory 21 is configured to store program code; the program code includes computer operation instructions.
  • the second memory 21 may include a high speed RAM memory and may also include a non-volatile memory such as at least one disk memory.
  • the server processor 22 may be a central processing unit CPU, or an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
  • the server processor 22 is configured to invoke the program code, and when the program code is executed, is used to execute any of the above image data processing methods of the present application.
  • Completion information of the image information uploaded by the user is calculated and stored according to the user gesture in the image information and the target photographing posture.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide a drone photographing method and apparatus for photographing with a drone. Before taking a selfie, the user can select from a selfie list the target photographing posture used this time to control the drone's image acquisition device to perform image acquisition. Once the target photographing posture is selected, the flight path of the drone is controlled according to the position coordinates of the target shooting location; the drone automatically recognizes the pre-acquired image information, extracts the user's posture information from the pre-acquired image information, compares the posture information with the target photographing posture, and determines whether the similarity between the two is greater than a set value; if so, the drone is controlled to perform a photographing or recording action. Thus, the present application allows an arbitrary target photographing posture to be selected according to the user's needs to control the drone action, enhancing the user experience.

Description

一种无人机拍照方法、图像处理方法和装置 技术领域
本申请涉及无人机技术领域,更具体涉及一种采用航拍无人机进行拍照的无人机拍照方法、图像处理方法和装置。
背景技术
航拍又称空中摄影或航空摄影,是指从空中拍摄地球地貌,获得俯视图,此图即为空照图。
航拍无人机指的是携带有图像采集设备的、能够提供航拍服务的无人机,随着航拍无人机技术的发展,现在已经有多款无人机可以通过用户手势或身体姿态控制完成自拍。同时,与无人机连接的智能手机或平板电脑等终端也可以将无人机拍得的照片分享到社交网站。
上述无人机自拍及照片分享的过程中,用户只能使用一种预先设定好的手势或身体姿态控制无人机进行自拍,用户无法选择控制自拍的手势或身体姿态。
发明内容
有鉴于此,本发明提供了一种无人机拍照方法、图像处理方法和装置,使得用户可依据自身需求选择目标拍照姿态触发无人机的拍照动作。
一种无人机拍照方法,包括:
获取用户从姿态列表中选择的目标拍照姿态;
依据目标拍摄地点控制无人机飞行轨迹;
获取所述图像采集装置预采集到的图像信息;
由预采集到的图像信息中提取所述用户的姿态信息;
将所述姿态信息与所述目标拍照姿态进行对比,当姿态信息与所述目标拍照姿态的相似度大于预设值时,控制所述图像采集装置执行录像或拍照动作。
一种图像数据处理方法,获取并存储用户上传的图像信息以及与所述图像信息相匹配的拍摄地点、目标拍照姿态。
一种无人机拍照装置,包括:
处理器,用于向用户提供预存的姿态列表,获取用户从姿态列表中选择的 目标拍照姿态;
飞行轨迹控制单元,用于依据目标拍摄地点控制无人机飞行轨迹;
图像处理单元,用于获取图像采集装置预采集到的图像信息,由预采集到的图像信息中提取所述用户的姿态信息,将所述姿态信息与所述目标拍照姿态进行对比,当姿态信息与所述目标拍照姿态的相似度大于预设值时,控制所述图像采集装置执行录像或拍照动作。
一种图像数据处理装置,包括:
数据采集单元,用于获取并存储用户上传的图像信息以及与所述图像信息相匹配的拍摄地点、目标拍照姿态。
一种存储介质,包括:
第一存储器和无人机处理器;
所述存储器用于存储程序代码,所述无人机处理器用于调用所述程序代码,当所述程序代码被执行时,用于执行以下操作:
获取用户从姿态列表中选择的目标拍照姿态;
依据获取的用户指定目标拍摄地点控制无人机飞行轨迹;
获取所述图像采集装置预采集到的图像信息;
从预采集到的图像信息中提取所述用户的姿态信息;
将所述姿态信息与所述目标拍照姿态进行对比,当姿态信息与所述目标拍照姿态的相似度大于预设值时,控制所述图像采集装置执行录像或拍照动作。
一种存储介质,包括:
第二存储器和服务器处理器;
所述存储器用于存储程序代码,所述无人机处理器用于调用所述程序代码,当所述程序代码被执行时,用于执行以下操作:
获取并存储用户上传的图像信息以及与所述图像信息相匹配的目标拍照姿态;
依据所述图像信息中的用户姿态和所述目标拍照姿态计算并存储用户上传的图像信息的完成度信息。
经由上述的技术方案可知,与现有技术相比,本发明实施例提供了的上述 无人机拍照方法和装置,用户在进行自拍之前,用户可以由自拍列表中选择本次用于控制无人机图像采集装置进行图像采集的目标拍照姿态,当所述目标拍照姿态选定后,依据目标拍摄地点的位置坐标控制无人机的飞行轨迹,无人机自动对采预到的图像信息进行识别,提取预采集到的图像信息中用户的姿态信息,将所述姿态信息与所述目标拍照姿态进行对比,判断两者相似度是否大于设定值,如果是,控制无人机执行拍照或录像动作。由此,本申请可以依据用户需求选择任意目标拍照姿态控制无人机动作,增强了用户体验度。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据提供的附图获得其他的附图。
图1为本申请实施例提供的一种无人机拍照方法的流程示意图;
图2为本申请实施例公开的一种确定无人机拍照位置的流程示意图;
图3为本申请实施例公开的一种自拍时无人机飞行轨迹控制方式的流程示意图;
图4为本申请实施例提供的一种无人机拍照装置的结构示意图;
图5为本申请实施例提供的一种图像处理装置的结构示意图;
图6为本申请实施例提供的一种存储介质的结构示意图;
图7为本申请另一实施例提供的一种存储介质的结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
针对于现有技术中用户只能采用固定的手势或姿势控制无人机进行拍照动作的问题,本申请公开了一种无人机拍照方法和装置。
参见图1,上述无人机拍照方法的具体实现流程可以包括:
步骤S101:获取用户从姿态列表中选择的目标拍照姿态;
在本步骤中,可以预先存储一些拍照姿态,将这些拍照姿态存储在所述姿态列表中,当用户需要使用无人机自拍时,用户可以依据自身需求由姿态列表中选择一个拍照姿态作为目标拍照姿态,即通过该目标拍照姿态控制无人机进行图像采集动作。所述姿态列表中的拍照姿态可以直接联网下载得到,当然也可以通过对现有的进行照片识别得到拍照姿态,具体的:控制无人机对已有的目标图像进行图像识别,识别目标图像中的用户姿态,将该用户姿态作为拍照姿态导入姿态列表中。
步骤S102:依据目标拍摄地点控制无人机飞行轨迹;
在本步骤中,所述目标拍摄地点为用户进行自拍的地点,用户在所述目标拍摄地点摆出所述目标拍照姿态时,即可控制无人机进行图像采集动作。为了使得所述目标拍摄地点能够位于所述无人机的图像采集范围之内,在控制无人机飞行时,用户可以手动控制无人机达到拍照位置,当然也可以由无人机自动控制飞行到拍照位置,控制无人机图像采集装置正对用户,该拍照位置指的是无人机对用户进行拍照时无人机的位置。
步骤S103:获取所述图像采集装置预采集到的图像信息;
在进行图像采集之前,先进行预采集,在预采集时并不一定存储采集到的图像信息,可以不存储,也可以降采样或者压缩后存储,也可以完整的存储。
步骤S104:由预采集到的图像信息中提取所述用户的姿态信息;
在本步骤中,对预采集阶段采集到的图像信息进行处理,抓取所述图像信息中的用户,依据抓取到的用户形态确定所述预采集得到的图像信息中的用户姿态;
步骤S106:判断所述姿态信息与所述目标姿态的相似度是否大于预设值,如果是,执行步骤S107;
在本方案中,可以预先设置一个预设值,当目标姿态与预采集到的用户姿态的相似度大于该预设值时,执行步骤S107控制图像采集装置动作。
步骤S107:控制所述图像采集装置执行录像或拍照动作。
本申请上述实施例公开的技术方案中,用户在进行自拍之前,用户可以由 自拍列表中选择本次用于控制无人机图像采集装置进行图像采集的目标拍照姿态,当所述目标拍照姿态选定后,依据目标拍摄地点的位置坐标控制无人机的飞行轨迹,无人机自动对预采集到的图像信息进行识别,提取预采集到的图像信息中用户的姿态信息,将所述姿态信息与所述目标拍照姿态进行对比,判断两者相似度是否大于设定值,如果是,控制无人机执行拍照或录像动作。由此,本申请可以依据用户需求选择任意目标拍照姿态控制无人机动作,增强了用户体验度。
在本申请实施例公开的技术方案中,用户可以依据自身需求选择拍照的目标拍摄地点,例如,用户在位于所述目标拍摄地点时,无人机获取用户的坐标位置,将获取到的用户的坐标位置作为目标拍摄地点,该坐标位置可以通过手机等智能设备采集后发送给无人机,也可以由无人机的定位***自动采集。当无人机获取到所述目标拍摄地点时,用户可以依据自身需求选择拍照模式,例如半身照、全身照或者远景照,无人机依据用户选择的拍照模式确定的拍照位置,所述拍照位置为对用户拍照时无人机所处的位置,无人机控制***自动控制无人机飞行到所述拍照位置并且调节图像采集装置的采集方向,使得无人机的图像采集装置的图像采集范围覆盖所述目标拍摄地点。当无人机到达拍照位置后,控制图像采集装置对准用户后,图像采集装置进行图像预采集。在控制图像采集装置对准用户时,可以通过无人机与无人机遥控器或与无人机绑定的手机等智能设备之间的通信来控制图像采集装置的图像采集方向,使得图像采集装置的图像采集方向正对用户。
当然,除了上述方式选择所述目标拍摄地点之外,本申请实施例公开的技术方案中也可以由无人机采集到的图像信息中选择目标拍摄地点,具体的,其过程如下:
控制无人机对目标区域进行图像采集,得到绑定有深度信息的目标区域拍摄图像;
用户可以通过对目标区域拍摄图像点击的方式向无人机发送目标拍摄地点选择指令,无人机将用户点击目标区域拍摄图像中的位置作为目标拍摄地点,再通过与所述目标区域拍摄图像绑定的深度信息计算得到所述目标拍摄地点的坐标信息。
当然,用户在拍照时,还需要将拍摄地点的代表物拍摄到图像中,因此,上述方案中,上述方案中还可以对拍摄区域进行选择,此时,用户可以向无人机输入拍摄区域选择指令,用户在所述图像采集得到绑定有深度信息的目标区域拍摄图像中选择目标拍摄区域的区域范围,当所述目标拍摄区域选定后,无人机首先依据所述目标拍摄地点粗略控制飞行轨迹,使得无人机达到所述拍照位置,当无人机到达所述拍照位置后,再对预采集到的拍摄图像进行分析,所述目标拍摄区域内的景物是否已经存在于无人机预采集到的图像信息中,如果否,继续调整无人机的位置信息和/或图像采集装置的拍摄方向,最终使得预采集到的图像信息中的背景信息中包含所述目标拍摄区域的景物,将此时无人机的位置作为最终确定的拍照位置。
在本申请实施例公开的技术方案中,不同的拍照环境许可用户选择的拍照姿态不同,例如,目标拍摄地点为单杠与目标拍摄地点为平地的可选择的拍照姿势不同,因此,本申请上述实施例公开的技术方案中,当用户通过图像信息选择目标拍摄地点时,所述无人机可以自动对所述目标拍摄地点的背景信息进行图像识别,识别得到环境背景信息,依据所述环境背景信息调取可选择的拍照姿态,将选择得到的拍照姿态套入所述姿态列表中。
在实际拍照时,用户除了对拍摄地点有要求之外,还可能对拍摄角度有些特殊要求,例如,需要无人机采用俯视角度进行拍摄或控制无人机采用仰视角度进行拍摄等,因此,上述方案中除了可以选择目标拍摄地点和目标拍摄区域之外还可以选择目标拍摄角度,对此,上述方法中,在依据目标拍摄地点控制无人机飞行轨迹之前,还可以包括:
获取用户输入的拍摄角度,所述拍摄角度可以为:俯视拍摄、仰视拍摄、左侧视拍摄、右侧视拍摄等。当然,由于用户采用同一姿态拍照时,所选择的角度不同,拍摄角度变化以后,拍摄得到的效果也就不同,即,即便用户做出了所述目标拍照姿态,但由于无人机拍摄角度的不同,也会使得无人机无法识别该姿态。因此,上述方案中,为了使无人机能够响应用户在不同拍摄角度下的姿态信息,上述方案中,当用户输入拍照角度以后,还会依据用户输入的拍照角度对所述目标拍照姿态进行调整,将调整后的目标拍照姿态作为后续对比中用到的目标拍照姿态;
当用户输入了拍摄角度以后,无人机的拍照位置也需要跟随调整,对此,本申请上述实施例公开的技术方案中,所述依据所述目标拍摄地点控制无人机飞行轨迹,以使得无人机的图像采集装置的图像采集范围覆盖所述目标拍摄地点,具体为:依据所述目标拍摄地点和拍摄角度控制无人机飞行轨迹,以使得无人机的图像采集装置的图像采集范围覆盖所述目标拍摄地点,并能依据所述拍摄角度对目标拍摄地点进行图像采集。在本方案中,当无人机依据所述目标拍摄地点达到拍照位置以后,可以通过无人机与无人机遥控器或预先绑定的手机之间进行通信,然后计算得到无人机与遥控器或手机之间的方向角,依据所述拍摄角度对无人机位置进行调整,使得无人机与遥控器或手机之间的方向角与所述拍摄角度相匹配,使得无人机达到所述拍摄角度所需的拍照位置,并且用户还可以可通过遥控器或与无人机绑定的手机获取无人机的图像采集设备预采集到的图像信息,用户可通过遥控器或手机对无人机的拍照位置以及拍摄角度、图像采集装置的焦距进行微调,从而使得拍照效果达到最佳。
具体的,参见图2,上述无人机拍照位置的调整过程可以为:
步骤S201:依据目标拍摄地点的坐标位置控制无人机飞行轨迹,使得无人机与所述目标拍摄拍摄地点处于同一水平高度;
所述依据目标拍摄地点的坐标位置控制无人机飞行轨迹,使得无人机与所述目标拍摄拍摄地点处于同一水平高度,且图像采集装置正对用户,无人机的图像采集装置的图像采集范围覆盖所述目标拍摄地点;在调整无人机与目标拍摄地点处于同一水平高度时,可通过比无人机坐标与所述目标拍摄地点的坐标来控制无人机,使得无人机与所述目标拍摄地点处于同一水平高度且所述图像采集装置正对所述目标拍摄地点。当然,用户也可以先到达所述目标拍摄地点,然后无人机通过检测和调整无人机与无人机遥控器或预绑定的手机之间的方向角的方式,使得无人机与所述目标拍摄地点处于同一水平高度且无人机正对目标拍摄地点;
步骤S202:无人机自动调整与所述目标拍摄地点之间的距离,使得两者之间的距离为预设距离值;
当无人机与所述目标拍摄地点处于同一水平高度且正对所述目标拍摄位置后,无人机自动调整与所述目标拍摄地点之间的距离,使得两者之间的距离 为预设距离值,所述预设距离值的大小用户可以自行选择,例如可以通过上文中用户选择的拍照模式来确定,在计算两者之间的距离时,可以通过无人机与所述目标拍摄地点、遥控器或预绑定的手机之间的坐标距离进行计算;
步骤S203:依据拍摄角度调整无人机在垂直平面内的坐标位置;
当调整完无人机与目标拍摄地点之间的距离值以后,再依据拍摄角度调整无人机在垂直平面内的坐标位置,以使得所述无人机达到所述拍摄角度所对应的坐标位置,将该位置记为无人机的拍照位置。
在进行拍摄角度调整时,可先计算得到无人机所需移动的在垂直平面内的向量,所述向量包括无人机在垂直平面内的移动方向以及移动距离,在计算无人机所需移动的向量时,可依据无人机与所述目标拍照点之间的距离以及用户输入的拍摄角度计算得到。例如,当无人机与所述目标拍摄地点之间的距离为预设距离、用户输入的拍摄角度为仰角45度,此时需要控制无人机与所述目标拍摄地点之间在垂直平面内的角度为45度,且无人机的高度低于所述目标拍摄地点的高度,所述无人机所需移动的移动方向为垂直向下,移动距离为所述预设距离。
当然,在进行拍摄角度调整时,依据用户输入的所述拍摄角度计算得到所述无人机的运动方向,依据计算得到的运动方向控制无人机在垂直平面内移动,并实施计算无人机与所述目标拍摄地点之间的方向角,在所述无人机运动过程中实时比较所述方向角与所述拍摄角度,当两者角度相同时表明无人机达到所需的拍摄角度,保持无人机位置不动。
在本申请实施例公开的技术方案中,为了简单实现对无人机拍照位置的确定,上述方案中,当用户在某一目标拍摄地点采用无人机进行自拍时,可以先通过无人机遥控器手动控制无人机的飞行轨迹、控制无人机图像采集装置的开启与关闭,控制无人机在预先估算的拍照位置对目标拍摄地点进行图像采集,当无人机图像采集装置开启后,通过遥控器调整无人机的拍摄姿态,无人机实时记录自身的坐标位置、拍摄姿态与图像采集装置采集到的各帧图像之间的绑定关系。图像采集结束后,用户可以由拍摄得到的视频图像中选择一帧理想的图像,由该图像中选择目标拍摄地点。参见图3,此时无人机的拍照位置的确定方法,可以为:
步骤S301:获取用于选择目标拍摄地点的目标区域拍摄图像;
所述目标区域拍摄图像为用户选择目标拍摄地点时所用到的拍摄图像,例如,用户由图像A中选择了目标拍摄地点,则所述图像A即为所述目标拍摄图像;
步骤S302:获取与所述目标区域拍摄图像对应的无人机坐标位置以及拍照姿态;
在本步骤中,无人机会对图像采集装置拍摄到的每一张图像匹配一无人机坐标位置以及拍照姿态,该无人机坐标位置即为所述无人机在图像采集装置采集该图像时所述无人机的空间坐标位置,所述拍照姿态为无人机图像采集装置在采集该图像时图像采集装置的姿态信息;
步骤S303:控制无人机飞行到所述无人机坐标位置,依据所述拍摄姿态调整无人机的拍摄角度。
为了方便***能够给出适合所述目标拍摄地点的拍摄姿态,在本申请实施例公开的技术方案中,当所述目标拍摄地点确定以后,用户可以选择所述目标拍摄地点的类型,***依据所述目标拍摄地点的类型给出用户可以实现的各种拍照姿态,将这些拍照姿态导入所述姿态列表。当然,本方法也可以通过用户选择的拍照时的支撑物的方式创建姿态列表,用户可以预先针对不同的支撑物配置不同的拍摄姿态,建立拍摄姿态与这些支撑物之间的绑定关系,此时,姿态列表的生成方法包括:
选择目标拍摄地点对应的支撑物,所述支撑物为用户选择的无人机在进行图像采集时用户的着力位置的支撑物,在选择支撑物时,用户可在预设的支撑物列表中选择用户在所述目标拍摄地点拍照时用于向用户提供着力点的支撑物;
由于着力位置不同,用户可以摆出的姿态也就不同,因此,当用户选择支撑物后,需要依据所述支撑物由姿态数据库列表中调取与所述支撑物相匹配的拍照姿态;
将调取到的各个拍照姿态导入姿态列表中。
在选择支撑物时,用户可以手动输入支撑物类型,当然,如果所述目标拍 摄地点为由图片中确定的地点时,可以通过对该图像中目标拍摄地点附近景物进行图像识别的方式确定支撑物类型,对此,所述选择目标拍摄地点对应的支撑物,包括:
获取所述目标拍摄地点对应的目标区域拍摄图像;
依据所述标区域拍摄图像对所述目标拍摄地点进行图像识别,得到所述目标拍摄地点对应的物体的形状信息和/或介质信息,依据所述形状信息和/或介质信息确定所述目标拍摄地点对应的支撑物,例如,所述支撑物可以为水面、横杠、竖杠等等。其中,上述图像识别具体可以指的是:采用卷积神经网络的分类方法或者聚类分析的方法对拍摄图像进行图像处理,得到所述目标拍摄地点的背景信息,对所述背景信息进行图像识别。
在本申请实施例公开的技术方案中,为了能够使得用户对姿态列表中的各个姿态的姿态难度有所认识,依据自身的协调性选择合适难度的姿态进行拍照,上述方案中,预先对各个姿态设置一难度系数,向用户提供预存的姿态列表之前,还可以包括:
依据所述姿态列表中各个拍照姿态对应的难度系数对所述姿态列表中的各个拍照姿态进行排序。
上述方案中,为了方便用户对拍摄得到的图像信息进行归纳,上述方案中,还可以包括:
建立并存储拍摄地点、目标拍照姿态与拍照动作完成后得到的图像信息之间的映射关系。所述拍摄地点可以为无人机由地图中定位的在进行拍照动作时的地理位置。
此外,本申请还公开了一种用于对图像数据进行管理的图像数据处理方法,该方法包括:获取并存储用户上传的图像信息以及与所述图像信息相匹配的目标拍照姿态或拍摄地点和目标拍照姿态,所述图像信息指的是图片。所述拍摄地点和目标拍照的定义可以参见上文所述,本方法获取到的数据可以为通过采用上文所述的方法采集得到,即,所述用户上传的图像信息可以为采用本申请上述任意一项无人机拍照方法得到的图像信息。
为了方便向其他用户提供自己的成就,上述方案中还可以计算用户上传的 图像信息的完成度,所述完成度指代的是用户上传的图像信息中的用户姿态与对应的目标拍照姿态的相似度,相似度越高,表明用户自拍时完成的动作越标准。因此,计算并存储用户上传的图像信息的完成度信息,具体可以为:
对所述图像信息进行图像处理,提取得到所述图像信息中用户的姿态信息;
计算所述姿态信息与所述目标拍照姿态的相似度,将所述相似度作为所述图像信息的完成度;
具体的,在计算图像信息的完成度时,具体可采用计算机视觉或机器学习算法识别用户上传的图像信息中用户自拍时的身体姿态,通过将所述身体姿态与所述目标拍照姿态进行对比,计算两者的相似度,依据相似度计算结果评估用户自拍时身体姿态的完成程度信息。
除了向用户可以计算图像信息的难度完成度信息之外,上述方法还可以计算所述图像信息的姿态的难度完成系数。计算所述图像信息的姿态的难度完成系数具体过程为:
提取预设参数,依据所述预设参数计算得到所述图像信息的姿态的难度完成系数;
所述预设参数至少包括:所述目标拍照姿态对应的动作难度系数、所述图像信息对应的场景信息所对应的背景因子、服务器中统计得到的每种场景信息下完成所述目标拍照姿态的总人数的倒数。
具体的其计算过程为:对用户上传的所述图像信息进行分析,得到所述图像信息对应的场景信息;
基于公式S=A*B*C+D计算得到所述图像信息的姿态的难度完成系数S,其中,A为所述目标拍照姿态对应的动作难度系数,B为所述场景信息所对应的背景因子,对用户上传的图像进行分析,将自拍用户的身体与地面的角度、着力点面积和位置以及与身体的夹角等因素排列组合后,可得到多种不同自拍场景,每种自拍场景都对应一个背景因子;C为服务器中统计得到的每种场景信息下完成所述目标拍照姿态的总人数的倒数,D为一预设常数,将身体姿态完成难度系数的取值范围偏移到某个固定值附件。
为了向其他用户提供自己的自拍成果,或方便对自拍数据进行归纳,上述 方法还可以包括:生成上传用户在所述拍摄地点的成就数据;
所述成就数据具体所包括的内容可以依据用户需求自行设定,例如,所述成就数据包括但不限于表征:
在按照拍摄时间排序的处于所述拍摄地点且使用所述目标拍照姿态进行拍摄的所有用户中,所述上传用户的排序顺序。其具体指的是,在用户拍照之前已经有N个人在所述目标拍摄地点采用该目标拍照姿态进行了自拍并已经上传到了应用有本方法的服务器中,侧此时上传用户的排序顺序记为N+1,此时用户的成就记为:第N+1个在所述目标拍摄地点采用所述目标拍照姿态进行拍照的用户;
上传用户在所述拍摄地点进行拍照时使用的拍照姿态的总数量。其具体的指的是,当获取到用户上传的图像信息后,判断应用本方法的***中是否已经记录有所述目标拍摄地点,如果否,在***中记录该目标拍摄地点的姿态数量成就,所述姿态数量成就包括所用姿态数量值和目标拍照姿态集合。所述所用姿态数量值指的是在所述目标拍摄地点已用过几种目标在拍照姿态进行自拍,所述目标拍照姿态集合中记录有用户在所述目标拍摄地点进行自拍时所用到的所用的目标拍照姿态。当***中已经记录有所述目标拍摄地点,获取所述目标拍摄地点所对应的姿态数量成就,判断所述姿态数量成就中的目标拍照姿态集合中是否存储有用户上传的图像信息所对应的目标拍照姿态,如果存在,保持所述姿态数量成就不变,如果不存在,控制所述姿态数量值加1。
上传用户使所述目标拍照姿态完成拍照的次数。具体的,该成就为目标拍照姿态使用次数成就,其主要是记录有用户使用每种目标拍照姿态完成自拍的总次数,例如,用户总共使用目标拍照姿态A进行了N次自拍。
进一步的,为了更加直观的向其他用户提供自己的自拍成果,上述方案中,还可以包括:在所述图像信息上标注所述成就数据。在本步骤中,可以将与所述用户上传的图像信息相关联的成就数据标注在所述图像信息中,例如,其可以标注为第N次使用目标拍照姿态A进行自拍、第N+1个在所述目标拍摄地点采用所述目标拍照姿态进行拍照的用户和/或与上传图像对应的姿态数量成就等。
进一步的,上述方法还可以包括:
生成与拍照姿态对应的成就列表,所述拍照姿态对应的成就列表中记录有上传用户使各个目标拍照姿态完成拍照的次数,即,用户使用每种目标拍照姿态进行自拍的次数,例如,其可以包括:目标拍照姿态A,用户使用目标拍照姿态A进行自拍的总次数,目标拍照姿态B,用户使用目标拍照姿态B进行自拍的总次数等。
进一步的,上述方法还可以包括:
生成与拍摄地点对应的成就地图,在所述成就地图上对所述拍摄地点进行标注,具体的,在本方案中,可以通过成就地图的形式向用户提供成就内容,所述成就地图可以以电子地图的形式提供给用户,该地图上标注有用户上传的所有图像信息多对应的各个拍摄地点,用户可以通过点击该标注位置获取与该标注位置所关联的所有的成就信息,例如:当用户点击所述成就地图上标注的拍摄地点S时,即可向用户展现如下成就:用户为第N+1个在所述目标拍摄地点S采用所述目标拍照姿态进行拍照的用户、用户在所述目标拍摄地点S共自拍了X张照片、用户已经在所述目标拍摄地点使用过Y种拍照姿态进行了自拍等等。
对应于上述无人机拍照方法,本申请还公开了一种无人机拍照装置,本实施例中该装置中各个单元的具体工作内容,请参见上述无人机拍照方法实施例的内容,下面对本申请实施例提供的无人机拍照装置进行描述,下文描述的无人机拍照装置与上文描述的无人机拍照方法可相互对应参照。
参见图4,所述一种无人机拍照装置,可以包括:
处理器100,其与上文方法中步骤S101相对应,用于向用户提供预存的姿态列表,获取用户从姿态列表中选择的目标拍照姿态;
飞行轨迹控制单元200,其与上文方法中步骤S102相对应,用于依据目标拍摄地点控制无人机飞行轨迹;
图像处理单元300,其与上文方法中步骤S103-S107相对应,用于获取图像采集装置预采集到的图像信息,由预采集到的图像信息中提取所述用户的姿态信息,将所述姿态信息与所述目标拍照姿态进行对比,当姿态信息与所述目标拍照姿态的相似度大于预设值时,控制所述图像采集装置执行录像或拍照动作。
与上述方法相对应,上述装置还可以还包括:
拍摄地点选择单元,用于调取无人机对目标区域进行拍摄得到的目标区域拍摄图像,依据用户指令由所述目标区域拍摄图像中选择的目标拍摄地点。
与上述方法相对应,上述装置中,所述处理器还用于:对目标拍摄地点进行图像分析,调取并向用户提供与所述目标拍摄地点相对应的预存的姿态列表。
与上述方法相对应,上述装置中,所述飞行轨迹控制单元具体用于:依据目标拍摄地点的坐标位置控制无人机飞行轨迹,以使得无人机的图像采集装置的图像采集范围覆盖目标拍摄地点。
与上述方法相对应,上述装置中,所述处理器还用于:获取用户输入的拍摄角度;
所述飞行轨迹控制单元,具体用于:
依据所述目标拍摄地点和拍摄角度控制无人机飞行轨迹,以使得无人机的图像采集装置的图像采集范围覆盖所述目标拍摄地点,并能依据所述拍摄角度对目标拍摄地点进行图像采集。
与上述方法相对应,上述装置中,所述飞行轨迹控制单元,具体用于:
所述依据目标拍摄地点的坐标位置控制无人机飞行轨迹,以使得无人机的图像采集装置的图像采集范围覆盖所述目标拍摄地点;
调整无人机的位置使用户与无人机之间的水平距离为预设距离值;
依据拍摄角度调整所述无人机在垂直平面内的位置,使得所述无人机能依据所述拍摄角度对目标拍摄地点进行图像采集。
与上述方法相对应,上述装置中,所述飞行轨迹控制单元,具体用于:
获取用于选择目标拍摄地点所对应的目标区域拍摄图像;
获取与所述目标区域拍摄图像对应的无人机坐标位置;
控制无人机飞行到所述无人机坐标位置。
与上述方法相对应,上述装置中,所述处理器,具体用于:
选择目标拍摄地点对应的支撑物,所述支撑物为用户选择的无人机在进行图像采集时用户的着力位置的支撑物;
依据所述支撑物由姿态数据库列表中调取与所述支撑物相匹配的拍照姿 态;
将调取到的各个拍照姿态导入姿态列表中;
用于向用户提供预存的姿态列表;
获取用户由姿态列表中选择的目标拍照姿态。
与上述方法相对应,上述装置中,所述处理器选择目标拍摄地点对应的支撑物时,具体用于:
获取所述目标拍摄地点对应的标区域拍摄图像;依据所述标区域拍摄图像对所述目标拍摄地点进行图像识别,得到所述目标拍摄地点对应的物体的形状信息和/或介质信息,依据所述形状信息和/或介质信息确定所述目标拍摄地点对应的支撑物。
与上述方法相对应,上述装置中,所述处理器依据所述标区域拍摄图像对所述目标拍摄地点进行图像识别时,具体用于:
对拍摄图像进行图像处理,得到所述目标拍摄地点的背景信息,对所述背景信息进行图像识别。
与上述方法相对应,上述装置中,所述处理器内,还包括:
排序单元,用于依据所述姿态列表中各个拍照姿态对应的难度系数对所述姿态列表中的各个拍照姿态进行排序。
与上述方法相对应,上述装置中,还包括:
映射关系存储单元,用于建立并存储拍摄地点、目标拍照姿态与拍照动作完成后得到的图像信息之间的映射关系。
对应于上述图像处理方法,本申请还公开公开了一种图像数据处理装置,参见图5该装置可以包括:
数据采集单元400,用于获取并存储用户上传的图像信息以及与所述图像信息相匹配的目标拍照姿态。用户上传的图像信息可以为采用上述任意一项实施例所述的无人机拍照装置采集到的图像信息。
完成度计算单元500,用于依据所述图像信息中的用户姿态和所述目标拍照姿态计算并存储用户上传的图像信息的完成度信息。
与上述方法相对应,所述图像数据处理装置中,所述完成度计算单元具体用于:
对所述图像信息进行图像处理,提取得到所述图像信息中用户的姿态信息;计算所述姿态信息与所述目标拍照姿态的相似度,将所述相似度作为所述图像信息的完成度。
与上述方法相对应,所述图像数据处理装置,还包括:
难度系数计算单元,用于计算所述图像信息的姿态的难度完成系数。
与上述方法相对应,所述难度系数计算单元,具体用于:
提取预设参数,依据所述预设参数计算得到所述图像信息的姿态的难度完成系数;
所述预设参数至少包括:所述目标拍照姿态对应的动作难度系数、所述图像信息对应的场景信息所对应的背景因子、服务器中统计得到的每种场景信息下完成所述目标拍照姿态的总人数的倒数。
与上述方法相对应,所述数据采集单元还用于获取与用户上传的图像信息相匹配的拍摄地点;所述图像数据处理装置,还包括:
成就单元,用于生成上传用户在所述拍摄地点的成就数据;
所述成就数据包括但不限于表征:
在按照拍摄时间排序的处于所述拍摄地点且使用所述目标拍照姿态进行拍摄的所有用户中,所述上传用户的排序顺序;
上传用户在所述拍摄地点处经完成的拍照姿态总数量;
上传用户使所述目标拍照姿态完成拍照的次数。
与上述方法相对应,所述成就单元还用于,用于在所述图像信息上标注所述成就数据。
与上述方法相对应,所述成就单元还用于,生成与拍照姿态对应的成就列表,所述拍照姿态对应的成就列表中记录有上传用户使各个目标拍照姿态完成拍照的次数。
与上述方法相对应,所述成就单元还用于,判断成就地图中是否已经对所述拍摄地点进行了标注,如果否,在所述成就地图的拍摄点处设置标注并将于所述拍摄地点相关联的成就数据与所述标注相关联。
对应于上述无人机拍照方法本申请还公开了一种存储介质,参见图6,该存储介质可以包括:
第一存储器11和无人机处理器12;
所述存储介质还包括通信接口13以及通信总线14,其中,第一存储器11、无人机处理器12以及通信接口13通信均通过通信总线14实现相互间的通信。
所述第一存储器11用于存储程序代码;所述程序代码包括计算机操作指令。
第一存储器11可能包含高速RAM存储器,也可能还包括非易失性存储器(non-volatile memory),例如至少一个磁盘存储器。
所述无人机处理器12可以是一个中央处理器CPU,或者是特定集成电路ASIC(Application Specific Integrated Circuit),或者是被配置成实施本发明实施例的一个或多个集成电路。所述无人机处理器12用于调用所述程序代码,当所述程序代码被执行时,用于执行本申请上述任意一项无人机拍照方法。
例如,其可以用于执行以下操作:
获取用户从姿态列表中选择的目标拍照姿态;
依据获取的用户指定目标拍摄地点控制无人机飞行轨迹;
获取所述图像采集装置预采集到的图像信息;
从预采集到的图像信息中提取所述用户的姿态信息;
将所述姿态信息与所述目标拍照姿态进行对比,当姿态信息与所述目标拍照姿态的相似度大于预设值时,控制所述图像采集装置执行录像或拍照动作。
当然,当所述程序代码被执行时所述无人机处理器还可以执行本申请上述任意一项无人机拍照方法。
对应于上述图像数据处理方法本申请还公开了一种存储介质,参见图7,该存储介质可以包括:
第二存储器21和服务器处理器22;
所述存储介质还包括通信接口23以及通信总线24,其中,第二存储器21、服务器处理器22以及通信接口23通信均通过通信总线24实现相互间的通信。
所述第二存储器21用于存储程序代码;所述程序代码包括计算机操作指令。
第二存储器21可能包含高速RAM存储器,也可能还包括非易失性存储器(non-volatile memory),例如至少一个磁盘存储器。
所述服务器处理器22可以是一个中央处理器CPU,或者是特定集成电路ASIC(Application Specific Integrated Circuit),或者是被配置成实施本发明实施例的一个或多个集成电路。所述服务器处理器22用于调用所述程序代码,当所述程序代码被执行时,用于执行本申请上述任意一项图像数据处理方法。
例如,其可以用于执行以下操作:
获取并存储用户上传的图像信息以及与所述图像信息相匹配的目标拍照姿态;
依据所述图像信息中的用户姿态和所述目标拍照姿态计算并存储用户上传的图像信息的完成度信息。
最后,还需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
本说明书中各个实施例采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似部分互相参见即可,在不冲突的情况下,对公开的实施例中的特征可以任意组合。
对所公开的实施例的上述说明,使本领域专业技术人员能够实现或使用本申请。对这些实施例的多种修改对本领域的专业技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本申请的精神或范围的情况下,在其它实施例中实现。因此,本申请将不会被限制于本文所示的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。

Claims (44)

  1. A drone photographing method, comprising:
    acquiring a target photographing posture selected by a user from a posture list;
    controlling a flight trajectory of a drone according to an acquired user-specified target shooting location;
    acquiring image information pre-acquired by the image acquisition device;
    extracting posture information of the user from the pre-acquired image information;
    comparing the posture information with the target photographing posture, and when a similarity between the posture information and the target photographing posture is greater than a preset value, controlling the image acquisition device to perform a recording or photographing action.
  2. The drone photographing method according to claim 1, wherein before acquiring the target photographing posture selected by the user from the posture list, the method further comprises:
    acquiring a captured image of a target area;
    selecting the target shooting location from the captured image of the target area according to a user instruction.
  3. The drone photographing method according to claim 1, wherein before acquiring the target photographing posture selected by the user from the posture list, the method further comprises:
    performing image analysis on the target shooting location, and retrieving and providing to the user a pre-stored posture list corresponding to the target shooting location.
  4. The drone photographing method according to claim 1, wherein controlling the flight trajectory of the drone according to the target shooting location specifically comprises:
    controlling the flight trajectory of the drone according to a coordinate position of the target shooting location, so that an image acquisition range of the image acquisition device of the drone covers the target shooting location.
  5. The drone photographing method according to claim 1, wherein before controlling the flight trajectory of the drone according to the target shooting location, the method further comprises:
    acquiring a shooting angle input by the user;
    wherein controlling the flight trajectory of the drone according to the target shooting location so that the image acquisition range of the image acquisition device of the drone covers the target shooting location specifically comprises:
    controlling the flight trajectory of the drone according to the target shooting location and the shooting angle, so that the image acquisition range of the image acquisition device of the drone covers the target shooting location and image acquisition of the target shooting location can be performed according to the shooting angle.
  6. The drone photographing method according to claim 5, wherein controlling the flight trajectory of the drone according to the target shooting location and the shooting angle, so that the image acquisition range of the image acquisition device of the drone covers the target shooting location and image acquisition of the target shooting location can be performed according to the shooting angle, specifically comprises:
    controlling the flight trajectory of the drone according to the coordinate position of the target shooting location, so that the image acquisition range of the image acquisition device of the drone covers the target shooting location;
    adjusting the position of the drone so that a horizontal distance between the user and the drone is a preset distance value;
    adjusting the position of the drone in a vertical plane according to the shooting angle, so that the drone can perform image acquisition of the target shooting location according to the shooting angle.
  7. The drone photographing method according to claim 2, wherein controlling the flight trajectory of the drone according to the target shooting location specifically comprises:
    acquiring the captured image of the target area used for selecting the target shooting location;
    acquiring a drone coordinate position corresponding to the captured image of the target area;
    controlling the drone to fly to the drone coordinate position.
  8. The drone photographing method according to claim 1, wherein providing the pre-stored posture list to the user comprises:
    selecting a support corresponding to the target shooting location, the support being a support, selected by the user, for the position on which the user bears while the drone performs image acquisition;
    retrieving, from a posture database list according to the support, photographing postures matching the support;
    importing each of the retrieved photographing postures into the posture list.
  9. The drone photographing method according to claim 8, wherein selecting the support corresponding to the target shooting location comprises:
    acquiring the captured image of the target area corresponding to the target shooting location; performing image recognition on the target shooting location according to the captured image of the target area to obtain shape information and/or medium information of an object corresponding to the target shooting location; and determining the support corresponding to the target shooting location according to the shape information and/or the medium information.
  10. The drone photographing method according to claim 9, wherein performing image recognition on the target shooting location according to the captured image of the target area specifically comprises:
    performing image processing on the captured image to obtain background information of the target shooting location, and performing image recognition on the background information.
  11. The drone photographing method according to claim 1, wherein before providing the pre-stored posture list to the user, the method further comprises:
    sorting the photographing postures in the posture list according to a difficulty coefficient corresponding to each photographing posture in the posture list.
  12. The drone photographing method according to any one of claims 1-11, wherein after controlling the image acquisition device to perform the photographing action when the similarity between the posture information and the target photographing posture is greater than the preset value, the method further comprises:
    establishing and storing a mapping relationship among the shooting location, the target photographing posture, and the image information obtained after the photographing action is completed.
  13. An image data processing method, comprising: acquiring and storing image information uploaded by a user and a target photographing posture matching the image information;
    calculating and storing completion degree information of the image information uploaded by the user according to a user posture in the image information and the target photographing posture.
  14. The image data processing method according to claim 13, wherein calculating and storing the completion degree information of the image information uploaded by the user specifically comprises:
    performing image processing on the image information to extract posture information of the user in the image information;
    calculating a similarity between the posture information and the target photographing posture, and taking the similarity as the completion degree of the image information.
  15. The image data processing method according to claim 13, further comprising:
    calculating a difficulty completion coefficient of the posture in the image information.
  16. The image data processing method according to claim 15, wherein calculating the difficulty completion coefficient of the posture in the image information specifically comprises:
    extracting preset parameters, and calculating the difficulty completion coefficient of the posture in the image information according to the preset parameters;
    the preset parameters including at least: an action difficulty coefficient corresponding to the target photographing posture, a background factor corresponding to scene information of the image information, and a reciprocal of the total number of people, counted by the server, who have completed the target photographing posture under each type of scene information.
  17. The image data processing method according to claim 13, further comprising:
    acquiring a shooting location matching the image information uploaded by the user;
    generating achievement data of the uploading user at the shooting location;
    the achievement data including, but not limited to, data characterizing:
    the rank of the uploading user among all users who, ordered by shooting time, photographed at the shooting location using the target photographing posture;
    the total number of photographing postures used by the uploading user when photographing at the shooting location;
    the number of times the uploading user has completed photographing using the target photographing posture.
  18. The image data processing method according to claim 17, further comprising: annotating the achievement data on the image information.
  19. The image data processing method according to claim 17, further comprising:
    generating an achievement list corresponding to photographing postures, the achievement list recording the number of times the uploading user has completed photographing with each target photographing posture.
  20. The image data processing method according to claim 17, further comprising:
    determining whether the shooting location has already been marked on an achievement map, and if not, placing a marker at the shooting location on the achievement map and associating the achievement data related to the shooting location with the marker.
  21. The image data processing method according to claim 13, wherein the image information uploaded by the user is image information obtained by the drone photographing method according to any one of claims 1-12.
  22. A drone photographing apparatus, comprising:
    a processor configured to acquire a target photographing posture selected by a user from a posture list;
    a flight trajectory control unit configured to control a flight trajectory of a drone according to a target shooting location;
    an image processing unit configured to acquire image information pre-acquired by an image acquisition device, extract posture information of the user from the pre-acquired image information, compare the posture information with the target photographing posture, and, when a similarity between the posture information and the target photographing posture is greater than a preset value, control the image acquisition device to perform a recording or photographing action.
  23. The drone photographing apparatus according to claim 22, further comprising:
    a shooting location selection unit configured to retrieve a captured image of a target area and select the target shooting location from the captured image of the target area according to a user instruction.
  24. The drone photographing apparatus according to claim 22, wherein the processor is further configured to: perform image analysis on the target shooting location, and retrieve and provide to the user a pre-stored posture list corresponding to the target shooting location.
  25. The drone photographing apparatus according to claim 22, wherein the flight trajectory control unit is specifically configured to: control the flight trajectory of the drone according to a coordinate position of the target shooting location, so that an image acquisition range of the image acquisition device of the drone covers the target shooting location.
  26. The drone photographing apparatus according to claim 22, wherein the processor is further configured to: acquire a shooting angle input by the user;
    the flight trajectory control unit being specifically configured to:
    control the flight trajectory of the drone according to the target shooting location and the shooting angle, so that the image acquisition range of the image acquisition device of the drone covers the target shooting location and image acquisition of the target shooting location can be performed according to the shooting angle.
  27. The drone photographing apparatus according to claim 26, wherein the flight trajectory control unit is specifically configured to:
    control the flight trajectory of the drone according to the coordinate position of the target shooting location, so that the image acquisition range of the image acquisition device of the drone covers the target shooting location;
    adjust the position of the drone so that a horizontal distance between the user and the drone is a preset distance value;
    adjust the position of the drone in a vertical plane according to the shooting angle, so that the drone can perform image acquisition of the target shooting location according to the shooting angle.
  28. The drone photographing apparatus according to claim 22, wherein the flight trajectory control unit is specifically configured to:
    acquire the captured image of the target area used for selecting the target shooting location;
    acquire a drone coordinate position corresponding to the captured image of the target area;
    control the drone to fly to the drone coordinate position.
  29. The drone photographing apparatus according to claim 22, wherein the processor is specifically configured to:
    select a support corresponding to the target shooting location, the support being a support, selected by the user, for the position on which the user bears while the drone performs image acquisition;
    retrieve, from a posture database list according to the support, photographing postures matching the support;
    import each of the retrieved photographing postures into a posture list;
    provide the pre-stored posture list to the user; and
    acquire the target photographing posture selected by the user from the posture list.
  30. The drone photographing apparatus according to claim 29, wherein, when selecting the support corresponding to the target shooting location, the processor is specifically configured to:
    acquire the captured image of the target area corresponding to the target shooting location; perform image recognition on the target shooting location according to the captured image of the target area to obtain shape information and/or medium information of an object corresponding to the target shooting location; and determine the support corresponding to the target shooting location according to the shape information and/or the medium information.
  31. The drone photographing apparatus according to claim 30, wherein, when performing image recognition on the target shooting location according to the captured image of the target area, the processor is specifically configured to:
    perform image processing on the captured image to obtain background information of the target shooting location, and perform image recognition on the background information.
  32. The drone photographing apparatus according to claim 22, wherein the processor further includes:
    a sorting unit configured to sort the photographing postures in the posture list according to a difficulty coefficient corresponding to each photographing posture in the posture list.
  33. The drone photographing apparatus according to any one of claims 22-32, further comprising:
    a mapping relationship storage unit configured to establish and store a mapping relationship among the shooting location, the target photographing posture, and the image information obtained after the photographing action is completed.
  34. An image data processing apparatus, comprising:
    a data acquisition unit configured to acquire and store image information uploaded by a user and a target photographing posture matching the image information;
    a completion degree calculation unit configured to calculate and store completion degree information of the image information uploaded by the user according to a user posture in the image information and the target photographing posture.
  35. The image data processing apparatus according to claim 34, wherein the completion degree calculation unit is specifically configured to:
    perform image processing on the image information to extract posture information of the user in the image information; calculate a similarity between the posture information and the target photographing posture, and take the similarity as the completion degree of the image information.
  36. The image data processing apparatus according to claim 34, further comprising:
    a difficulty coefficient calculation unit configured to calculate a difficulty completion coefficient of the posture in the image information.
  37. The image data processing apparatus according to claim 34, wherein the difficulty coefficient calculation unit is specifically configured to:
    extract preset parameters, and calculate the difficulty completion coefficient of the posture in the image information according to the preset parameters;
    the preset parameters including at least: an action difficulty coefficient corresponding to the target photographing posture, a background factor corresponding to scene information of the image information, and a reciprocal of the total number of people, counted by the server, who have completed the target photographing posture under each type of scene information.
  38. The image data processing apparatus according to claim 34, wherein:
    the data acquisition unit is further configured to acquire a shooting location matching the image information uploaded by the user;
    the apparatus further comprises an achievement unit configured to generate achievement data of the uploading user at the shooting location;
    the achievement data including, but not limited to, data characterizing:
    the rank of the uploading user among all users who, ordered by shooting time, photographed at the shooting location using the target photographing posture;
    the total number of photographing postures the uploading user has completed at the shooting location;
    the number of times the uploading user has completed photographing using the target photographing posture.
  39. The image data processing apparatus according to claim 38, wherein the achievement unit is further configured to annotate the achievement data on the image information.
  40. The image data processing apparatus according to claim 38, wherein the achievement unit is further configured to generate an achievement list corresponding to photographing postures, the achievement list recording the number of times the uploading user has completed photographing with each target photographing posture.
  41. The image data processing apparatus according to claim 38, wherein the achievement unit is further configured to determine whether the shooting location has already been marked on an achievement map, and if not, to place a marker at the shooting location on the achievement map and associate the achievement data related to the shooting location with the marker.
  42. The image data processing apparatus according to claim 34, wherein the image information uploaded by the user is image information acquired by the drone photographing apparatus according to any one of claims 22-33.
  43. A storage medium, comprising:
    a first memory and a drone processor;
    the first memory being configured to store program code, and the drone processor being configured to call the program code, the program code, when executed, being used to perform the following operations:
    acquiring a target photographing posture selected by a user from a posture list;
    controlling a flight trajectory of a drone according to an acquired user-specified target shooting location;
    acquiring image information pre-acquired by the image acquisition device;
    extracting posture information of the user from the pre-acquired image information;
    comparing the posture information with the target photographing posture, and when a similarity between the posture information and the target photographing posture is greater than a preset value, controlling the image acquisition device to perform a recording or photographing action.
  44. A storage medium, comprising:
    a second memory and a server processor;
    the second memory being configured to store program code, and the server processor being configured to call the program code, the program code, when executed, being used to perform the following operations:
    acquiring and storing image information uploaded by a user and a target photographing posture matching the image information;
    calculating and storing completion degree information of the image information uploaded by the user according to a user posture in the image information and the target photographing posture.
PCT/CN2017/119916 2017-12-29 2017-12-29 Drone photographing method, image processing method and apparatus WO2019127395A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/119916 WO2019127395A1 (zh) 2017-12-29 2017-12-29 Drone photographing method, image processing method and apparatus
CN201780031839.3A CN110192168B (zh) 2017-12-29 2017-12-29 Drone photographing method, image processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/119916 WO2019127395A1 (zh) 2017-12-29 2017-12-29 Drone photographing method, image processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2019127395A1 true WO2019127395A1 (zh) 2019-07-04

Family

ID=67064434

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/119916 WO2019127395A1 (zh) 2017-12-29 2017-12-29 Drone photographing method, image processing method and apparatus

Country Status (2)

Country Link
CN (1) CN110192168B (zh)
WO (1) WO2019127395A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110971824A (zh) * 2019-12-04 2020-04-07 深圳市凯达尔科技实业有限公司 Drone shooting control method
CN113361300A (zh) * 2020-03-04 2021-09-07 阿里巴巴集团控股有限公司 Identification information recognition method, apparatus, device and storage medium
CN112771465A (zh) * 2020-04-27 2021-05-07 深圳市大疆创新科技有限公司 Drone control method, system, apparatus and storage medium
CN111595303A (zh) * 2020-07-03 2020-08-28 成都微宇科技有限责任公司 Method for screening aerial photographs
CN112001419A (zh) * 2020-07-22 2020-11-27 李峰 Anti-counterfeiting identification method and apparatus
CN111967388B (zh) * 2020-08-18 2024-02-09 山东泰和建设管理有限公司 Drone-based supervision and inspection method and system
CN114374815B (zh) * 2020-10-15 2023-04-11 北京字节跳动网络技术有限公司 Image acquisition method, apparatus, terminal and storage medium
CN113537198B (zh) * 2021-07-31 2023-09-01 北京晟天行科技有限公司 Control method for automatic photographing during drone image acquisition
CN115331174B (zh) * 2022-08-19 2023-06-13 中国安全生产科学研究院 Intelligent supervision system and method for enterprise work-safety standardization

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201339903A (zh) 2012-03-26 2013-10-01 Hon Hai Prec Ind Co Ltd Unmanned aerial vehicle control system and method
CN104125396B (zh) * 2014-06-24 2018-06-08 小米科技有限责任公司 Image shooting method and apparatus
CN107172360A (zh) * 2017-07-06 2017-09-15 杨顺伟 Drone follow-shooting method and apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150117708A1 (en) * 2012-06-25 2015-04-30 Softkinetic Software Three Dimensional Close Interactions
CN105991924A (zh) * 2015-03-04 2016-10-05 珠海金山办公软件有限公司 Shooting assistance method and apparatus
US20160364004A1 (en) * 2015-06-11 2016-12-15 Intel Corporation Drone controlling device and method
CN105138126A (zh) * 2015-08-26 2015-12-09 小米科技有限责任公司 Shooting control method and apparatus for a drone, and electronic device
CN105512643A (zh) * 2016-01-06 2016-04-20 北京二郎神科技有限公司 Image acquisition method and apparatus

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112313650A (zh) * 2019-10-30 2021-02-02 深圳市大疆创新科技有限公司 User permission definition method, mobile terminal and computer-readable storage medium
CN112507768A (zh) * 2020-04-16 2021-03-16 苏州极目机器人科技有限公司 Target object detection method and apparatus, and image acquisition method and apparatus
CN111666431A (zh) * 2020-07-01 2020-09-15 成都新潮传媒集团有限公司 Method and apparatus for sorting an advertisement playlist
CN111666431B (zh) * 2020-07-01 2024-05-28 成都屏盟科技有限公司 Method and apparatus for sorting an advertisement playlist
CN112017119A (zh) * 2020-09-04 2020-12-01 江门市低空遥感科技有限公司 Single-camera stitched image data acquisition method
CN112702532A (zh) * 2020-12-29 2021-04-23 佛山科学技术学院 Control method and apparatus for autonomous image acquisition by an unmanned vehicle
CN114302200A (zh) * 2021-06-28 2022-04-08 海信视像科技股份有限公司 Display device and photographing method triggered by user posture
CN116109956A (zh) * 2023-04-12 2023-05-12 安徽省空安信息技术有限公司 Intelligent inspection method for drone adaptive-zoom high-precision target detection

Also Published As

Publication number Publication date
CN110192168B (zh) 2022-06-10
CN110192168A (zh) 2019-08-30

Similar Documents

Publication Publication Date Title
WO2019127395A1 Drone photographing method, image processing method and apparatus
CN108702444B Image processing method, drone and system
CN108229369B Image shooting method and apparatus, storage medium and electronic device
WO2019137131A1 Image processing method and apparatus, storage medium and electronic device
CN108702448B Drone image acquisition method, drone, and computer-readable storage medium
CN109241820B Autonomous drone shooting method based on spatial exploration
CN103916587B Photographing apparatus for generating composite images and method of using the same
CN101706793B Method and apparatus for searching for pictures
WO2020151750A1 Image processing method and apparatus
EP3182202B1 (en) Selfie-drone system and performing method thereof
CN106131413B Control method of a photographing device, and photographing device
CN110799921A Photographing method and apparatus, and drone
KR102407190B1 Image capturing apparatus and operating method thereof
CN109040474B Photo display method, apparatus, terminal and storage medium
JP2017531950A Method and apparatus for building a photographing template database and providing photographing recommendation information
WO2014194676A1 Photographing method, photo management method and device
TW201011696A (en) Information registering device for detection, target sensing device, electronic equipment, control method of information registering device for detection, control method of target sensing device, information registering device for detection control progr
US20180359411A1 (en) Cameras with autonomous adjustment and learning functions, and associated systems and methods
CN110493517A Auxiliary photographing method of an image capture apparatus, and image capture apparatus
CN112702521B Image shooting method and apparatus, electronic device, and computer-readable storage medium
KR20090087670A System and method for automatically extracting photographing information
US9400924B2 (en) Object recognition method and object recognition apparatus using the same
WO2021168804A1 Image processing method, image processing apparatus and image processing system
CN112464012B Automatic scenic-spot photographing system capable of automatically screening photos, and automatic scenic-spot photographing method
CN108600610A Photographing assistance method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17936265

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17936265

Country of ref document: EP

Kind code of ref document: A1