WO2021237616A1 - Image transmission method, movable platform, and computer-readable storage medium - Google Patents

Image transmission method, movable platform, and computer-readable storage medium

Info

Publication number
WO2021237616A1
Authority
WO
WIPO (PCT)
Prior art keywords: image, threshold, frame rate, target, rate
Application number
PCT/CN2020/093035
Other languages
English (en)
French (fr)
Inventor
饶雄斌
赵亮
陈颖
Original Assignee
深圳市大疆创新科技有限公司
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/093035
Priority to CN202080005966.8A (CN113056904A)
Publication of WO2021237616A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/933 Lidar systems specially adapted for specific applications for anti-collision purposes of aircraft or spacecraft
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof

Definitions

  • This application relates to the field of image transmission technology, and in particular to an image transmission method, a movable platform, and a computer-readable storage medium.
  • the user can manually select the frame rate through the control terminal; however, the user's subjective selection of the frame rate may lead to problems of delayed adjustment or low accuracy.
  • the embodiments of the present application provide an image transmission method, a movable platform, and a computer-readable storage medium, which can improve the timeliness and accuracy of target frame rate determination.
  • an embodiment of the present application provides an image transmission method, the method is applied to a movable platform, the movable platform includes an image acquisition device, the movable platform is communicatively connected with a control terminal, and the method includes:
  • acquiring the movement characteristics of the target pixel in the image collected by the image acquisition device, and/or acquiring the movement state information of the movable platform;
  • determining, according to the movement characteristics of the target pixel in the image and/or the movement state information of the movable platform, a target frame rate for image transmission from the movable platform to the control terminal.
  • an embodiment of the present application also provides a movable platform, the movable platform is in communication connection with a control terminal, and the movable platform includes:
  • an image acquisition device for acquiring images;
  • a memory for storing a computer program; and
  • a processor for calling the computer program in the memory to execute:
  • acquiring the movement characteristics of the target pixel in the image collected by the image acquisition device, and/or acquiring the movement state information of the movable platform;
  • determining, according to the movement characteristics of the target pixel in the image and/or the movement state information of the movable platform, a target frame rate for image transmission from the movable platform to the control terminal.
  • the embodiments of the present application also provide a computer-readable storage medium, where the computer-readable storage medium is used to store a computer program, and the computer program is loaded by a processor to execute any of the image transmission methods provided by the embodiments of the present application.
  • the embodiments of the application can obtain the movement characteristics of the target pixel in the image collected by the image acquisition device and/or obtain the movement state information of the movable platform, and determine the target frame rate for image transmission from the movable platform to the control terminal according to the movement characteristics of the target pixel in the image and/or the movement state information of the movable platform.
  • This solution can automatically determine the target frame rate without the user's manual selection, which improves the timeliness and accuracy of determining the target frame rate.
  • Fig. 1 is a schematic diagram of an application scenario of an image transmission method provided by an embodiment of the present application
  • Figure 2 is a schematic diagram of interaction between a drone and a control terminal provided by an embodiment of the present application
  • FIG. 3 is a schematic flowchart of an image transmission method provided by an embodiment of the present application.
  • Fig. 4 is a schematic diagram of determining multiple moving targets from a first image and a second image provided by an embodiment of the present application
  • Fig. 5 is a schematic structural diagram of a movable platform provided by an embodiment of the present application.
  • the embodiments of the present application provide an image transmission method, a movable platform, and a computer-readable storage medium, which determine the target frame rate for image transmission from the movable platform to the control terminal based on the movement characteristics of the target pixel in the image collected by the image acquisition device and/or the movement state information of the movable platform, thereby improving the timeliness and accuracy of target frame rate determination.
  • the movable platform may include a pan/tilt, a platform body, an image acquisition device, etc.
  • the platform body may be used to carry a pan/tilt, and the pan/tilt may carry an image acquisition device, so that the pan/tilt can drive the image acquisition device to move.
  • there may be one or more image capture devices.
  • the types of the movable platform and the image acquisition device can be flexibly set according to actual needs, and the specific content is not limited here.
  • the image acquisition device may be a camera or a vision sensor, etc.
  • the movable platform may be a mobile terminal, a drone, a robot, or a pan-tilt camera, etc.
  • the pan-tilt camera may include a camera, a pan-tilt, etc.
  • the pan-tilt may include a pivot arm, etc., and the pivot arm can drive the camera to move, for example, the pivot arm can control the camera to move to a suitable position so that the camera can collect a desired image.
  • the camera may be a monocular camera, and the type of the camera may be an ultra-wide-angle camera, a wide-angle camera, a telephoto camera (that is, a zoom camera), an infrared camera, a far-infrared camera, an ultraviolet camera, or a time-of-flight (TOF) depth camera, etc.
  • the drone may include a camera, a distance measuring device, an obstacle sensing device, and so on.
  • the unmanned aerial vehicle may also include a pan-tilt for carrying a camera, and the pan-tilt can drive the camera to a suitable position so as to collect the required images through the camera.
  • the drone may be a rotary-wing drone (such as a quad-rotor drone, a hexa-rotor drone, or an eight-rotor drone), a fixed-wing drone, or a combination of a rotary-wing drone and a fixed-wing drone, which is not limited here.
  • the movable platform may also be provided with a positioning device such as the Global Positioning System (GPS) for accurately positioning the movable platform.
  • the positional relationship between the camera and the positioning device can be such that they are on the same plane; in this plane, the camera and the positioning device can be on the same straight line or form a preset angle, etc.; of course, the camera and the positioning device can also be located on different planes.
  • FIG. 1 is a schematic diagram of a scene for implementing the image transmission method provided by the embodiment of the present application.
  • the control terminal 100 is communicatively connected with a drone 200; the control terminal 100 can be used to control the drone 200 to fly or perform corresponding actions, and to obtain corresponding motion information from the drone 200.
  • the motion information may include flight direction, flight attitude, flight height, flight speed and position information, etc.
  • the acquired motion information is sent to the control terminal 100, and the control terminal 100 analyzes and displays it.
  • the control terminal 100 can also receive control instructions input by the user, and control the distance measuring device or camera on the UAV 200 accordingly based on the control instructions.
  • the control terminal 100 may receive a shooting instruction or a distance measurement instruction input by a user and send the instruction to the drone 200; the drone 200 can control the camera to capture an image according to the shooting instruction, or control the distance measuring device to measure the distance to a target according to the distance measurement instruction.
  • the type of the control terminal 100 can be flexibly set according to actual needs, and the specific content is not limited here.
  • the control terminal 100 may be a remote control device provided with a display, control buttons, etc., for establishing a communication connection with the drone 200 and controlling the drone 200, and the display may be used for displaying images or videos.
  • the control terminal 100 may also be a third-party mobile phone or tablet computer, etc., and establish a communication connection with the drone 200 through a preset protocol, and control the drone 200.
  • the obstacle sensing device of the drone 200 can obtain sensing signals around the drone 200; by analyzing the sensing signals, obstacle information can be obtained and shown on the display, so that the user can learn about the obstacles perceived by the drone 200, which makes it convenient for the user to control the drone 200 to avoid the obstacles.
  • the display may be a liquid crystal display, or a touch screen, etc.
  • the obstacle sensing device may include at least one sensor for acquiring sensing signals from the drone 200 in at least one direction.
  • the obstacle sensing device may include a sensor for detecting obstacles in front of the drone 200.
  • the obstacle sensing device may include two sensors for detecting obstacles in front of and behind the drone 200, respectively.
  • the obstacle sensing device may include four sensors for detecting obstacles in the front, rear, left, and right of the drone 200, respectively.
  • the obstacle sensing device may include five sensors, which are used to detect obstacles in the front, rear, left, right, and above of the drone 200, respectively.
  • the obstacle sensing device may include six sensors for detecting obstacles in front, rear, left, right, above, and below the drone 200, respectively.
  • the sensors in the obstacle sensing device can be implemented separately or integrated.
  • the detection direction of the sensor can be set according to specific needs to detect obstacles in various directions or combinations of directions, and is not limited to the above-mentioned forms disclosed in this application.
  • the drone 200 may have multiple rotors.
  • the rotor may be connected to the body of the drone 200, and the body may include a control unit, an inertial measurement unit (IMU), a processor, a battery, a power supply, and/or other sensors.
  • the rotor can be connected to the body by one or more arms or extensions branching from the central part of the body.
  • one or more arms may extend radially from the central body of the drone 200, and may have a rotor at or near the end of the arm.
  • Figure 2 is a schematic diagram of the interaction between the drone and the control terminal provided by the embodiment of the application.
  • the drone can collect images through its vision sensor and determine the movement characteristics of the target pixels in the images (such as the offset rate), and can obtain its own motion state information (such as the drone's height, translational motion speed, and/or angular velocity in the yaw direction) through the inertial measurement unit (IMU); the drone can then make a frame rate control decision to determine the target frame rate, that is, the drone can determine the target frame rate for image transmission from the drone to the control terminal based on the movement characteristics of the target pixels in the image and/or the motion state information of the drone itself.
  • the image collected by the camera can be video-encoded based on the target frame rate to generate video stream data, and the video stream data is sent to the control terminal through the wireless communication module of the drone.
  • the control terminal can receive the video stream data sent by the drone through its own wireless communication module, decode the video stream data to obtain an image, and display the image on the display.
  • the device structures such as the drone and the control terminal in FIGS. 1 and 2 do not constitute a limitation on the application scenario of the image transmission method.
  • FIG. 3 is a schematic flowchart of an image transmission method according to an embodiment of the present application.
  • the image transmission method can be applied to a movable platform to accurately determine the target frame rate.
  • the following will use the movable platform as a drone for detailed description.
  • the image transmission method may include step S101 to step S102, which may specifically be as follows:
  • S101: Acquire the movement characteristics of the target pixel in the image collected by the image acquisition device, and/or acquire the movement state information of the movable platform.
  • one or more image acquisition devices may be preset on the drone, and the image acquisition devices may be cameras or vision sensors. During the flight of the drone, one or more frames of images can be collected through the image acquisition device.
  • there may be one or more target pixels.
  • the target pixel can be a feature point, a pixel in the background area, or a pixel of the area where the moving target is located, and so on.
  • the movement characteristic can be used to characterize the movement state of the target pixel.
  • the movement characteristic can be flexibly set according to actual needs. The specific content is not limited here.
  • the movement characteristic can be the movement rate, acceleration, etc. of the target pixel.
  • acquiring the movement characteristics of the target pixel in the image acquired by the image acquisition device may include: acquiring a first image and a second image from the multi-frame images acquired by the image acquisition device; acquiring the pixel coordinates of the target pixel of the first image and the pixel coordinates of the target pixel of the second image, where the target pixel of the first image corresponds to the target pixel of the second image; and determining the movement characteristics of the target pixel according to the pixel coordinates of the target pixel of the first image and the pixel coordinates of the target pixel of the second image.
  • the movement characteristic may be determined based on the pixel coordinates of the target pixel in the multi-frame image.
  • the UAV can collect one frame of image at every preset time interval through the image acquisition device, so that after a preset time period multiple frames of images have been collected.
  • the preset time interval and the preset time period can be flexibly set according to actual requirements, and the specific values are not limited here.
  • the target pixel is extracted from multiple frames of images, and the corresponding pixel coordinates of the target pixel on each frame of image are obtained, and the movement characteristics of the target pixel are determined according to the pixel coordinates of the target pixel in each frame of image.
  • the first image and the second image can be filtered from the multi-frame images collected by the image collection device.
  • the first image and the second image can be images that are adjacent in acquisition time, or images that are not adjacent in acquisition time.
  • for example, the first image and the second image that are adjacent in acquisition time can be filtered from the multi-frame images collected by the image acquisition device, or the first image and the second image with good image quality can be filtered from the multi-frame images, or the first image and the second image with high image definition can be filtered from the multi-frame images collected by the image acquisition device.
  • the image acquisition device can acquire multiple frames of images based on the initial frame rate.
  • the default frame rate can be set as the initial frame rate of the image capture device.
  • the pixel coordinates of the target pixel of the first image and the pixel coordinates of the target pixel of the second image can be obtained, where the target pixel of the first image corresponds to the target pixel of the second image; for example, when the target pixel is a pixel of a moving vehicle, the pixel coordinates of the pixel in the area where the vehicle is located in the first image can be obtained, and the pixel coordinates of the pixel in the area where the vehicle is located in the second image can be obtained.
  • the movement characteristics of the target pixel can be determined according to the pixel coordinates of the target pixel of the first image and the pixel coordinate of the target pixel of the second image.
  • the target pixel may include a first target pixel corresponding to the background and/or a second target pixel corresponding to the moving target, where the first target pixel of the first image corresponds to the first target pixel of the second image, and the second target pixel of the first image corresponds to the second target pixel of the second image.
  • the target pixel can be divided into the first target pixel corresponding to the background and/or the second target pixel corresponding to the moving target, where the moving target may be a moving object in the image collected by the image acquisition device.
  • there may be one or more moving targets.
  • for example, the moving target may be a walking person, a moving vehicle, or a running puppy, etc.
  • the background may be the area of the image captured by the image capture device other than the moving target.
  • the background and the moving target in the first image can be identified, the pixel coordinates of the first target pixel corresponding to the background in the first image can be obtained, and the pixel coordinates of the second target pixel corresponding to the moving target in the first image can be obtained; likewise, the background and the moving target in the second image can be identified, the pixel coordinates of the first target pixel corresponding to the background in the second image can be obtained, and the pixel coordinates of the second target pixel corresponding to the moving target in the second image can be obtained.
  • alternatively, only the background or only the moving target in the first image can be identified, and the pixel coordinates of the first target pixel corresponding to the background or the pixel coordinates of the second target pixel corresponding to the moving target in the first image can be obtained; similarly, only the background or only the moving target in the second image can be identified, and the corresponding pixel coordinates in the second image can be obtained.
  • the image transmission method may further include: recognizing the background and/or moving target in the first image and the second image based on a pre-trained calculation model.
  • the background and/or the moving target in the multi-frame image can be recognized through a pre-trained calculation model.
  • the pre-trained calculation model may be a deep learning model.
  • the calculation model can be flexibly set according to actual needs.
  • the calculation model can be a target detection algorithm SSD or YOLO, and the calculation model can also be a convolutional neural network R-CNN or Faster R-CNN.
  • in this way, the background and/or the moving target in the first image can be accurately identified through the pre-trained calculation model, and the background and/or the moving target in the second image can be accurately identified, so as to obtain the pixel coordinates of the pixels in the background area and/or the area where the moving target is located.
  • the calculation model for background recognition and the calculation model for moving target recognition can be the same or different.
  • for example, the background in the first image and the second image can be recognized through a first calculation model to obtain the location of the background area in each frame of image, and the pixel coordinates of the background area in each frame of image are determined according to the location of the background area; the moving target in the first image and the second image can be recognized through a second calculation model to obtain the location of the moving target in each frame of image, and the pixel coordinates of the moving target in each frame of image are determined according to the location of the moving target, where the first calculation model and the second calculation model may be the same or different.
  • the background area in the image or the area where the moving target is located may include multiple pixels.
  • feature points can be extracted from the background area in the image and from the area where the moving target is located, the extracted feature points are used as target pixels, and a feature-point matching method is used to determine the correspondence between the target pixels in the first image and the target pixels in the second image, as illustrated by the sketch below.
  • the moving target may include the one or first several moving targets with the largest corresponding image area among multiple moving targets to be selected; or, the moving target may include the one or first several moving targets with the largest motion amplitude among the multiple moving targets to be selected.
  • in one embodiment, moving targets can be filtered based on the area of the image region: when multiple moving targets to be selected are identified from the image, the area of the image region occupied by each moving target can be obtained, and the one or more moving targets with the largest image area can be filtered out of the multiple candidate moving targets; for example, the one moving target with the largest image area, or the top 3 moving targets by image area, can be selected from the multiple moving targets to be selected.
  • in another embodiment, the moving targets can be filtered based on the motion amplitude: the motion amplitude of each candidate moving target can be obtained, and the one or more moving targets with the largest motion amplitude can be filtered out of the multiple candidate moving targets; for example, the one moving target with the largest motion amplitude, or the top 3 moving targets by motion amplitude, can be selected from the multiple candidate moving targets.
  • determining the movement characteristics of the target pixel according to the pixel coordinates of the target pixel of the first image and the pixel coordinates of the target pixel of the second image may include: determining the relative position difference according to the pixel coordinates of the target pixel of the first image and the pixel coordinates of the target pixel of the second image; determining the offset rate of the target pixel according to the relative position difference and the acquisition time interval of the first image and the second image; and determining the movement characteristics of the target pixel according to the offset rate.
  • the movement characteristic may be determined based on the relative position difference of the target pixel.
  • specifically, the relative position difference between the pixel coordinates of the target pixel in the multi-frame images collected by the image acquisition device and the acquisition time interval of the multi-frame images can be obtained, and the movement characteristics of the target pixel are determined according to the relative position difference and the acquisition time interval.
  • the two frames of images include a first image and a second image
  • the relative position difference can be determined according to the pixel coordinates of the target pixel of the first image and the pixel coordinates of the target pixel of the second image.
  • the relative position difference ΔA can be determined by the following formula (1):
  • ΔA = √((x_k − x_i)² + (y_k − y_i)²)  (1)
  • where (x_i, y_i) represents the pixel coordinates of the target pixel of the first image, and (x_k, y_k) represents the pixel coordinates of the target pixel of the second image.
  • the acquisition time interval T between the first image and the second image can be acquired, and the offset rate of the target pixel can then be determined according to the relative position difference ΔA and the acquisition time interval T between the first image and the second image, as shown in the following formula (2):
  • p = ΔA / T  (2)
  • where p represents the offset rate of the target pixel, ΔA represents the relative position difference, and T represents the acquisition time interval.
  • the movement characteristics of the target pixel can be determined according to the offset rate.
  • determining the movement characteristics of the target pixel according to the offset rate may include: weighting and summing the offset rate of the first target pixel and the offset rate of the second target pixel, and using the weighted-summed offset rate to characterize the movement characteristics of the target pixel.
  • in some embodiments, the target pixel includes the first target pixel corresponding to the background and the second target pixel corresponding to the moving target, and the movement characteristics of the target pixel may be characterized based on the offset rate of the first target pixel and the offset rate of the second target pixel, which can be expressed as the following formula (3):
  • p = (p_1 + p_2) / 2  (3)
  • where p_1 indicates the offset rate of the first target pixel, and p_2 indicates the offset rate of the second target pixel.
  • in other embodiments, the target pixel includes the first target pixel corresponding to the background and the second target pixel corresponding to the moving target, the offset rate of the first target pixel is determined based on the pixel coordinates of the first target pixel in the first image and the second image, and the offset rate of the second target pixel is determined based on the pixel coordinates of the second target pixel in the first image and the second image.
  • then, the weight value of the first target pixel (such as the weight value of the background) and the weight value of the second target pixel (such as the weight value of the moving target) can be obtained; the weight value of the first target pixel, the offset rate of the first target pixel, the weight value of the second target pixel, and the offset rate of the second target pixel are weighted and summed, and the weighted-sum offset rate is used to characterize the movement characteristics of the target pixel, as shown in the following formula (4):
  • p = a_11 · p_1 + a_12 · p_2  (4)
  • where p_1 indicates the offset rate of the first target pixel, p_2 indicates the offset rate of the second target pixel, a_11 indicates the weight value of the first target pixel, a_12 indicates the weight value of the second target pixel, and a_11 + a_12 = 1.
  • taking the case where the target pixel includes the target pixel corresponding to the background, the target pixel corresponding to the first moving target, the target pixel corresponding to the second moving target, and the target pixel corresponding to the third moving target as an example, the processing of the first image and the second image will be described in detail.
  • the pixel coordinates B of the target pixel corresponding to the background in the first image can be obtained, and the pixel coordinates B′ of the target pixel corresponding to the background in the second image can be obtained, from which the relative position difference ΔB between B and B′ is determined.
  • the acquisition time interval T of the first image and the second image is acquired, and the offset rate p_b corresponding to the background is determined according to the acquisition time interval T and the relative position difference ΔB, that is, p_b = ΔB / T.
  • the moving targets {S_i} other than the background can be identified from the first image and the second image, and the three moving targets occupying the largest image area (that is, occupying the largest number of image pixels) can be taken.
  • the pixel coordinates of these moving targets in the first image are {S_1, S_2, S_3}, where S_1 represents the pixel coordinates of the target pixel corresponding to the first moving target in the first image, S_2 represents the pixel coordinates of the target pixel corresponding to the second moving target in the first image, and S_3 represents the pixel coordinates of the target pixel corresponding to the third moving target in the first image.
  • the pixel coordinates of these moving targets in the second image are {S′_1, S′_2, S′_3}, where S′_1 represents the pixel coordinates of the target pixel corresponding to the first moving target in the second image, S′_2 represents the pixel coordinates of the target pixel corresponding to the second moving target in the second image, and S′_3 represents the pixel coordinates of the target pixel corresponding to the third moving target in the second image.
  • then, the relative position differences of the three moving targets between the first image and the second image can be acquired, that is, the relative position difference ΔS_1 between the pixel coordinates of S_1 in the first image and S′_1 in the second image, the relative position difference ΔS_2 between the pixel coordinates of S_2 in the first image and S′_2 in the second image, and the relative position difference ΔS_3 between the pixel coordinates of S_3 in the first image and S′_3 in the second image, where ΔS_j (j = 1, 2, 3) represents the relative position difference of the j-th moving target.
  • the offset rate of each moving target can be determined according to the acquisition time interval T of the first image and the second image and the relative position difference ΔS_j of each of the three moving targets, that is, p_tj = ΔS_j / T.
  • where p_t1 indicates the offset rate corresponding to the first moving target, p_t2 indicates the offset rate corresponding to the second moving target, p_t3 indicates the offset rate corresponding to the third moving target, ΔS_1 represents the relative position difference of the first moving target between the first image and the second image, ΔS_2 represents the relative position difference of the second moving target between the first image and the second image, and ΔS_3 represents the relative position difference of the third moving target between the first image and the second image.
  • then, according to the offset rate p_b corresponding to the background, the offset rate p_t1 corresponding to the first moving target, the offset rate p_t2 corresponding to the second moving target, and the offset rate p_t3 corresponding to the third moving target, the comprehensive target offset rate p can be determined by weighted summation, that is, p = a_0 · p_b + a_1 · p_t1 + a_2 · p_t2 + a_3 · p_t3.
  • where p_b indicates the offset rate corresponding to the background, p_t1 indicates the offset rate corresponding to the first moving target, p_t2 indicates the offset rate corresponding to the second moving target, p_t3 indicates the offset rate corresponding to the third moving target, a_0 represents the weight value of the offset rate corresponding to the background, a_1 represents the weight value of the offset rate corresponding to the first moving target, a_2 represents the weight value of the offset rate corresponding to the second moving target, a_3 represents the weight value of the offset rate corresponding to the third moving target, and a_0 + a_1 + a_2 + a_3 = 1.
  • the target offset rate can be used to characterize the movement characteristics of the target pixel.
  • acquiring the movement state information of the movable platform may include: acquiring the height, translational movement speed, and/or angular velocity in the yaw direction of the movable platform.
  • the movement state information of the movable platform may include at least one of the height of the movable platform, the translational motion speed, and the angular velocity in the yaw direction.
  • for a drone, the movement state information may include at least one of the flying height of the drone, the translational motion speed, and the angular velocity in the yaw direction; the height may be the height of the drone from the ground, and the translational motion speed may be the flying speed of the drone.
  • the accuracy of frame rate adjustment can be further improved.
  • for example, when the drone shoots video at the same speed at a height close to the ground and at a height far from the ground, the user's visual perception is completely different: the perceived speed when the drone is close to the ground is greater than the perceived speed when it is far from the ground. Therefore, further referencing the altitude in addition to the translational motion speed and the angular velocity in the yaw direction can make the frame rate adjustment more consistent with the user's intuitive experience.
  • motion state information may also include other information such as flight direction, flight attitude, or position information, and the specific content is not limited here.
  • obtaining the height, translational motion speed, and/or angular velocity in the yaw direction of the movable platform may include: obtaining the height, translational motion speed, and/or angular velocity in the yaw direction of the movable platform through an inertial measurement unit installed on the movable platform.
  • specifically, information such as the height, translational motion speed, and/or angular velocity in the yaw direction of the movable platform can be obtained through the inertial measurement unit (IMU) installed on the movable platform.
  • S102 Determine a target frame rate for image transmission by the movable platform to the control terminal according to the movement characteristics of the target pixel in the image and/or the movement state information of the movable platform.
  • specifically, after the movement characteristics of the target pixel are obtained, the target frame rate for image transmission from the drone to the control terminal can be determined according to the movement characteristics of the target pixel; or, after the movement state information of the drone is obtained, the target frame rate for image transmission from the drone to the control terminal can be determined according to the movement state information of the drone; or, after the movement characteristics of the target pixel and the movement state information of the drone are obtained, the target frame rate for image transmission from the drone to the control terminal can be determined according to the movement characteristics of the target pixel and the movement state information of the drone.
  • in this way, the drone can intelligently perceive the current flight environment and, combining the movement characteristics obtained from its visual perception with the detected motion state information, adaptively and dynamically adjust the target frame rate of image transmission, so that the subsequent display of images on the control terminal can achieve a better effect.
  • the movement characteristic of the target pixel is characterized by the offset rate of the target pixel.
  • determining the target frame rate for image transmission from the movable platform to the control terminal may include: when the offset rate is less than a first preset rate threshold, setting a first frame rate as the target frame rate; or, when the offset rate is greater than a second preset rate threshold, setting a second frame rate as the target frame rate; or, when the offset rate is greater than or equal to the first preset rate threshold and the offset rate is less than or equal to the second preset rate threshold, setting the current frame rate as the target frame rate; wherein the first preset rate threshold is less than the second preset rate threshold, and the first frame rate is less than the second frame rate.
  • in some embodiments, the target frame rate can be determined by the offset rate of the target pixel alone. Specifically, after the offset rate of the target pixel is obtained, the frame rate control decision corresponding to the offset rate of the target pixel can be obtained, and the target frame rate of image transmission corresponding to the offset rate of the target pixel is determined according to the frame rate control decision.
  • the frame rate control decision can be a mapping relationship between a number of different offset rates and the corresponding frame rates, and the target frame rate of image transmission corresponding to the offset rate of the target pixel can be determined by querying the mapping relationship.
  • alternatively, the frame rate control decision may be a calculated conversion relationship between the offset rate and the frame rate, and the corresponding target frame rate of image transmission may be calculated from the offset rate of the target pixel through the conversion relationship.
  • the frame rate control decision can also be flexibly set according to actual needs, and the specific content is not limited here.
  • for example, after the offset rate of the target pixel is obtained, it can be determined whether the offset rate is less than the first preset rate threshold. If the offset rate is less than the first preset rate threshold, it indicates that the drone is flying slowly and the content of the video picture changes slowly, so a lower frame rate can be used at this time, that is, the first frame rate is set as the target frame rate.
  • if the offset rate is greater than the second preset rate threshold, it indicates that the content of the video picture changes quickly, so a higher frame rate can be used at this time, that is, the second frame rate is set as the target frame rate.
  • if the offset rate is greater than or equal to the first preset rate threshold and less than or equal to the second preset rate threshold, it indicates that the flight state of the drone has not changed much and the frame rate does not need to be adjusted; the current frame rate can be kept unchanged and set as the target frame rate.
  • the first preset rate threshold is less than the second preset rate threshold
  • the first frame rate is less than the second frame rate
  • the first preset rate threshold, the second preset rate threshold, the first frame rate, and the second frame rate can be flexibly set according to actual requirements, and the specific values are not limited here.
  • the frame rate corresponding to the high-definition mode may be 30fps
  • the frame rate corresponding to the smooth mode may be 60fps, and so on.
  • the frame rate adjustment is not limited to the first frame rate and the second frame rate; it can also include multiple different frame rates such as a third frame rate, a fourth frame rate, and a fifth frame rate, and a mapping relationship between the multiple different frame rates and the respective offset rates can be established, so as to determine the frame rate corresponding to the currently detected offset rate based on that mapping relationship and obtain the target frame rate.
  • determining the target frame rate for image transmission from the movable platform to the control terminal according to the movement state information of the movable platform may include: if the height is less than a first height threshold and the translational motion speed is greater than a first speed threshold, setting the second frame rate as the target frame rate; if the height is greater than or equal to the first height threshold, or the translational motion speed is less than or equal to the first speed threshold, determining whether the angular velocity is greater than an angular velocity threshold; if the angular velocity is greater than the angular velocity threshold, setting the second frame rate as the target frame rate; if the angular velocity is less than or equal to the angular velocity threshold, determining whether the height is greater than a second height threshold and whether the translational motion speed is less than a second speed threshold; if the height is greater than the second height threshold and the translational motion speed is less than the second speed threshold, setting the first frame rate as the target frame rate; if the height is less than or equal to the second height threshold, or the translational motion speed is greater than or equal to the second speed threshold, setting the current frame rate as the target frame rate.
  • the target frame rate can be determined only by using the motion state information of the drone.
  • the frame rate control decision corresponding to the movement state information can be obtained, and the target frame rate of the image transmission corresponding to the movement state information can be determined according to the frame rate control decision.
  • the frame rate control decision may be a mapping relationship between a plurality of different motion state information and each frame rate, and the target frame rate of image transmission corresponding to the currently detected motion state information can be determined by querying the mapping relationship.
  • the frame rate control decision may be a calculated conversion relationship between the motion state information and the frame rate, and the corresponding target frame rate of image transmission may be calculated based on the motion state information through the calculation conversion relationship.
  • the frame rate control decision can also be flexibly set according to actual needs, and the specific content is not limited here.
  • specifically, after the motion state information is obtained, it can be determined whether the altitude is less than the first height threshold and whether the translational motion speed is greater than the first speed threshold. If the altitude is less than the first height threshold and the translational motion speed is greater than the first speed threshold, it means that the content of the video picture changes quickly; in order to ensure the smoothness of subsequent video playback and improve the effect of aerial image transmission, a higher frame rate can be used at this time, that is, the second frame rate is set as the target frame rate.
  • if the height is greater than or equal to the first height threshold, or the translational motion speed is less than or equal to the first speed threshold, it can be further determined whether the angular velocity is greater than the angular velocity threshold; if the angular velocity is greater than the angular velocity threshold, it means that the content of the video picture also changes quickly, and a higher frame rate can be used, that is, the second frame rate is set as the target frame rate. If the angular velocity is less than or equal to the angular velocity threshold, it can be further determined whether the height is greater than the second height threshold and whether the translational motion speed is less than the second speed threshold; if the height is greater than the second height threshold and the translational motion speed is less than the second speed threshold, it means that the content of the video picture changes slowly, and in order to ensure the definition of each captured frame and improve the image quality of aerial image transmission, a lower frame rate can be used, that is, the first frame rate is set as the target frame rate. If the height is less than or equal to the second height threshold, or the translational motion speed is greater than or equal to the second speed threshold, it indicates that the flight state of the drone has not changed much and the frame rate does not need to be adjusted; the current frame rate can be kept unchanged and set as the target frame rate.
  • in the above judgment, h represents the altitude, v represents the translational motion speed, w_y represents the angular velocity in the yaw direction, H_l represents the first height threshold, H_u represents the second height threshold, v_l represents the first speed threshold, v_u represents the second speed threshold, K2 represents the second frame rate, and K1 represents the first frame rate.
  • the first height threshold is less than the second height threshold
  • the first speed threshold is less than the second speed threshold
  • the first frame rate is less than the second frame rate.
  • the first height threshold, the second height threshold, the first speed threshold, the second speed threshold, the angular velocity threshold, the first frame rate, and the second frame rate can be flexibly set according to actual needs, and the specific values are not limited here.
  • the adjustment of the frame rate is not limited to the first frame rate and the second frame rate; it can also include multiple different frame rates such as a third frame rate, a fourth frame rate, and a fifth frame rate, and a mapping relationship between the multiple different frame rates and the respective pieces of motion state information can be established, so as to determine the frame rate corresponding to the currently detected motion state information based on that mapping relationship and obtain the target frame rate.
  • the absolute value of the translational speed can be compared with the speed threshold, and the absolute value of the angular speed can be compared with the angular speed threshold.
  • determining the target frame rate for image transmission from the movable platform to the control terminal according to the movement characteristics of the target pixel and the movement state information may include: if the offset rate is greater than the second preset rate threshold, setting the second frame rate as the target frame rate; if the offset rate is less than or equal to the second preset rate threshold, determining whether the height is less than the first height threshold and whether the translational motion speed is greater than the first speed threshold; if the height is less than the first height threshold and the translational motion speed is greater than the first speed threshold, setting the second frame rate as the target frame rate; if the height is greater than or equal to the first height threshold, or the translational motion speed is less than or equal to the first speed threshold, determining whether the angular velocity is greater than the angular velocity threshold; if the angular velocity is greater than the angular velocity threshold, setting the second frame rate as the target frame rate; if the angular velocity is less than or equal to the angular velocity threshold, determining whether the offset rate is less than the first preset rate threshold; if the offset rate is less than the first preset rate threshold, setting the first frame rate as the target frame rate; if the offset rate is greater than or equal to the first preset rate threshold, determining whether the height is greater than the second height threshold and whether the translational motion speed is less than the second speed threshold; if the height is greater than the second height threshold and the translational motion speed is less than the second speed threshold, setting the first frame rate as the target frame rate; if the height is less than or equal to the second height threshold, or the translational motion speed is greater than or equal to the second speed threshold, setting the current frame rate as the target frame rate.
  • in some embodiments, the target frame rate can be determined by combining the offset rate of the target pixel and the motion state information of the drone. Specifically, after the offset rate of the target pixel and the motion state information of the drone, such as the height, the translational motion speed, and the angular velocity in the yaw direction, are obtained, the frame rate control decision corresponding to the offset rate and the motion state information can be obtained, and the target frame rate of image transmission corresponding to the offset rate and the motion state information is determined according to the frame rate control decision; the frame rate control decision can be a mapping relationship between different combinations of offset rate and motion state information and the corresponding frame rates, and the target frame rate of image transmission corresponding to the currently detected offset rate and motion state information can be determined by querying the mapping relationship.
  • the frame rate control decision may be a calculated conversion relationship between the offset rate and motion state information and the frame rate, and the corresponding target frame rate of image transmission may be calculated based on the offset rate and motion state information through the calculation conversion relationship.
  • the frame rate control decision can also be flexibly set according to actual needs, and the specific content is not limited here.
  • for example, after the offset rate of the target pixel and the motion state information such as the drone's altitude, translational motion speed, and angular velocity in the yaw direction are obtained, it can be determined whether the offset rate is greater than the second preset rate threshold. If the offset rate is greater than the second preset rate threshold, it indicates that the drone is flying quickly and the content of the video picture changes quickly; in order to ensure the smoothness of subsequent video playback and improve the effect of aerial image transmission, a higher frame rate can be used at this time, that is, the second frame rate is set as the target frame rate.
  • if the offset rate is less than or equal to the second preset rate threshold, it can be further determined whether the height is less than the first height threshold and whether the translational motion speed is greater than the first speed threshold; if the height is less than the first height threshold and the translational motion speed is greater than the first speed threshold, it indicates that the content of the video picture changes quickly, and a higher frame rate can be used at this time, that is, the second frame rate is set as the target frame rate.
  • if the height is greater than or equal to the first height threshold, or the translational motion speed is less than or equal to the first speed threshold, it can be further determined whether the angular velocity is greater than the angular velocity threshold; if the angular velocity is greater than the angular velocity threshold, the content of the video picture also changes quickly, and a higher frame rate can be used at this time, that is, the second frame rate is set as the target frame rate.
  • if the angular velocity is less than or equal to the angular velocity threshold, it can be further determined whether the offset rate is less than the first preset rate threshold; if the offset rate is less than the first preset rate threshold, it means that the content of the video picture changes slowly, and in order to ensure the definition of each captured frame and improve the image quality of aerial image transmission, a lower frame rate can be used, that is, the first frame rate is set as the target frame rate.
  • if the offset rate is greater than or equal to the first preset rate threshold, it can be further determined whether the height is greater than the second height threshold and whether the translational motion speed is less than the second speed threshold; if the height is greater than the second height threshold and the translational motion speed is less than the second speed threshold, it means that the content of the video picture changes slowly, and in order to ensure the definition of each captured frame and improve the image quality of aerial image transmission, a lower frame rate can be used, that is, the first frame rate is set as the target frame rate.
  • if the height is less than or equal to the second height threshold, or the translational motion speed is greater than or equal to the second speed threshold, it indicates that the flight state of the drone has not changed much and the frame rate does not need to be adjusted; the current frame rate can be kept unchanged and set as the target frame rate.
  • the target frame rate can be determined based only on the offset rate and part of the motion state information.
  • the judgment order of the offset rate and the various items of motion state information can be flexibly adjusted according to actual needs, and the specific order is not limited here.
  • the image transmission method may further include: generating video code stream data based on the target frame rate; and sending the video code stream data to the control terminal.
  • the encoding module can be used to encode the image collected by the image acquisition device based on the target frame rate to generate video stream data.
  • the encoding method can be flexibly set according to actual needs; for example, the encoding method can be H.264 or H.265 encoding.
  • the video stream data can be sent to the control terminal connected to the drone through the wireless communication module.
  • after the control terminal receives the video code stream data, it can decode the video code stream data through a video decoding module to obtain an image.
  • the image can include multiple frames.
  • the multiple frame images can generate video data.
  • the video data can be YUV video.
  • YUV is divided into three components: "Y" represents the luminance (Luminance or Luma), that is, the gray value; "U" and "V" represent the chrominance (Chrominance or Chroma), which describes the color and saturation of the image and is used to specify the color of a pixel.
  • the control terminal can display the decoded image through the display. Since the control terminal decodes one frame every time it receives a frame of video code stream data, frame rate information is not required, so the drone does not need to notify the control terminal of the current frame rate used, which ensures the reliability of dynamic adaptive frame rate implementation .
  • in some embodiments, the image capture device is a first image capture device, and the movable platform further includes a second image capture device; generating the video code stream data may include: encoding, based on the target frame rate, the image collected by the second image capture device to generate the video code stream data.
  • the image capture device that captures the images used to determine the target frame rate may be the same as, or different from, the image capture device that captures the images to be transmitted to the control terminal.
  • for example, the first image capture device may capture the images on which the target frame rate is determined, while in the process of generating the video code stream data the images are captured by the second image capture device, the images captured by the second image capture device are encoded based on the target frame rate to generate the video code stream data, and the video code stream data is sent to the control terminal (a pipeline sketch of this two-camera arrangement appears after this discussion).
  • the first image acquisition device is an image acquisition device with a lower resolution
  • the second image acquisition device is an image acquisition device with a higher resolution.
  • the first image acquisition device may be, for example, a binocular camera installed on the drone
  • the second image acquisition device may be, for example, a main camera mounted on the pan/tilt of the drone.
  • the image capture device that captures the images used to determine the target frame rate may also be the same as the image capture device that captures the images to be transmitted to the control terminal; for example, both may be the main camera mounted on the drone's gimbal.
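The two-camera split can be pictured with the sketch below, which ties the earlier sketches together: low-resolution frames drive the frame-rate decision while the high-resolution gimbal camera feeds the encoder. All objects are illustrative placeholders; offset_rate and target_frame_rate refer to the other sketches in this section, and the helper names (track_target_pixels, height, speed, yaw_rate) are assumptions, not documented APIs.

    # Illustrative pipeline; every interface here is a placeholder, not a documented API.
    def run_pipeline(binocular_cam, main_cam, imu, encoder, wireless_link, current_fps):
        coords1, coords2, dt = track_target_pixels(binocular_cam)  # hypothetical helper: matched pixel
                                                                   # coordinates from two low-res frames
        p = offset_rate(coords1, coords2, dt)                      # offset-rate sketch appears further below
        h, v, w_y = imu.height(), imu.speed(), imu.yaw_rate()      # motion state from the IMU
        fps = target_frame_rate(p, h, v, w_y, current_fps)         # combined decision sketched earlier
        stream_at(fps, main_cam, encoder, wireless_link)           # high-resolution frames are encoded and sent
        return fps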
  • the embodiment of the application can obtain the movement characteristics of the target pixels in the images captured by the image capture device, and/or obtain the motion state information of the movable platform, and
  • determine, according to the movement characteristics of the target pixels in the images and/or the motion state information of the movable platform, the target frame rate for image transmission from the movable platform to the control terminal.
  • This solution can automatically determine the target frame rate without the user's manual selection, which improves the timeliness and accuracy of determining the target frame rate.
  • FIG. 5 is a schematic block diagram of a movable platform provided by an embodiment of the present application.
  • the mobile platform 11 may include a processor 111 and a memory 112, and the processor 111 and the memory 112 are connected by a bus, such as an I2C (Inter-integrated Circuit) bus.
  • the processor 111 may be a micro-controller unit (MCU), a central processing unit (Central Processing Unit, CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • the memory 112 may be a Flash chip, a read-only memory (ROM, Read-Only Memory) disk, an optical disk, a U disk, or a mobile hard disk, etc., and may be used to store computer programs.
  • the movable platform 11 may also include an image capture device 113, etc.
  • the image capture device 113 is used to capture images
  • the movable platform 11 may also include a pan/tilt (gimbal) for carrying the image capture device 113, which can drive the image capture device 113 to a suitable position so that the required images can be captured accurately.
  • the type of the movable platform 11 can be flexibly set according to actual needs.
  • the movable platform 11 can be a mobile terminal, a drone, a robot, or a pan-tilt camera.
  • the pan-tilt camera can include a camera and a pan-tilt, etc.
  • the camera is used to collect images
  • the pan-tilt is used to carry the camera to drive the camera to a suitable position and accurately collect the required images.
  • the pan-tilt camera can be mounted on the drone.
  • the pan-tilt camera can capture images and obtain the movement characteristics of the target pixels in the captured images, and/or send an information acquisition request to the drone and receive the motion state information that the drone returns in response to the request.
  • the target frame rate for image transmission from the drone to the control terminal can then be determined based on the movement characteristics of the target pixels in the images and/or the motion state information of the drone, and the target frame rate can be sent to the drone.
  • the drone can encode the image based on the target frame rate to generate video stream data, and send the video stream data to the control terminal.
  • the processor 111 is configured to call a computer program stored in the memory 112 and, when the computer program is executed, implement the image transmission method provided in the embodiments of the present application. For example, the following steps may be performed: obtaining the movement characteristics of the target pixels in the images captured by the image capture device, and/or obtaining the motion state information of the movable platform; and determining, according to the movement characteristics of the target pixels in the images and/or the motion state information of the movable platform, the target frame rate for image transmission from the movable platform to the control terminal.
  • after determining the target frame rate for image transmission from the movable platform to the control terminal, the processor 111 further executes: generating video code stream data based on the target frame rate; and sending the video code stream data to the control terminal.
  • in some embodiments, the image capture device is a first image capture device and the movable platform further includes a second image capture device.
  • when generating the video code stream data based on the target frame rate, the processor 111 further executes: encoding, based on the target frame rate, the images captured by the second image capture device to generate the video code stream data.
  • when acquiring the movement characteristics of the target pixels in the images captured by the image capture device, the processor 111 further executes: acquiring a first image and a second image from the multiple frames captured by the image capture device; acquiring the pixel coordinates of the target pixels of the first image and the pixel coordinates of the target pixels of the second image, the target pixels of the first image corresponding to the target pixels of the second image; and
  • determining the movement characteristics of the target pixels according to the pixel coordinates of the target pixels of the first image and the pixel coordinates of the target pixels of the second image.
  • when determining the movement characteristics of the target pixels according to the pixel coordinates of the target pixels of the first image and the pixel coordinates of the target pixels of the second image, the processor 111 further executes: determining the relative position difference between the pixel coordinates of the target pixels of the first image and the pixel coordinates of the target pixels of the second image; determining the offset rate of the target pixels according to the relative position difference and the capture time interval between the first image and the second image; and determining the movement characteristics of the target pixels according to the offset rate, as in the sketch below.
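A minimal Python sketch of the offset-rate computation just described. The exact distance measure is not spelled out in this passage, so the Euclidean distance between corresponding pixel coordinates is used here as an assumption; averaging over all target pixels is likewise an illustrative choice.

    import math

    def offset_rate(coords_img1, coords_img2, capture_interval_s):
        """Offset rate of corresponding target pixels between a first and a second image.

        coords_img1 and coords_img2 are index-aligned lists of (x, y) pixel coordinates,
        so coords_img1[i] and coords_img2[i] belong to the same target pixel.
        """
        diffs = [math.hypot(x1 - x2, y1 - y2)   # assumed Euclidean relative position difference
                 for (x1, y1), (x2, y2) in zip(coords_img1, coords_img2)]
        mean_diff = sum(diffs) / len(diffs)     # average relative position difference in pixels
        return mean_diff / capture_interval_s   # pixels per second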
  • the target pixels may include first target pixels corresponding to the background and/or second target pixels corresponding to a moving target; the first target pixels of the first image correspond to the first target pixels of the second image, and
  • the second target pixels of the first image correspond to the second target pixels of the second image.
  • when determining the movement characteristics of the target pixels according to the offset rate, the processor 111 further executes: taking a weighted average of the offset rate of the first target pixels and the offset rate of the second target pixels, and using the weighted-average offset rate to characterize the movement characteristics of the target pixels.
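A short sketch of this weighted combination, assuming one background offset rate and a list of moving-target offset rates. The particular weight values are illustrative; the only property carried over from the text is that the weights form an average (i.e. sum to 1).

    def combined_offset_rate(p_background, p_targets, w_background=0.5, target_weights=None):
        """Weighted average of the background offset rate and the moving-target offset rates."""
        if target_weights is None:
            # illustrative default: spread the remaining weight evenly over the moving targets
            target_weights = [(1.0 - w_background) / len(p_targets)] * len(p_targets)
        total = w_background + sum(target_weights)
        assert abs(total - 1.0) < 1e-9, "weights are expected to sum to 1"
        return w_background * p_background + sum(w * p for w, p in zip(target_weights, p_targets))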
  • the processor 111 further executes: based on a pre-trained calculation model, recognizing the background and/or moving target in the first image and the second image.
  • the moving target includes the one or the top several candidate moving targets with the largest corresponding image area among multiple candidate moving targets; or, the moving target includes the one or the top several candidate moving regions with the largest motion amplitude among multiple candidate moving targets.
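This candidate-selection rule can be sketched as a simple ranking; the dictionary layout of a candidate (keys "area" and "amplitude") is an assumption made only for this sketch.

    def select_moving_targets(candidates, top_n=3, by="area"):
        """Keep the one or top-N candidates, ranked by image area or by motion amplitude."""
        key = "area" if by == "area" else "amplitude"   # the two ranking criteria named in the text
        return sorted(candidates, key=lambda c: c[key], reverse=True)[:top_n]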
  • the movement characteristics of the target pixel are characterized by the offset rate of the target pixel.
  • the processor 111 further executes: when the offset rate is less than the first preset rate threshold, setting the first frame rate as the target frame rate; or, when the offset rate is greater than the second preset rate threshold, setting the second frame rate as the target frame rate; or, when the offset rate is greater than or equal to the first preset rate threshold and less than or equal to the second preset rate threshold, setting the current frame rate as the target frame rate; where the first preset rate threshold is less than the second preset rate threshold, and the first frame rate is less than the second frame rate.
  • when acquiring the motion state information of the movable platform, the processor 111 further executes: acquiring the height, the translational motion speed, and/or the angular velocity in the yaw direction of the movable platform.
  • when acquiring the height, the translational motion speed, and/or the angular velocity in the yaw direction of the movable platform, the processor 111 further executes: obtaining the height, the translational motion speed, and/or the angular velocity in the yaw direction of the movable platform through an inertial measurement unit mounted on the movable platform.
  • when determining the target frame rate for image transmission from the movable platform to the control terminal according to the motion state information of the movable platform, the processor 111 further executes: if the height is less than the first height threshold and the translational motion speed is greater than the first speed threshold, setting the second frame rate as the target frame rate; if the height is greater than or equal to the first height threshold, or the translational motion speed is less than or equal to the first speed threshold, determining whether the angular velocity is greater than the angular velocity threshold; if the angular velocity is greater than the angular velocity threshold, setting the second frame rate as the target frame rate; if the angular velocity is less than or equal to the angular velocity threshold, determining whether the height is greater than the second height threshold and whether the translational motion speed is less than the second speed threshold; if the height is greater than the second height threshold and the translational motion speed is less than the second speed threshold, setting the first frame rate as the target frame rate; if the height is less than or equal to the second height threshold, or the translational motion speed is greater than or equal to the second speed threshold, setting the current frame rate as the target frame rate; where the first height threshold is less than the second height threshold, the first speed threshold is less than the second speed threshold, and the first frame rate is less than the second frame rate.
  • when determining the target frame rate for image transmission from the movable platform to the control terminal according to the movement characteristics of the target pixels in the images and the motion state information of the movable platform, the processor 111 further executes: if the offset rate is greater than the second preset rate threshold, setting the second frame rate as the target frame rate; if the offset rate is less than or equal to the second preset rate threshold, determining whether the height is less than the first height threshold and whether the translational motion speed is greater than the first speed threshold; if the height is less than the first height threshold and the translational motion speed is greater than the first speed threshold, setting the second frame rate as the target frame rate; if the height is greater than or equal to the first height threshold, or the translational motion speed is less than or equal to the first speed threshold, determining whether the angular velocity is greater than the angular velocity threshold; if the angular velocity is greater than the angular velocity threshold, setting the second frame rate as the target frame rate; if the angular velocity is less than or equal to the angular velocity threshold, determining whether the offset rate is less than the first preset rate threshold; if the offset rate is less than the first preset rate threshold, setting the first frame rate as the target frame rate; if the offset rate is greater than or equal to the first preset rate threshold, determining whether the height is greater than the second height threshold and whether the translational motion speed is less than the second speed threshold; if the height is greater than the second height threshold and the translational motion speed is less than the second speed threshold, setting the first frame rate as the target frame rate; if the height is less than or equal to the second height threshold, or the translational motion speed is greater than or equal to the second speed threshold, setting the current frame rate as the target frame rate; where the first height threshold is less than the second height threshold, the first speed threshold is less than the second speed threshold, and the first frame rate is less than the second frame rate.
  • the embodiment of the present application also provides a computer program.
  • the computer program includes program instructions, and the processor executes the program instructions to implement the image transmission method provided in the embodiments of the present application.
  • the embodiments of the present application also provide a computer-readable storage medium; the computer-readable storage medium stores a computer program, the computer program includes program instructions, and a processor executes the program instructions to implement the image transmission method provided in the embodiments of the present application.
  • the computer-readable storage medium may be an internal storage unit of the movable platform described in any of the foregoing embodiments, such as the hard disk or memory of the movable platform.
  • the computer-readable storage medium can also be an external storage device of the movable platform, such as a plug-in hard disk equipped on the movable platform, a smart memory card (Smart Media Card, SMC), a Secure Digital (SD) card, or a flash memory card (Flash Card), etc.
  • since the computer program stored in the computer-readable storage medium can execute any of the image transmission methods provided in the embodiments of this application, it can achieve the beneficial effects that can be achieved by any of the image transmission methods provided in the embodiments of this application;
  • for details, refer to the previous embodiments, which will not be repeated here.

Abstract

本申请提供一种图像传输方法、可移动平台及计算机可读存储介质,包括:获取图像采集装置采集的图像中目标像素点的移动特性;和/或,获取可移动平台的运动状态信息(S101);根据图像中目标像素点的移动特性和/或可移动平台的运动状态信息,确定可移动平台向控制终端进行图像传输的目标帧率(S102)。提高了目标帧率确定的及时性和准确性。

Description

图像传输方法、可移动平台及计算机可读存储介质 技术领域
本申请涉及图像传输技术领域,尤其涉及一种图像传输方法、可移动平台及计算机可读存储介质。
背景技术
目前,在无人机***中,无人机将视频数据通过无线通信链路传输给控制终端时,存在平衡流畅度和清晰度的问题。图像传输的帧率越高,控制终端显示的图像传输画面越流畅,但是由于带宽的限制,能够分配到每一帧的传输比特数就越少,从而导致单帧图像的清晰度越低。
现有技术中,用户可以通过控制终端手动选择帧率,然而用户主观地选择帧率可能导致调整不及时或者准确性低的问题。
发明内容
本申请实施例提供一种图像传输方法、可移动平台及计算机可读存储介质,可以提高目标帧率确定的及时性和准确性。
第一方面,本申请实施例提供了一种图像传输方法,所述方法应用于可移动平台,所述可移动平台包括图像采集装置,所述可移动平台与控制终端通信连接,所述方法包括:
获取所述图像采集装置采集的图像中目标像素点的移动特性;和/或,
获取所述可移动平台的运动状态信息;
根据所述图像中目标像素点的移动特性和/或所述可移动平台的运动状态信息,确定所述可移动平台向所述控制终端进行图像传输的目标帧率。
第二方面,本申请实施例还提供了一种可移动平台,所述可移动平台与控制终端通信连接,所述可移动平台包括:
图像采集装置,用于采集图像;
存储器,用于存储计算机程序;
处理器,用于调用所述存储器中的计算机程序,以执行:
获取所述图像采集装置采集的图像中目标像素点的移动特性;和/或,
获取所述可移动平台的运动状态信息;
根据所述图像中目标像素点的移动特性和/或所述可移动平台的运动状态信息,确定所述可移动平台向所述控制终端进行图像传输的目标帧率。
第三方面,本申请实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质用于存储计算机程序,所述计算机程序被处理器加载,以执行本申请实施例提供的任一种图像传输方法。
本申请实施例可以获取图像采集装置采集的图像中目标像素点的移动特性;和/或,获取可移动平台的运动状态信息;根据图像中目标像素点的移动特性和/或可移动平台的运动状态信息,确定可移动平台向控制终端进行图像传输的目标帧率。该方案可以自动确定目标帧率,而不需要用户手动选择,提高了目标帧率确定的及时性和准确性。
附图说明
为了更清楚地说明本申请实施例技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的图像传输方法的应用场景的示意图;
图2是本申请实施例提供的无人机和控制终端交互的示意图;
图3是本申请实施例提供的图像传输方法的流程示意图;
图4是本申请实施例提供的从第一图像和第二图像中确定多个运动目标的示意图;
图5是本申请实施例提供的可移动平台的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
附图中所示的流程图仅是示例说明,不是必须包括所有的内容和操作/步骤,也不是必须按所描述的顺序执行。例如,有的操作/步骤还可以分解、组合或部分合并,因此实际执行的顺序有可能根据实际情况改变。
下面结合附图,对本申请的一些实施方式作详细说明。在不冲突的情况下, 下述的实施例及实施例中的特征可以相互组合。
本申请的实施例提供了一种图像传输方法、可移动平台及计算机可读存储介质,用于基于图像采集装置采集到的图像中目标像素点的移动特性和/或可移动平台的运动状态信息,确定可移动平台向控制终端进行图像传输的目标帧率,提高了目标帧率确定的及时性和准确性。
其中,可移动平台可以包括云台、平台本体、以及图像采集装置等,该平台本体可以用于搭载云台,该云台可以搭载图像采集装置,从而使得云台可以带动图像采集装置移动,该图像采集装置可以包括一个或多个。具体地,可移动平台和图像采集装置的类型可以根据实际需要进行灵活设置,具体内容在此处不做限定。例如,图像采集装置可以是相机或视觉传感器等,可移动平台可以为移动终端、无人机、机器人或云台相机等。该云台相机可以包括相机和云台等,该云台可以包括轴臂等,该轴臂可以带动相机移动,例如,通过轴臂控制相机移动到合适位置,以便通过相机采集所需的图像。其中,该相机可以是单目相机,该相机的类型可以是超广角相机、广角相机、长焦相机(即变焦相机)、红外相机、远红外相机、紫外相机、以及飞行时间测距(TOF,Time of Flight)深度相机(简称TOF深度相机)等。该无人机可以包括相机、测距装置以及障碍物感知装置等。该无人机还可以包括用于搭载相机的云台,该云台可以带动相机移动到合适位置,以便通过相机采集所需的图像。该无人机可以包括旋翼型无人机(例如四旋翼无人机、六旋翼无人机、或八旋翼无人机等)、固定翼无人机、或者是旋翼型与固定翼无人机的组合,在此不作限定。
可移动平台还可以设置有全球定位***(Global Positioning System,GPS)等定位装置,用于可移动平台的移动位置进行准确定位。相机和定位装置之间的位置关系可以在同一平面上,在该平面内相机和定位装置之间可以是在同一直线上、或者形成预设的夹角等;当然,相机和定位装置之间也可分别位于不同的平面上。
图1是实施本申请实施例提供的图像传输方法的一场景示意图,如图1所示,以可移动平台为无人机为例,控制终端100与一无人机200通信连接,控制终端100可以用于控制无人机200的飞行或执行相应的动作,并从无人机200中获取相应的运动信息,运动信息可以包括飞行方向、飞行姿态、飞行高度、飞行速度和位置信息等,并将获取的运动信息发送给控制终端100,由控制终端100进行分析及显示等。控制终端100还可以接收用户输入的控制指令,基于控制指令对 无人机200上的测距装置或相机等进行相应的控制。例如,控制终端100可以接收用户输入的拍摄指令或测距指令,并将拍摄指令或测距指令发送给无人机200,无人机200可以根据拍摄指令控制相机对采集到的画面进行拍摄,或者根据测距指令控制测距装置对目标物进行测距等。
具体地,控制终端100的类型可以根据实际需要进行灵活设置,具体内容在此处不做限定。例如,控制终端100可以是设置有显示器和控制按键等的遥控设备,用于与无人机200建立通信连接,并对无人机200进行控制,该显示器可以用于显示图像或视频等。该控制终端100还可以是第三方手机或平板电脑等,通过预设的协议与无人机200建立通信连接,并对无人机200进行控制。
在一些实施方式中,无人机200的障碍物感知装置可以获取无人机200周围的感测信号,通过对感测信号进行分析,可以得到障碍物信息,并在该无人机200的显示器内显示障碍物信息,使得用户可以获知无人机200感知到的障碍物,便于用户控制无人机200避开障碍物。其中,该显示器可以为液晶显示屏,也可以为触控屏等。
在一些实施方式中,障碍物感知装置可以包括至少一个传感器,用于获取来自无人机200的至少一个方向上的感测信号。例如,障碍物感知装置可以包括一个传感器,用于检测无人机200的前方的障碍物。例如,障碍物感知装置可以包括两个传感器,分别用于检测无人机200的前方和后方的障碍物。例如,障碍物感知装置可以包括四个传感器,分别用于检测无人机200的前方、后方、左方、以及右方的障碍物等。例如,障碍物感知装置可以包括五个传感器,分别用于检测无人机200的前方、后方、左方、右方、以及上方的障碍物等。例如,障碍物感知装置可以包括六个传感器,分别用于检测无人机200的前方、后方、左方、右方、上方、以及下方的障碍物。障碍物感知装置中的各个传感器可以是分离实现的,也可以是集成实现的。传感器的检测方向可以根据具体需要进行设置,以检测各种方向或方向组合的障碍物,而不仅限于本申请公开的上述形式。
无人机200可具有多个旋翼。旋翼可连接至无人机200的本体,本体可包含控制单元、惯性测量单元(inertial measuring unit,IMU)、处理器、电池、电源和/或其他传感器。旋翼可通过从本体中心部分分支出来的一个或多个臂或延伸而连接至本体。例如,一个或多个臂可从无人机200的中心本体放射状延伸出来,而且在臂末端或靠近末端处可具有旋翼。
图2是实施本申请实施例提供的无人机与控制终端交互的示意图,如图2所示,以可移动平台为无人机为例,以图像采集装置包括视觉传感器和相机为例,无人机可以通过视觉传感器采集图像,并确定图像中目标像素点的移动特性(例如偏移速率),以及通过惯性测量单元IMU获取无人机自身的运动状态信息(例如无人机的高度、平移运动速度、和/或偏航方向上的角速度等),然后无人机可以进行帧率控制决策来确定目标帧率,即无人机可以基于图像中目标像素点的移动特性和/或无人机自身的运动状态信息,确定无人机向控制终端进行图像传输的目标帧率。此时可以基于目标帧率对相机采集到的图像进行视频编码,生成视频码流数据,通过无人机的无线通讯模块将视频码流数据发送给控制终端。控制终端可以通过自身的无线通讯模块接收无人机发送的视频码流数据,对视频码流数据进行解码,得到图像,并通过显示器显示该图像。
需要说明的是,图1和图2中的无人机和控制终端等各设备结构并未构成对图像传输方法的应用场景的限定。
请参阅图3,图3是本申请一实施例提供的一种图像传输方法的流程示意图。该图像传输方法可以应用于可移动平台中,用于准确确定目标帧率,以下将以可移动平台为无人机进行详细说明。
如图3所示,该图像传输方法可以包括步骤S101至步骤S102等,具体可以如下:
S101、获取图像采集装置采集的图像中目标像素点的移动特性;和/或,获取可移动平台的运动状态信息。
其中,无人机上可以预设一个或多个图像采集装置,该图像采集装置可以是相机或视觉传感器等。无人机在飞行的过程中,可以通过图像采集装置采集一帧或多帧图像。
在采集得到图像后,可以获取图像中目标像素点的移动特性,该目标像素点可以包括一个或多个,例如,目标像素点可以是特征点,或者目标像素点可以是背景区域内的像素点,或者目标像素点可以是运动目标所在区域的像素点,等等。移动特性可以用于表征目标像素点的移动状态,该移动特性可以根据实际需要进行灵活设置,具体内容在此处不做限定,例如,该移动特性可以是目标像素点的移动速率、加速度等。
在一些实施方式中,获取图像采集装置采集的图像中目标像素点的移动特性可以包括:获取图像采集装置采集的多帧图像中的第一图像和第二图像;获 取第一图像的目标像素点的像素坐标和第二图像的目标像素点的像素坐标,第一图像的目标像素点与第二图像的目标像素点相对应;根据第一图像的目标像素点的像素坐标和第二图像的目标像素点的像素坐标确定目标像素点的移动特性。
为了提高目标像素点的移动特性确定的准确性,可以基于多帧图像中目标像素点的像素坐标确定移动特性。具体地,无人机可以通过图像采集装置每间隔预设时间采集一帧图像,经过预设时间段后,可以采集得到多帧图像,该每间隔预设时间和预设时间段可以根据时间需要进行灵活设置,具体取值在此处不做限定。此时,从多帧图像中提取目标像素点,并获取目标像素点在每帧图像上对应的像素坐标,根据每帧图像中目标像素点对应的像素坐标确定目标像素点的移动特性。例如,以两帧图像为例,可以从图像采集装置采集的多帧图像中筛选出第一图像和第二图像,该第一图像和第二图像可以是采集时间相邻的图像,也可以是采集时间非相邻的图像。例如,可以从图像采集装置采集的多帧图像中筛选出采集时间相邻的第一图像和第二图像,或者可以从图像采集装置采集的多帧图像中筛选出图像质量好的第一图像和第二图像,或者可以从图像采集装置采集的多帧图像中筛选出图像清晰度高的第一图像和第二图像。
需要说明的是,图像采集装置可以基于初始帧率采集得到多帧图像。其中,当图像采集装置开始采集图像时,例如,图像采集装置开启时,可以将默认帧率设置为图像采集装置的初始帧率。
然后,可以获取第一图像的目标像素点的像素坐标和第二图像的目标像素点的像素坐标,其中,第一图像的目标像素点与第二图像的目标像素点相对应,例如,当目标像素点为运动的车辆的像素点时,可以获取第一图像中车辆所在区域的像素点的像素坐标,以及获取第二图像中该车辆所在区域的像素点的像素坐标。此时可以根据第一图像的目标像素点的像素坐标和第二图像的目标像素点的像素坐标确定目标像素点的移动特性。
在一些实施方式中,目标像素点可以包括背景对应的第一目标像素点和/或运动目标对应的第二目标像素点,第一图像的第一目标像素点与第二图像的第一目标像素点相对应,第一图像的第二目标像素点与第二图像的第二目标像素点相对应。
为了提高后续确定移动特性的精准性,可以将目标像素点划分为背景对应的第一目标像素点和/或运动目标对应的第二目标像素点,其中,运动目标可以 是图像采集装置采集到的画面中存在移动的物体,运动目标可以包括一个或多个,例如,运动目标可以是行走的人、移动的车辆或奔跑的小狗等,该背景可以是图像采集装置采集到的画面中除了运动目标之外的区域。例如,可以对第一图像中背景和运动目标进行识别,并获取第一图像中背景对应的第一目标像素点的像素坐标,以及获取第一图像中运动目标对应的第二目标像素点的像素坐标,以及,可以对第二图像中背景和运动目标进行识别,并获取第二图像中背景对应的第一目标像素点的像素坐标,以及获取第二图像中运动目标对应的第二目标像素点的像素坐标。当然,可以也仅或者可以对第一图像中背景或运动目标进行识别,并获取第一图像中背景对应的第一目标像素点的像素坐标或运动目标对应的第二目标像素点的像素坐标,以及,可以对第二图像中背景或运动目标进行识别,并获取第二图像中背景对应的第一目标像素点的像素坐标或运动目标对应的第二目标像素点的像素坐标。
在一些实施方式中,图像传输方法还可以包括:基于预先训练的计算模型,识别第一图像和第二图像中的背景和/或运动目标。
为了提高对背景和/或运动目标识别的准确性,可以通过预先训练的计算模型对多帧图像中背景和/或运动目标进行识别,该预先训练的计算模型可以为深度学习模块,该预先训练的计算模型可以根据实际需要进行灵活设置,例如,该计算模型可以是目标检测算法SSD或YOLO,该计算模型还可以是卷积神经网络R-CNN或Faster R-CNN等,例如,可以获取包含不同类型的运动目标和背景的多张样本图像,基于多张样本图像对计算模型进行训练,得到训练后的计算模型,即预先训练的计算模型。此时对于采集到的第一图像和第二图像等多帧图像,可以通过预先训练的计算模型精准识别出第一图像中的背景和/或运动目标,以及精准识别出第二图像中的背景和/或运动目标,以便获取背景和/或运动目标所在区域的像素点的像素坐标。
需要说明的是,对背景识别的计算模型与对运动目标识别的计算模型可以相同,也可以不同,例如,通过预先训练的第一计算模型对第一图像和第二图像等多帧图像中的背景进行识别,得到每帧图像上背景区域所在位置,并根据背景区域所在位置确定每帧图像上背景区域的像素坐标。以及,通过预先训练的第二计算模型对第一图像和第二图像等多帧图像中的运动目标进行识别,得到每帧图像上运动目标所在位置,并根据运动目标所在位置确定每帧图像上运动目标的像素坐标,其中,第一计算模型和第二计算模型可以相同也可以不同。
在一些实施方式中,图像中的背景区域或运动目标所在的区域可能包括多个像素。可以针对图像中的背景区域和运动目标所在的区域提取特征点,将提取得到的特征点作为目标像素点,并利用特征点匹配的方法确定第一图像中的目标像素点与第二图像中的目标像素点之间的对应关系。
在一些实施方式中,运动目标可以包括多个待选运动目标中对应的图像区域面积最大的一个或前多个运动目标;或者,运动目标可以包括多个待选运动目标中运动幅度最大的一个或前多个运动区域。
为了提高后续确定移动特性的精准性,可以获取可靠的一个或多个运动目标的像素坐标来确定移动特性,即运动目标可以包括一个或多个。在一实施例中,可以基于图像区域面积筛选运动目标,具体地,当从图像中可以识别出多个待选运动目标时,可以获取各个待选运动目标所占的图像区域面积,并从多个待选运动目标中筛选出图像区域面积最大的一个或前多个运动目标,例如,可以从多个待选运动目标中筛选出图像区域面积最大的一个运动目标或者图像区域面积最大的前3个运动目标。在另一实施例中,可以基于运动幅度筛选运动目标,具体地,当从图像中可以识别出多个待选运动目标时,可以获取各个待选运动目标的运动幅度,并从多个待选运动目标中筛选出运动幅度最大的一个或前多个运动目标,例如,可以从多个待选运动目标中筛选出运动幅度最大的一个运动目标或者运动幅度最大的前3个运动目标。
在一些实施方式中,根据第一图像的目标像素点的像素坐标和第二图像的目标像素点的像素坐标确定目标像素点的移动特性可以包括:根据第一图像的目标像素点的像素坐标和第二图像的目标像素点的像素坐标确定相对位置差;根据相对位置差和第一图像与第二图像的采集时间间隔,确定目标像素点的偏移速率;根据偏移速率确定目标像素点的移动特性。
为了提高目标像素点的移动特性确定的灵活性和便捷性,可以基于目标像素点的相对位置差来确定移动特性。具体地,可以获取图像采集装置采集的多帧图像中目标像素点的像素坐标之间的相对位置差,以及,多帧图像的采集时间间隔,根据该相对位置差和采集时间间隔确定的目标像素点的移动特性。例如,以两帧图像为例,该两帧图像包括第一图像和第二图像,可以根据第一图像的目标像素点的像素坐标和第二图像的目标像素点的像素坐标确定相对位置差,例如,可以通过以下公式(1)确定相对位置差ΔA:
Figure PCTCN2020093035-appb-000001
其中,(x i,y i)表示第一图像的目标像素点的像素坐标,(x k,y k)表示第二图像的目标像素点的像素坐标。
以及,可以获取第一图像与第二图像的采集时间间隔T,然后,可以根据相对位置差ΔA和第一图像与第二图像的采集时间间隔T,确定目标像素点的偏移速率,具体可以如下公式(2)所示:
Figure PCTCN2020093035-appb-000002
其中,p表示目标像素点的偏移速率,ΔA表示相对位置差,T表示采集时间间隔。
此时,可以根据偏移速率确定目标像素点的移动特性。
在一些实施方式中,根据偏移速率确定目标像素点的移动特性可以包括:将第一目标像素点的偏移速率和第二目标像素点的偏移速率进行加权平均,以用加权平均后的偏移速率表征目标像素点的移动特性。
当目标像素点包括第一目标像素点和第二目标像素点时,例如,目标像素点包括背景对应的第一目标像素点以及运动目标对应的第二目标像素点,可以基于第一目标像素点在第一图像和第二图像中的像素坐标,确定第一目标像素点的偏移速率,以及基于第二目标像素点在第一图像和第二图像中的像素坐标,确定第二目标像素点的偏移速率。然后,将第一目标像素点的偏移速率和第二目标像素点的偏移速率进行加权平均,以用加权平均后的偏移速率表征目标像素点的移动特性,具体可以如下公式(3)所示:
Figure PCTCN2020093035-appb-000003
其中,
Figure PCTCN2020093035-appb-000004
表示偏移速率,即目标像素点的移动特性,p 1表示第一目标像素点的偏移速率,p 2表示第二目标像素点的偏移速率。
需要说明的是,当目标像素点包括第一目标像素点和第二目标像素点时,例如,目标像素点包括背景对应的第一目标像素点以及运动目标对应的第二目标像素点,可以基于第一目标像素点在第一图像和第二图像中的像素坐标,确定第一目标像素点的偏移速率,以及基于第二目标像素点在第一图像和第二图像中的像素坐标,确定第二目标像素点的偏移速率。然后,可以获取第一目标像素点的权重值(例如背景的权重值)和第二目标像素点的权重值(例如运动 目标的权重值),根据第一目标像素点的权重值、第一目标像素点的偏移速率、第二目标像素点的权重值、第二目标像素点的偏移速率进行加权求和运算,以用加权求和后的偏移速率表征目标像素点的移动特性,具体可以如下公式(4)所示:
Figure PCTCN2020093035-appb-000005
其中,
Figure PCTCN2020093035-appb-000006
表示偏移速率,即目标像素点的移动特性,p 1表示第一目标像素点的偏移速率,p 2表示第二目标像素点的偏移速率,a 11表示第一目标像素点的权重值,a 12表示第二目标像素点的权重值,a 11+a 12=1。
以下将以目标像素点包括背景对应的目标像素点、第一运动目标对应的目标像素点、第二运动目标对应的目标像素点、以及第三运动目标对应的目标像素点为例,以及以第一图像和第二图像为例进行详细说明。
具体地,可以获取第一图像中背景对应的目标像素点的像素坐标B,以及获取第二图像中背景对应的目标像素点的像素坐标B',其中,
B={(x i,y i)}
B'={(x k,y k)}
(x i,y i)表示背景在第一图像中的像素坐标,(x k,y k)表示背景在第二图像中的像素坐标。
基于(x i,y i)和(x k,y k),获取第一图像和第二图像上背景的相对位置差,即获取第一图像上B和第二图像上B'的像素坐标之间的相对位置差ΔB:
Figure PCTCN2020093035-appb-000007
获取第一图像和第二图像的采集时间间隔T,根据采集时间间隔T和相对位置差ΔB确定背景对应的偏移速率p b
Figure PCTCN2020093035-appb-000008
如图4所示,可以从第一图像和第二图像中识别出除去背景之外的运动目标{S i},可以取其中3个占据图像区域面积最大(即占据图像像素最多)的三个运动目标在第一图像上的像素坐标{S 1,S 2,S 3},其中,S 1表示在第一图像上第一运动目标对应的目标像素点的像素坐标,S 2表示在第一图像上第二运动目标对应的目标像素点的像素坐标,S 3表示在第一图像上第三运动目标对应的目标像素点的像素坐标。以及获取三个运动目标在第二图像上的像素坐标{S 1',S' 2,S 3'}, 其中,S 1'表示在第二图像上第一运动目标对应的目标像素点的像素坐标,S' 2表示在第二图像上第二运动目标对应的目标像素点的像素坐标,S 3'表示在第二图像上第三运动目标对应的目标像素点的像素坐标。其中,
Figure PCTCN2020093035-appb-000009
Figure PCTCN2020093035-appb-000010
表示运动目标在第一图像中的像素坐标,
Figure PCTCN2020093035-appb-000011
表示运动目标在第二图像中的像素坐标,图4中,
Figure PCTCN2020093035-appb-000012
表示第一运动目标在第一图像中的像素坐标,
Figure PCTCN2020093035-appb-000013
表示第一运动目标在第二图像中的像素坐标,
Figure PCTCN2020093035-appb-000014
表示第二运动目标在第一图像中的像素坐标,
Figure PCTCN2020093035-appb-000015
表示第二运动目标在第二图像中的像素坐标,
Figure PCTCN2020093035-appb-000016
表示第三运动目标在第一图像中的像素坐标,
Figure PCTCN2020093035-appb-000017
表示第三运动目标在第二图像中的像素坐标。
然后,可以获取第一图像和第二图像中3个运动目标的相对位置差,即获取第一图像上S 1和第二图像上S 1'的像素坐标之间的相对位置差ΔS 1、第一图像上S 2和第二图像上S' 2的像素坐标之间的相对位置差ΔS 2、以及第一图像上S 3和第二图像上S 3'的像素坐标之间的相对位置差ΔS 3,具体可以如下:
Figure PCTCN2020093035-appb-000018
其中,ΔS j表示3个运动目标的相对位置差,此时可以根据第一图像和第二图像的采集时间间隔T和3个运动目标的相对位置差ΔS j确定各个运动目标对应的偏移速率p tj
Figure PCTCN2020093035-appb-000019
其中,p t1表示第一运动目标对应的偏移速率,p t2表示第二运动目标对应的偏移速率,p t3表示第三运动目标对应的偏移速率,ΔS 1表示第一运动目标在第一图像和第二图像上的相对位置差,ΔS 2表示第二运动目标在第一图像和第二图像上的相对位置差,ΔS 3表示第三运动目标在第一图像和第二图像上的相对位置差。
在得到背景对应的偏移速率p b、第一运动目标对应的偏移速率p t1、第二运动目标对应的偏移速率p t2、以及第三运动目标对应的偏移速率p t3后,可以确定综合计算目标偏移速率
Figure PCTCN2020093035-appb-000020
Figure PCTCN2020093035-appb-000021
其中,
Figure PCTCN2020093035-appb-000022
表示目标偏移速率,p b表示背景对应的偏移速率,p t1表示第一运动目标对应的偏移速率,p t2表示第二运动目标对应的偏移速率,p t3表示第三运动目标对应的偏移速率,a 0表示背景对应的偏移速率的权重值,a 1表示表示第一运动目标对应的偏移速率的权重值,a 2表示表示第二运动目标对应的偏移速率的权重值,a 3表示表示第三运动目标对应的偏移速率的权重值,a 0+a 1+a 2+a 3=1。
此时,可以利用目标偏移速率表征目标像素点的移动特性。
在一些实施方式中,获取可移动平台的运动状态信息可以包括:获取可移动平台的高度、平移运动速度、和/或偏航方向上的角速度。
其中,可移动平台的运动状态信息可以包括可移动平台的高度、平移运动速度、以及偏航方向上的角速度中的至少一种,以可移动平台为无人机为例,运动状态信息可以包括可无人机飞行的高度、平移运动速度、以及偏航方向上的角速度中的至少一种,该高度可以是无人机距离地面的高度,该平移运动速度可以是无人机的飞行速度。在此实施方式中,考虑可移动平台的高度可以进一步提高帧率调整的准确性。举个例子,当无人机以相同的速度,在贴近地面的高度和远离地面的高度拍摄视频时,用户的视觉感知是完全不同的。贴近地面时用户视觉感知的速度相对于远离地面时用户视觉感知的速度更大。因此,在考虑平移运动速度、以及偏航方向上的角速度的基础上进一步参考高度,可以使得帧率的调整更加符合用户的直观感受。
需要说明的是,该运动状态信息还可以包括飞行方向、飞行姿态或位置信息等其他信息,具体内容在此处不做限定。
在一些实施方式中,获取可移动平台的高度、平移运动速度、和/或偏航方向上的角速度可以包括:通过可移动平台安装的惯性测量单元,获取可移动平台的高度、平移运动速度、和/或偏航方向上的角速度。
为了提高运动状态信息获取的便捷性和准确性,可以通过可移动平台(例如无人机)安装的惯性测量单元IMU,获取可移动平台的高度、平移运动速度、和/或偏航方向上的角速度等信息。
S102、根据图像中目标像素点的移动特性和/或可移动平台的运动状态信息,确定可移动平台向控制终端进行图像传输的目标帧率。
以可移动平台为无人机为例,在得到目标像素点的移动特性后,可以根据目标像素点的移动特性确定无人机向控制终端进行图像传输的目标帧率;或者,在得到无人机的运动状态信息后,可以根据无人机的运动状态信息确定无人机向控制终端进行图像传输的目标帧率;或者,在得到目标像素点的移动特性和无人机的运动状态信息后,可以根据目标像素点的移动特性和无人机的运动状态信息,确定无人机向控制终端进行图像传输的目标帧率。从而使得无人机可以智能感知当前的飞行环境,结合无人机的视觉感知的移动特性以及检测到的运动状态信息,自适应的动态调整图像传输的目标帧率,以便后续控制终端达到显示图像传输画面的流畅度和清晰度的最佳平衡,以提升整体航拍的图像传输的效果。
在一些实施方式中,目标像素点的移动特性用目标像素点的偏移速率表征,根据图像中目标像素点的移动特性,确定可移动平台向控制终端进行图像传输的目标帧率可以包括:当偏移速率小于第一预设速率阈值时,将第一帧率设置为目标帧率;或者,当偏移速率大于第二预设速率阈值时,将第二帧率设置为目标帧率;或者,当偏移速率大于或等于第一预设速率阈值,且偏移速率小于或等于第二预设速率阈值时,将当前帧率设置为目标帧率;其中,第一预设速率阈值小于第二预设速率阈值,第一帧率小于第二帧率。
为了提高目标帧率确定的便捷性,可以仅利用目标像素点的偏移速率来确定目标帧率,具体地,在得到目标像素点的偏移速率后,可以获取与目标像素点的偏移速率对应的帧率控制决策,根据帧率控制决策确定与目标像素点的偏移速率对应的图像传输的目标帧率,其中,帧率控制决策可以是多个不同的偏移速率与各个帧率之间的映射关系,通过查询该映射关系可以确定与目标像素点的偏移速率对应的图像传输的目标帧率。或者,帧率控制决策可以是偏移速率与帧率之间的计算转换关系,通过该计算转换关系可以基于目标像素点的偏移速率计算得到对应的图像传输的目标帧率。当然,该帧率控制决策还可以根据实际需要进行灵活设置,具体内容在此处不做限定。
例如,在得到目标像素点的偏移速率后,可以判断偏移速率是否小于第一预设速率阈值,当偏移速率小于第一预设速率阈值时,说明无人机的飞行速度较慢,视频画面的内容变化较慢,为了保障采集到每帧图像的高清晰度,提升航拍的图像传输的画质,此时可以采用较低帧率,即将第一帧率设置为目标帧率。或者,当偏移速率大于第二预设速率阈值时,说明视频画面的内容变化较 快,为了保障后续视频播放的流畅度,提升航拍的图像传输的效果,此时可以采用较高帧率,即将第二帧率设置为目标帧率。或者,当偏移速率大于或等于第一预设速率阈值,且偏移速率小于或等于第二预设速率阈值时,说明无人机的飞行状态变化不大,不需要对帧率进行调整,此时可以维持当前帧率不变,将当前帧率设置为目标帧率。具体可以如下所示:
Figure PCTCN2020093035-appb-000023
then,选择帧率K2;
Figure PCTCN2020093035-appb-000024
then,选择帧率K1;
else保持当前帧率不变。
其中,
Figure PCTCN2020093035-appb-000025
表示目标像素点的偏移速率,p u表示第二预设速率阈值,p l表示第一预设速率阈值,K2表示第二帧率,K1表示第一帧率。第一预设速率阈值小于第二预设速率阈值,第一帧率小于第二帧率,第一预设速率阈值、第二预设速率阈值、第一帧率、以及第二帧率可以根据实际需要进行灵活设置,具体取值在此处不做限定。例如,高清模式对应的帧率可以为30fps,流畅模式对应的帧率可以为60fps等。
需要说明的是,帧率的调节不仅限于第一帧率和第二帧率等,还可以包括第三帧率、第四帧率、第五帧率等多个不同帧率,可以建立多个不同帧率与各个偏移速率之间的映射关系,以便基于多个不同帧率与各个偏移速率之间的映射关系,确定当前检测得到的偏移速率对应的帧率,得到目标帧率。
在一些实施方式中,根据可移动平台的运动状态信息,确定可移动平台向控制终端进行图像传输的目标帧率可以包括:若高度小于第一高度阈值,且平移运动速度大于第一速度阈值,则将第二帧率设置为目标帧率;若高度大于或等于第一高度阈值,或平移运动速度小于或等于第一速度阈值,则判断角速度是否大于角速度阈值;若角速度大于角速度阈值,则将第二帧率设置为目标帧率;若角速度小于或等于角速度阈值,则判断高度是否大于第二高度阈值,且平移运动速度是否小于第二速度阈值;若高度大于第二高度阈值,且平移运动速度小于第二速度阈值,则将第一帧率设置为目标帧率;若高度小于或等于第二高度阈值,或平移运动速度大于或等于第二速度阈值,则将当前帧率设置为目标帧率;其中,第一高度阈值小于第二高度阈值,第一速度阈值小于第二速度阈值,第一帧率小于第二帧率。
以可移动平台为无人机为例,为了提高目标帧率确定的灵活性和效率,可以仅利用无人机的运动状态信息来确定目标帧率,具体地,在得到无人机的高 度、平移运动速度、以及偏航方向上的角速度等运动状态信息后,可以获取与运动状态信息对应的帧率控制决策,根据帧率控制决策确定与运动状态信息对应的图像传输的目标帧率,其中,帧率控制决策可以是多个不同的运动状态信息与各个帧率之间的映射关系,通过查询该映射关系可以确定与当前检测得到的运动状态信息对应的图像传输的目标帧率。或者,帧率控制决策可以是运动状态信息与帧率之间的计算转换关系,通过该计算转换关系可以基于运动状态信息计算得到对应的图像传输的目标帧率。当然,该帧率控制决策还可以根据实际需要进行灵活设置,具体内容在此处不做限定。
例如,在得到无人机的高度、平移运动速度、以及偏航方向上的角速度等运动状态信息后,可以判断高度是否小于第一高度阈值,且平移运动速度是否大于第一速度阈值,若高度小于第一高度阈值,且平移运动速度大于第一速度阈值,则说明视频画面的内容变化较快,为了保障后续视频播放的流畅度,提升航拍的图像传输的效果,此时可以采用较高帧率,即将第二帧率设置为目标帧率。若高度大于或等于第一高度阈值,或平移运动速度小于或等于第一速度阈值,则可以进一步判断角速度是否大于角速度阈值;若角速度大于角速度阈值,则说明视频画面的内容变化较快,为了保障后续对图像生成视频播放的流畅度,提升航拍的图像传输的效果,此时可以采用较高帧率,即将第二帧率设置为目标帧率;若角速度小于或等于角速度阈值,则可以进一步判断高度是否大于第二高度阈值,且平移运动速度是否小于第二速度阈值;若高度大于第二高度阈值,且平移运动速度小于第二速度阈值,则说明视频画面的内容变化较慢,为了保障采集到每帧图像的高清晰度,提升航拍的图像传输的画质,此时可以采用较低帧率,即将第一帧率设置为目标帧率;若高度小于或等于第二高度阈值,或平移运动速度大于或等于第二速度阈值,则说明无人机的飞行状态变化不大,不需要对帧率进行调整,此时可以维持当前帧率不变,将当前帧率设置为目标帧率。具体可以如下所示:
if h<H l且v>v l,then,选择帧率K2;
eles if w y>w T,then,选择帧率K2;
else if h>H u且v<v u,then,选择帧率K1;
else保持当前帧率不变。
其中,h表示高度,v表示平移运动速度,w y表示偏航方向上的角速度,H l表示第一高度阈值,H u表示第二高度阈值,v l表示第一速度阈值,v u表示第二 速度阈值,w T表示角速度阈值,K2表示第二帧率,K1表示第一帧率。第一高度阈值小于第二高度阈值,第一速度阈值小于第二速度阈值,第一帧率小于第二帧率。第一预设速率阈值、第二预设速率阈值、第一帧率、以及第二帧率可以根据实际需要进行灵活设置,具体取值在此处不做限定。
需要说明的是,帧率的调节不仅限于第一帧率和第二帧率等,还可以包括第三帧率、第四帧帧率、第五帧率等多个不同帧率,可以建立多个不同帧率与各个运动状态信息之间的映射关系,以便基于多个不同帧率与各个运动状态信息之间的映射关系,确定当前检测得到的运动状态信息对应的帧率,得到目标帧率。此外,当平移运动速度和角速度为矢量值时,可以取平移运动速度的绝对值与速度阈值进行比较,以及取角速度的绝对值与角速度阈值进行比较等。
在一些实施方式中,根据图像中目标像素点的移动特性和可移动平台的运动状态信息,确定可移动平台向控制终端进行图像传输的目标帧率可以包括:若偏移速率大于第二预设速率阈值,则将第二帧率设置为目标帧率;若偏移速率小于或等于第二预设速率阈值,则判断高度是否小于第一高度阈值,且平移运动速度是否大于第一速度阈值;若高度小于第一高度阈值,且平移运动速度大于第一速度阈值,则将第二帧率设置为目标帧率;若高度大于或等于第一高度阈值,或平移运动速度小于或等于第一速度阈值,则判断角速度是否大于角速度阈值;若角速度大于角速度阈值,则将第二帧率设置为目标帧率;若角速度小于或等于角速度阈值,则判断偏移速率是否小于第一预设速率阈值;若偏移速率小于第一预设速率阈值,则将第一帧率设置为目标帧率;若偏移速率大于或等于第一预设速率阈值,则判断高度是否大于第二高度阈值,且平移运动速度是否小于第二速度阈值;若高度大于第二高度阈值,且平移运动速度小于第二速度阈值,则将第一帧率设置为目标帧率;若高度小于或等于第二高度阈值,或平移运动速度大于或等于第二速度阈值,则将当前帧率设置为目标帧率;其中,第一高度阈值小于第二高度阈值,第一速度阈值小于第二速度阈值,第一帧率小于第二帧率。
为了提高目标帧率确定的精准性,可以结合目标像素点的偏移速率和无人机的运动状态信息来确定目标帧率,具体地,在得到目标像素点的偏移速率,以及无人机的高度、平移运动速度、以及偏航方向上的角速度等运动状态信息后,可以获取与偏移速率和运动状态信息对应的帧率控制决策,根据帧率控制决策确定与偏移速率和运动状态信息对应的图像传输的目标帧率,其中,帧率 控制决策可以是不同的偏移速率和运动状态信息、与各个帧率之间的映射关系,通过查询该映射关系可以确定与当前检测得到的偏移速率和运动状态信息对应的图像传输的目标帧率。或者,帧率控制决策可以是偏移速率和运动状态信息与帧率之间的计算转换关系,通过该计算转换关系可以基于偏移速率和运动状态信息计算得到对应的图像传输的目标帧率。当然,该帧率控制决策还可以根据实际需要进行灵活设置,具体内容在此处不做限定。
例如,在得到目标像素点的偏移速率,以及无人机的高度、平移运动速度、以及偏航方向上的角速度等运动状态信息后,可以判断偏移速率是否大于第二预设速率阈值,若偏移速率大于第二预设速率阈值,则说明无人机的飞行速度较快,视频画面的内容变化也较快,为了保障后续对图像生成视频播放的流畅度,提升航拍的图像传输的效果,此时可以采用较高帧率,即将第二帧率设置为目标帧率。若偏移速率小于或等于第二预设速率阈值,则可以进一步判断高度是否小于第一高度阈值,且平移运动速度是否大于第一速度阈值;若高度小于第一高度阈值,且平移运动速度大于第一速度阈值,则说明视频画面的内容变化也较快,为了保障后续对图像生成视频播放的流畅度,提升航拍的图像传输的效果,此时可以采用较高帧率,即将第二帧率设置为目标帧率;若高度大于或等于第一高度阈值,或平移运动速度小于或等于第一速度阈值,则可以进一步判断角速度是否大于角速度阈值;若角速度大于角速度阈值,则说明视频画面的内容变化也较快,为了保障后续对图像生成视频播放的流畅度,提升航拍的图像传输的效果,此时可以采用较高帧率,即将第二帧率设置为目标帧率;若角速度小于或等于角速度阈值,则可以进一步判断偏移速率是否小于第一预设速率阈值;若偏移速率小于第一预设速率阈值,则说明视频画面的内容变化较慢,为了保障采集到每帧图像的高清晰度,提升航拍的图像传输的画质,此时可以采用较低帧率,即将第一帧率设置为目标帧率;若偏移速率大于或等于第一预设速率阈值,则可以进一步判断高度是否大于第二高度阈值,且平移运动速度是否小于第二速度阈值;若高度大于第二高度阈值,且平移运动速度小于第二速度阈值,则说明视频画面的内容变化较慢,为了保障采集到每帧图像的高清晰度,提升航拍的图像传输的画质,此时可以采用较低帧率,即将第一帧率设置为目标帧率;若高度小于或等于第二高度阈值,或平移运动速度大于或等于第二速度阈值,则说明无人机的飞行状态变化不大,不需要对帧率进行调整,此时可以维持当前帧率不变,将当前帧率设置为目标帧率。具体可以如 下所示:
Figure PCTCN2020093035-appb-000026
选择帧率K2;
else if h<H l且v>v l,then,选择帧率K2;
eles if w y>w T,then,选择帧率K2;
Figure PCTCN2020093035-appb-000027
then,选择帧率K1;
else if h>H u且v<v uthen,选择帧率K1;
else保持当前帧率不变。
其中,各个参数的表示与上述一致,在此处不做赘述。
需要说明的是,可以仅基于偏移速率和部分的运动状态信息来确定目标帧率,此外,偏移速率和各类运动状态信息的判断顺序可以根据实际需要进行灵活调整,具体内容在此处不做限定。
在一些实施方式中,确定可移动平台向控制终端进行图像传输的目标帧率之后,图像传输方法还可以包括:基于目标帧率生成视频码流数据;将视频码流数据发送给控制终端。
为了提高图像传输的效率和安全性,在确定目标帧率后,可以通过编码模块基于目标帧率对图像采集装置采集得到的图像进行编码,生成视频码流数据,其中,编码方式可以根据实际需要进行灵活设置,例如,编码方式可以包括H.264或H.265等编码。在生成视频码流数据后,可以通过无线通讯模块将视频码流数据发送给与无人机连接的控制终端。
控制终端在接收到视频码流数据后,可以通过视频解码模块对视频码流数据进行解码,得到图像,该图像可以包括多帧,多帧图像可以生成视频数据,该视频数据可以是YUV视频,该YUV分为三个分量,“Y”表示明亮度(Luminance或Luma),也就是灰度值;“U”和“V”表示色度(Chrominance或Chroma),其作用是描述影像色彩及饱和度,用于指定像素的颜色。此时,控制终端可以通过显示器对解码得到的图像进行显示。由于控制终端每收到一帧视频码流数据则解码一帧,不需要帧率信息,因此无人机不需要通知控制终端当前所采用的帧率,保障了动态自适应帧率实施的可靠性。
在一些实施方式中,图像采集装置为第一图像采集装置,可移动平台还包括第二图像采集装置,基于目标帧率,生成视频码流数据可以包括:基于目标帧率,对第二图像采集装置采集的图像进行编码,生成视频码流数据。
其中,采集用于确定目标帧率的图像的图像采集装置,与采集用于传输给 控制终端的图像的图像采集装置可以一致,也可以不一致,为了提高无人机飞行的安全性,提高图像传输的效率,以及节省计算资源,可以将采集图像确定目标帧率的图像采集装置,与采集图像生成视频码流数据的图像采集装置分别设置,即通过第一图像采集装置采集的图像,以便基于该图像来确定目标帧率,而在生成视频码流数据的过程中,可以通过第二图像采集装置采集图像,并基于目标帧率对第二图像采集装置采集的图像进行编码,生成视频码流数据,将视频码流数据发送给控制终端。
在一实施例中,第一图像采集装置为分辨率较低的图像采集装置,第二图像采集装置为分辨率较高的图像采集装置。第一图像采集装置例如可以是安装在无人机上的双目摄像头,第二图像采集装置例如可以是无人机的云台上挂载的主摄像机。在其他的实施例中,采集用于确定目标帧率的图像的图像采集装置,与采集用于传输给控制终端的图像的图像采集装置可以一致。例如。都是无人机的云台上挂载的主摄像机。
本申请实施例可以获取图像采集装置采集的图像中目标像素点的移动特性;和/或,获取可移动平台的运动状态信息;根据图像中目标像素点的移动特性和/或可移动平台的运动状态信息,确定可移动平台向控制终端进行图像传输的目标帧率。该方案可以自动确定目标帧率,而不需要用户手动选择,提高了目标帧率确定的及时性和准确性。
请参阅图5,图5是本申请一实施例提供的可移动平台的示意性框图。该可移动平台11可以包括处理器111和存储器112,处理器111和存储器112通过总线连接,该总线比如为I2C(Inter-integrated Circuit)总线。
具体地,处理器111可以是微控制单元(Micro-controller Unit,MCU)、中央处理单元(Central Processing Unit,CPU)或数字信号处理器(Digital Signal Processor,DSP)等。
具体地,存储器112可以是Flash芯片、只读存储器(ROM,Read-Only Memory)磁盘、光盘、U盘或移动硬盘等,可以用于存储计算机程序。
可移动平台11还可以包括图像采集装置113等,图像采集装置113用于采集图像,可移动平台11还可以包括用于搭载图像采集装置113的云台,该云台可以带动图像采集装置113移动至合适位置,准确采集所需的图像等。可移动平台11的类型可以根据实际需要进行灵活设置,例如,该可移动平台11可以是移动终端、无人机、机器人或云台相机等。
例如,云台相机可以包括相机和云台等,相机用于采集图像,云台用于搭载相机,以带动相机移动至合适位置,准确采集所需的图像等,云台相机可以搭载在无人机上。例如,云台相机可以采集图像,并获取采集的图像中目标像素点的移动特性,和/或,向无人机发送信息获取请求,接收无人机基于信息获取请求返回的无人机的运动状态信息;然后可以根据图像中目标像素点的移动特性和/或无人机的运动状态信息,确定无人机向控制终端进行图像传输的目标帧率,并将目标帧率发送给无人机。此时无人机可以基于目标帧率对图像进行编码生成视频码流数据,将视频码流数据发送给控制终端。
其中,处理器111用于调用存储在存储器112中的计算机程序,并在执行计算机程序时实现本申请实施例提供的图像传输方法,例如可以执行如下步骤:
获取图像采集装置采集的图像中目标像素点的移动特性;和/或,获取可移动平台的运动状态信息;根据图像中目标像素点的移动特性和/或可移动平台的运动状态信息,确定可移动平台向控制终端进行图像传输的目标帧率。
在一些实施方式中,在确定可移动平台向控制终端进行图像传输的目标帧率之后,处理器111还执行:基于目标帧率生成视频码流数据;将视频码流数据发送给控制终端。
在一些实施方式中,图像采集装置为第一图像采集装置,可移动平台还包括第二图像采集装置,在基于目标帧率,生成视频码流数据时,处理器111还执行:基于目标帧率,对第二图像采集装置采集的图像进行编码,生成视频码流数据。
在一些实施方式中,在获取图像采集装置采集的图像中目标像素点的移动特性时,处理器111还执行:获取图像采集装置采集的多帧图像中的第一图像和第二图像;获取第一图像的目标像素点的像素坐标和第二图像的目标像素点的像素坐标,第一图像的目标像素点与第二图像的目标像素点相对应;根据第一图像的目标像素点的像素坐标和第二图像的目标像素点的像素坐标确定目标像素点的移动特性。
在一些实施方式中,在根据第一图像的目标像素点的像素坐标和第二图像的目标像素点的像素坐标确定目标像素点的移动特性时,处理器111还执行:根据第一图像的目标像素点的像素坐标和第二图像的目标像素点的像素坐标确定相对位置差;根据相对位置差和第一图像与第二图像的采集时间间隔,确定目标像素点的偏移速率;根据偏移速率确定目标像素点的移动特性。
在一些实施方式中,目标像素点包括背景对应的第一目标像素点和/或运动目标对应的第二目标像素点,第一图像的第一目标像素点与第二图像的第一目标像素点相对应,第一图像的第二目标像素点与第二图像的第二目标像素点相对应。
在一些实施方式中,在根据偏移速率确定目标像素点的移动特性时,处理器111还执行:将第一目标像素点的偏移速率和第二目标像素点的偏移速率进行加权平均,以用加权平均后的偏移速率表征目标像素点的移动特性。
在一些实施方式中,处理器111还执行:基于预先训练的计算模型,识别第一图像和第二图像中的背景和/或运动目标。
在一些实施方式中,运动目标包括多个待选运动目标中对应的图像区域面积最大的一个或前多个运动目标;或者,运动目标包括多个待选运动目标中运动幅度最大的一个或前多个运动区域。
在一些实施方式中,目标像素点的移动特性用目标像素点的偏移速率表征,在根据图像中目标像素点的移动特性,确定可移动平台向控制终端进行图像传输的目标帧率时,处理器111还执行:当偏移速率小于第一预设速率阈值时,将第一帧率设置为目标帧率;或者,当偏移速率大于第二预设速率阈值时,将第二帧率设置为目标帧率;或者,当偏移速率大于或等于第一预设速率阈值,且偏移速率小于或等于第二预设速率阈值时,将当前帧率设置为目标帧率;其中,第一预设速率阈值小于第二预设速率阈值,第一帧率小于第二帧率。
在一些实施方式中,在获取可移动平台的运动状态信息时,处理器111还执行:获取可移动平台的高度、平移运动速度、和/或偏航方向上的角速度。
在一些实施方式中,在获取可移动平台的高度、平移运动速度、和/或偏航方向上的角速度时,处理器111还执行:通过可移动平台安装的惯性测量单元,获取可移动平台的高度、平移运动速度、和/或偏航方向上的角速度。
在一些实施方式中,在根据可移动平台的运动状态信息,确定可移动平台向控制终端进行图像传输的目标帧率时,处理器111还执行:若高度小于第一高度阈值,且平移运动速度大于第一速度阈值,则将第二帧率设置为目标帧率;若高度大于或等于第一高度阈值,或平移运动速度小于或等于第一速度阈值,则判断角速度是否大于角速度阈值;若角速度大于角速度阈值,则将第二帧率设置为目标帧率;若角速度小于或等于角速度阈值,则判断高度是否大于第二高度阈值,且平移运动速度是否小于第二速度阈值;若高度大于第二高度阈值, 且平移运动速度小于第二速度阈值,则将第一帧率设置为目标帧率;若高度小于或等于第二高度阈值,或平移运动速度大于或等于第二速度阈值,则将当前帧率设置为目标帧率;其中,第一高度阈值小于第二高度阈值,第一速度阈值小于第二速度阈值,第一帧率小于第二帧率。
在一些实施方式中,在根据图像中目标像素点的移动特性和可移动平台的运动状态信息,确定可移动平台向控制终端进行图像传输的目标帧率时,处理器111还执行:若偏移速率大于第二预设速率阈值,则将第二帧率设置为目标帧率;若偏移速率小于或等于第二预设速率阈值,则判断高度是否小于第一高度阈值,且平移运动速度是否大于第一速度阈值;若高度小于第一高度阈值,且平移运动速度大于第一速度阈值,则将第二帧率设置为目标帧率;若高度大于或等于第一高度阈值,或平移运动速度小于或等于第一速度阈值,则判断角速度是否大于角速度阈值;若角速度大于角速度阈值,则将第二帧率设置为目标帧率;若角速度小于或等于角速度阈值,则判断偏移速率是否小于第一预设速率阈值;若偏移速率小于第一预设速率阈值,则将第一帧率设置为目标帧率;若偏移速率大于或等于第一预设速率阈值,则判断高度是否大于第二高度阈值,且平移运动速度是否小于第二速度阈值;若高度大于第二高度阈值,且平移运动速度小于第二速度阈值,则将第一帧率设置为目标帧率;若高度小于或等于第二高度阈值,或平移运动速度大于或等于第二速度阈值,则将当前帧率设置为目标帧率;其中,第一高度阈值小于第二高度阈值,第一速度阈值小于第二速度阈值,第一帧率小于第二帧率。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见上文针对图像传输方法的详细描述,此处不再赘述。
本申请的实施例中还提供一种计算机程序,该计算机程序中包括程序指令,处理器执行程序指令,实现本申请实施例提供的图像传输方法。
本申请的实施例中还提供一种计算机可读存储介质,该计算机可读存储介质为计算机可读存储介质,该计算机可读存储介质存储有计算机程序,计算机程序中包括程序指令,处理器执行程序指令,实现本申请实施例提供的图像传输方法。
其中,计算机可读存储介质可以是前述任一实施例所述的可移动平台的内部存储单元,例如可移动平台的硬盘或内存。计算机可读存储介质也可以是可移动平台的外部存储设备,例如可移动平台上配备的插接式硬盘,智能存储卡 (Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。
由于该计算机可读存储介质中所存储的计算机程序,可以执行本申请实施例所提供的任一种图像传输方法,因此,可以实现本申请实施例所提供的任一种图像传输方法所能实现的有益效果,详见前面的实施例,在此不再赘述。
应当理解,在此本申请说明书中所使用的术语仅仅是出于描述特定实施例的目的而并不意在限制本申请。如在本申请说明书和所附权利要求书中所使用的那样,除非上下文清楚地指明其它情况,否则单数形式的“一”、“一个”及“该”意在包括复数形式。
还应当理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者***不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者***所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者***中还存在另外的相同要素。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到各种等效的修改或替换,这些修改或替换都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。

Claims (30)

  1. 一种图像传输方法,其特征在于,所述方法应用于可移动平台,所述可移动平台包括图像采集装置,所述可移动平台与控制终端通信连接,所述方法包括:
    获取所述图像采集装置采集的图像中目标像素点的移动特性;和/或,
    获取所述可移动平台的运动状态信息;
    根据所述图像中目标像素点的移动特性和/或所述可移动平台的运动状态信息,确定所述可移动平台向所述控制终端进行图像传输的目标帧率。
  2. 根据权利要求1所述的图像传输方法,其特征在于,所述确定所述可移动平台向所述控制终端进行图像传输的目标帧率之后,所述方法还包括:
    基于所述目标帧率生成视频码流数据;
    将所述视频码流数据发送给所述控制终端。
  3. 根据权利要求2所述的图像传输方法,其特征在于,所述图像采集装置为第一图像采集装置,所述可移动平台还包括第二图像采集装置,所述基于所述目标帧率,生成视频码流数据,包括:
    基于所述目标帧率,对所述第二图像采集装置采集的图像进行编码,生成所述视频码流数据。
  4. 根据权利要求1所述的图像传输方法,其特征在于,所述获取所述图像采集装置采集的图像中目标像素点的移动特性,包括:
    获取所述图像采集装置采集的多帧图像中的第一图像和第二图像;
    获取所述第一图像的目标像素点的像素坐标和所述第二图像的目标像素点的像素坐标,所述第一图像的目标像素点与所述第二图像的目标像素点相对应;
    根据所述第一图像的目标像素点的像素坐标和所述第二图像的目标像素点的像素坐标确定所述目标像素点的移动特性。
  5. 根据权利要求4所述的图像传输方法,其特征在于,所述根据所述第一图像的目标像素点的像素坐标和所述第二图像的目标像素点的像素坐标确定所述目标像素点的移动特性,包括:
    根据所述第一图像的目标像素点的像素坐标和所述第二图像的目标像素点的像素坐标确定相对位置差;
    根据所述相对位置差和所述第一图像与所述第二图像的采集时间间隔,确定所述目标像素点的偏移速率;
    根据所述偏移速率确定所述目标像素点的移动特性。
  6. 根据权利要求5所述的图像传输方法,其特征在于,所述目标像素点包括背景对应的第一目标像素点和/或运动目标对应的第二目标像素点,所述第一图像的第一目标像素点与所述第二图像的第一目标像素点相对应,所述第一图像的第二目标像素点与所述第二图像的第二目标像素点相对应。
  7. 根据权利要求6所述的图像传输方法,其特征在于,所述根据所述偏移速率确定所述目标像素点的移动特性,包括:
    将所述第一目标像素点的偏移速率和所述第二目标像素点的偏移速率进行加权平均,以用加权平均后的偏移速率表征所述目标像素点的移动特性。
  8. 根据权利要求6所述的图像传输方法,其特征在于,所述图像传输方法还包括:
    基于预先训练的计算模型,识别所述第一图像和所述第二图像中的背景和/或运动目标。
  9. 根据权利要求6所述的图像传输方法,其特征在于,所述运动目标包括多个待选运动目标中对应的图像区域面积最大的一个或前多个运动目标;或者,所述运动目标包括多个待选运动目标中运动幅度最大的一个或前多个运动区域。
  10. 根据权利要求1至9任一项所述的图像传输方法,其特征在于,所述目标像素点的移动特性用所述目标像素点的偏移速率表征,所述根据所述图像中目标像素点的移动特性,确定所述可移动平台向所述控制终端进行图像传输的目标帧率,包括:
    当所述偏移速率小于第一预设速率阈值时,将第一帧率设置为所述目标帧率;或者,
    当所述偏移速率大于第二预设速率阈值时,将第二帧率设置为所述目标帧率;或者,
    当所述偏移速率大于或等于所述第一预设速率阈值,且所述偏移速率小于或等于所述第二预设速率阈值时,将当前帧率设置为所述目标帧率;
    其中,所述第一预设速率阈值小于所述第二预设速率阈值,所述第一帧率小于所述第二帧率。
  11. 根据权利要求1至9任一项所述的图像传输方法,其特征在于,所述获 取所述可移动平台的运动状态信息,包括:
    获取所述可移动平台的高度、平移运动速度、和/或偏航方向上的角速度。
  12. 根据权利要求11所述的图像传输方法,其特征在于,所述获取所述可移动平台的高度、平移运动速度、和/或偏航方向上的角速度,包括:
    通过所述可移动平台安装的惯性测量单元,获取所述可移动平台的高度、平移运动速度、和/或偏航方向上的角速度。
  13. 根据权利要求11所述的图像传输方法,其特征在于,所述根据所述可移动平台的运动状态信息,确定所述可移动平台向所述控制终端进行图像传输的目标帧率,包括:
    若所述高度小于第一高度阈值,且所述平移运动速度大于第一速度阈值,则将第二帧率设置为所述目标帧率;
    若所述高度大于或等于所述第一高度阈值,或所述平移运动速度小于或等于所述第一速度阈值,则判断所述角速度是否大于角速度阈值;
    若所述角速度大于所述角速度阈值,则将所述第二帧率设置为所述目标帧率;
    若所述角速度小于或等于所述角速度阈值,则判断所述高度是否大于第二高度阈值,且所述平移运动速度是否小于第二速度阈值;
    若所述高度大于所述第二高度阈值,且所述平移运动速度小于所述第二速度阈值,则将第一帧率设置为所述目标帧率;
    若所述高度小于或等于所述第二高度阈值,或所述平移运动速度大于或等于所述第二速度阈值,则将当前帧率设置为所述目标帧率;
    其中,所述第一高度阈值小于所述第二高度阈值,所述第一速度阈值小于所述第二速度阈值,所述第一帧率小于所述第二帧率。
  14. 根据权利要求11所述的图像传输方法,其特征在于,所述根据所述图像中目标像素点的移动特性和所述可移动平台的运动状态信息,确定所述可移动平台向所述控制终端进行图像传输的目标帧率,包括:
    若所述偏移速率大于第二预设速率阈值,则将第二帧率设置为所述目标帧率;
    若所述偏移速率小于或等于所述第二预设速率阈值,则判断所述高度是否小于第一高度阈值,且所述平移运动速度是否大于第一速度阈值;
    若所述高度小于所述第一高度阈值,且所述平移运动速度大于所述第一速 度阈值,则将所述第二帧率设置为所述目标帧率;
    若所述高度大于或等于所述第一高度阈值,或所述平移运动速度小于或等于所述第一速度阈值,则判断所述角速度是否大于角速度阈值;
    若所述角速度大于所述角速度阈值,则将所述第二帧率设置为所述目标帧率;
    若所述角速度小于或等于所述角速度阈值,则判断所述偏移速率是否小于第一预设速率阈值;
    若所述偏移速率小于所述第一预设速率阈值,则将第一帧率设置为所述目标帧率;
    若所述偏移速率大于或等于所述第一预设速率阈值,则判断所述高度是否大于第二高度阈值,且所述平移运动速度是否小于第二速度阈值;
    若所述高度大于所述第二高度阈值,且所述平移运动速度小于所述第二速度阈值,则将所述第一帧率设置为所述目标帧率;
    若所述高度小于或等于所述第二高度阈值,或所述平移运动速度大于或等于所述第二速度阈值,则将当前帧率设置为所述目标帧率;
    其中,所述第一高度阈值小于所述第二高度阈值,所述第一速度阈值小于所述第二速度阈值,所述第一帧率小于所述第二帧率。
  15. 一种可移动平台,其特征在于,所述可移动平台与控制终端通信连接,所述可移动平台包括:
    图像采集装置,用于采集图像;
    存储器,用于存储计算机程序;
    处理器,用于调用所述存储器中的计算机程序,以执行:
    获取所述图像采集装置采集的图像中目标像素点的移动特性;和/或,
    获取所述可移动平台的运动状态信息;
    根据所述图像中目标像素点的移动特性和/或所述可移动平台的运动状态信息,确定所述可移动平台向所述控制终端进行图像传输的目标帧率。
  16. 根据权利要求15所述的可移动平台,其特征在于,所述处理器还执行:
    基于所述目标帧率生成视频码流数据;
    将所述视频码流数据发送给所述控制终端。
  17. 根据权利要求16所述的可移动平台,其特征在于,所述处理器还执行:
    基于所述目标帧率,对所述第二图像采集装置采集的图像进行编码,生成 所述视频码流数据。
  18. 根据权利要求15所述的可移动平台,其特征在于,所述处理器还执行:
    获取所述图像采集装置采集的多帧图像中的第一图像和第二图像;
    获取所述第一图像的目标像素点的像素坐标和所述第二图像的目标像素点的像素坐标,所述第一图像的目标像素点与所述第二图像的目标像素点相对应;
    根据所述第一图像的目标像素点的像素坐标和所述第二图像的目标像素点的像素坐标确定所述目标像素点的移动特性。
  19. 根据权利要求18所述的可移动平台,其特征在于,所述处理器还执行:
    根据所述第一图像的目标像素点的像素坐标和所述第二图像的目标像素点的像素坐标确定相对位置差;
    根据所述相对位置差和所述第一图像与所述第二图像的采集时间间隔,确定所述目标像素点的偏移速率;
    根据所述偏移速率确定所述目标像素点的移动特性。
  20. 根据权利要求19所述的可移动平台,其特征在于,所述目标像素点包括背景对应的第一目标像素点和/或运动目标对应的第二目标像素点,所述第一图像的第一目标像素点与所述第二图像的第一目标像素点相对应,所述第一图像的第二目标像素点与所述第二图像的第二目标像素点相对应。
  21. 根据权利要求20所述的可移动平台,其特征在于,所述处理器还执行:
    将所述第一目标像素点的偏移速率和所述第二目标像素点的偏移速率进行加权平均,以用加权平均后的偏移速率表征所述目标像素点的移动特性。
  22. 根据权利要求20所述的可移动平台,其特征在于,所述处理器还执行:
    基于预先训练的计算模型,识别所述第一图像和所述第二图像中的背景和/或运动目标。
  23. 根据权利要求20所述的可移动平台,其特征在于,所述运动目标包括多个待选运动目标中对应的图像区域面积最大的一个或前多个运动目标;或者,所述运动目标包括多个待选运动目标中运动幅度最大的一个或前多个运动区域。
  24. 根据权利要求15至23任一项所述的可移动平台,其特征在于,所述目标像素点的移动特性用所述目标像素点的偏移速率表征,所述处理器还执行:
    当所述偏移速率小于第一预设速率阈值时,将第一帧率设置为所述目标帧率;或者,
    当所述偏移速率大于第二预设速率阈值时,将第二帧率设置为所述目标帧 率;或者,
    当所述偏移速率大于或等于所述第一预设速率阈值,且所述偏移速率小于或等于所述第二预设速率阈值时,将当前帧率设置为所述目标帧率;
    其中,所述第一预设速率阈值小于所述第二预设速率阈值,所述第一帧率小于所述第二帧率。
  25. 根据权利要求15至23任一项所述的可移动平台,其特征在于,所述处理器还执行:
    获取所述可移动平台的高度、平移运动速度、和/或偏航方向上的角速度。
  26. 根据权利要求25所述的可移动平台,其特征在于,所述处理器还执行:
    通过所述可移动平台安装的惯性测量单元,获取所述可移动平台的高度、平移运动速度、和/或偏航方向上的角速度。
  27. 根据权利要求25所述的可移动平台,其特征在于,所述处理器还执行:
    若所述高度小于第一高度阈值,且所述平移运动速度大于第一速度阈值,则将第二帧率设置为所述目标帧率;
    若所述高度大于或等于所述第一高度阈值,或所述平移运动速度小于或等于所述第一速度阈值,则判断所述角速度是否大于角速度阈值;
    若所述角速度大于所述角速度阈值,则将所述第二帧率设置为所述目标帧率;
    若所述角速度小于或等于所述角速度阈值,则判断所述高度是否大于第二高度阈值,且所述平移运动速度是否小于第二速度阈值;
    若所述高度大于所述第二高度阈值,且所述平移运动速度小于所述第二速度阈值,则将第一帧率设置为所述目标帧率;
    若所述高度小于或等于所述第二高度阈值,或所述平移运动速度大于或等于所述第二速度阈值,则将当前帧率设置为所述目标帧率;
    其中,所述第一高度阈值小于所述第二高度阈值,所述第一速度阈值小于所述第二速度阈值,所述第一帧率小于所述第二帧率。
  28. 根据权利要求25所述的可移动平台,其特征在于,所述处理器还执行:
    若所述偏移速率大于第二预设速率阈值,则将第二帧率设置为所述目标帧率;
    若所述偏移速率小于或等于所述第二预设速率阈值,则判断所述高度是否小于第一高度阈值,且所述平移运动速度是否大于第一速度阈值;
    若所述高度小于所述第一高度阈值,且所述平移运动速度大于所述第一速度阈值,则将所述第二帧率设置为所述目标帧率;
    若所述高度大于或等于所述第一高度阈值,或所述平移运动速度小于或等于所述第一速度阈值,则判断所述角速度是否大于角速度阈值;
    若所述角速度大于所述角速度阈值,则将所述第二帧率设置为所述目标帧率;
    若所述角速度小于或等于所述角速度阈值,则判断所述偏移速率是否小于第一预设速率阈值;
    若所述偏移速率小于所述第一预设速率阈值,则将第一帧率设置为所述目标帧率;
    若所述偏移速率大于或等于所述第一预设速率阈值,则判断所述高度是否大于第二高度阈值,且所述平移运动速度是否小于第二速度阈值;
    若所述高度大于所述第二高度阈值,且所述平移运动速度小于所述第二速度阈值,则将所述第一帧率设置为所述目标帧率;
    若所述高度小于或等于所述第二高度阈值,或所述平移运动速度大于或等于所述第二速度阈值,则将当前帧率设置为所述目标帧率;
    其中,所述第一高度阈值小于所述第二高度阈值,所述第一速度阈值小于所述第二速度阈值,所述第一帧率小于所述第二帧率。
  29. 根据权利要求15至23任一项所述的可移动平台,其特征在于,所述可移动平台为移动终端、无人机、云台相机或机器人。
  30. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质用于存储计算机程序,所述计算机程序被处理器加载以执行权利要求1至14任一项所述的图像传输方法。
PCT/CN2020/093035 2020-05-28 2020-05-28 图像传输方法、可移动平台及计算机可读存储介质 WO2021237616A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/093035 WO2021237616A1 (zh) 2020-05-28 2020-05-28 图像传输方法、可移动平台及计算机可读存储介质
CN202080005966.8A CN113056904A (zh) 2020-05-28 2020-05-28 图像传输方法、可移动平台及计算机可读存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/093035 WO2021237616A1 (zh) 2020-05-28 2020-05-28 图像传输方法、可移动平台及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021237616A1 true WO2021237616A1 (zh) 2021-12-02

Family

ID=76509772

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/093035 WO2021237616A1 (zh) 2020-05-28 2020-05-28 图像传输方法、可移动平台及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN113056904A (zh)
WO (1) WO2021237616A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114268742B (zh) * 2022-03-01 2022-05-24 北京瞭望神州科技有限公司 一种天眼芯片处理装置
CN114913471B (zh) * 2022-07-18 2023-09-12 深圳比特微电子科技有限公司 一种图像处理方法、装置和可读存储介质
CN116112475A (zh) * 2022-11-18 2023-05-12 深圳元戎启行科技有限公司 一种用于自动驾驶远程接管的图像传输方法及车载终端
CN116804882B (zh) * 2023-06-14 2023-12-29 黑龙江大学 一种基于流数据处理的智能无人机控制***及其无人机
CN117808324B (zh) * 2024-02-27 2024-06-04 西安麦莎科技有限公司 一种无人机视觉协同的建筑进度评估方法

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102377730A (zh) * 2010-08-11 2012-03-14 中国电信股份有限公司 音视频信号的处理方法及移动终端
CN110807392B (zh) * 2019-10-25 2022-09-06 浙江大华技术股份有限公司 编码控制方法以及相关装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180359413A1 (en) * 2016-01-29 2018-12-13 SZ DJI Technology Co., Ltd. Method, system, device for video data transmission and photographing apparatus
WO2019051649A1 (zh) * 2017-09-12 2019-03-21 深圳市大疆创新科技有限公司 图像传输方法、设备、可移动平台、监控设备及***
CN110291774A (zh) * 2018-03-16 2019-09-27 深圳市大疆创新科技有限公司 一种图像处理方法、设备、***及存储介质
CN109600579A (zh) * 2018-10-29 2019-04-09 歌尔股份有限公司 视频无线传输方法、装置、***和设备
CN110012267A (zh) * 2019-04-02 2019-07-12 深圳市即构科技有限公司 无人机控制方法及音视频数据传输方法

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115152200A (zh) * 2022-05-23 2022-10-04 广东逸动科技有限公司 摄像头的帧率调节方法、装置、电子设备及存储介质
CN117478929A (zh) * 2023-12-28 2024-01-30 昆明中经网络有限公司 一种基于ai大模型的新媒体精品影像处理***
CN117478929B (zh) * 2023-12-28 2024-03-08 昆明中经网络有限公司 一种基于ai大模型的新媒体精品影像处理***
CN118069894A (zh) * 2024-04-12 2024-05-24 乾健科技有限公司 一种大数据存储管理方法及***

Also Published As

Publication number Publication date
CN113056904A (zh) 2021-06-29

Similar Documents

Publication Publication Date Title
WO2021237616A1 (zh) 图像传输方法、可移动平台及计算机可读存储介质
CN108139799B (zh) 基于用户的兴趣区(roi)处理图像数据的***和方法
US11258949B1 (en) Electronic image stabilization to improve video analytics accuracy
WO2018214078A1 (zh) 拍摄控制方法及装置
CN108363946B (zh) 基于无人机的人脸跟踪***及方法
WO2022141376A1 (zh) 一种位姿估计方法及相关装置
CN107509031B (zh) 图像处理方法、装置、移动终端及计算机可读存储介质
WO2022141418A1 (zh) 一种图像处理方法以及装置
WO2017215295A1 (zh) 一种摄像机参数调整方法、导播摄像机及***
WO2018133589A1 (zh) 航拍方法、装置和无人机
WO2022141477A1 (zh) 一种图像处理方法以及装置
WO2022141445A1 (zh) 一种图像处理方法以及装置
WO2017045326A1 (zh) 一种无人飞行器的摄像处理方法
WO2020057609A1 (zh) 图像传输方法、装置、图像发送端及飞行器图传***
WO2022141351A1 (zh) 一种视觉传感器芯片、操作视觉传感器芯片的方法以及设备
WO2022141333A1 (zh) 一种图像处理方法以及装置
CN111880711B (zh) 显示控制方法、装置、电子设备及存储介质
CN109685709A (zh) 一种智能机器人的照明控制方法及装置
JP2023502552A (ja) ウェアラブルデバイス、インテリジェントガイド方法及び装置、ガイドシステム、記憶媒体
CN112640419B (zh) 跟随方法、可移动平台、设备和存储介质
US20120092519A1 (en) Gesture recognition using chroma-keying
WO2022089341A1 (zh) 一种图像处理方法及相关装置
WO2020019130A1 (zh) 运动估计方法及可移动设备
WO2022082440A1 (zh) 确定目标跟随策略的方法、装置、***、设备及存储介质
WO2021253173A1 (zh) 图像处理方法、装置及巡检***

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20938074

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20938074

Country of ref document: EP

Kind code of ref document: A1