WO2019227309A1 - Tracking shooting method, device, and storage medium - Google Patents

Tracking shooting method, device, and storage medium

Info

Publication number
WO2019227309A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
image
tracking
target
information
Prior art date
Application number
PCT/CN2018/088862
Other languages
English (en)
French (fr)
Inventor
王根源 (Wang Genyuan)
岑显龙 (Cen Xianlong)
Original Assignee
SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority to PCT/CN2018/088862 (published as WO2019227309A1)
Priority to CN202110669027.XA (CN113395450A)
Priority to CN201880010526.4A (CN110291775B)
Priority to EP18921273.1A (EP3806443A4)
Publication of WO2019227309A1
Priority to US17/105,931 (US20210084228A1)

Classifications

    • G06F 3/0482: Interaction techniques based on graphical user interfaces [GUI]; interaction with lists of selectable items, e.g. menus
    • G06F 3/04842: Selection of displayed objects or displayed text elements
    • G06F 3/04845: GUI techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/04883: GUI techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • H04N 23/531: Constructional details of electronic viewfinders, being rotatable or detachable
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/62: Control of parameters via user interfaces
    • H04N 23/632: Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/68: Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • The present invention relates to the field of control technology, and in particular to a tracking shooting method, device, and storage medium.
  • Tracking shooting has become a popular shooting method in recent years.
  • At present, tracking shooting in intelligent photography is mainly implemented through a control terminal connected to the shooting device. For example, when a user controls UAV flight shooting through a mobile phone, the user selects the object to be tracked in the UAV application software on the phone, and the UAV then tracks and shoots that object.
  • A control terminal such as a mobile phone usually has a large display screen, which makes it convenient for the user to perform a frame selection operation.
  • Embodiments of the present invention provide a tracking shooting method, device, and storage medium, which allow users to quickly complete tracking shooting settings and are particularly suitable for small display screens.
  • An embodiment of the present invention provides a tracking shooting method, including: determining object feature information used to describe a target object according to an acquired tracking trigger operation; obtaining display position information of the target object set on a screen displaying a preview image; and controlling, according to the object feature information and the display position information, the camera device to track and capture the target object to obtain a target image.
  • An embodiment of the present invention further provides a tracking shooting device, including a memory and a processor.
  • The memory is used to store program instructions.
  • The processor is configured to: determine object feature information used to describe a target object according to an acquired tracking trigger operation; obtain display position information of the target object set on a screen displaying a preview image; and control, according to the object feature information and the display position information, the imaging device to track and capture the target object to obtain a target image.
  • An embodiment of the present invention further provides a computer-readable storage medium.
  • The computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the tracking shooting method according to the first aspect is implemented.
  • In the embodiments of the present invention, the object feature information used to describe the target object is determined from the acquired tracking trigger operation, the display position information of the target object is acquired, and the camera device is controlled according to the object feature information and the display position information to track and shoot the target object to obtain a target image. In this way, tracking shooting can be set up quickly, which is particularly suitable for small display screens and improves the flexibility of tracking shooting.
  • FIG. 1 is a schematic flowchart of a tracking shooting method according to an embodiment of the present invention.
  • FIG. 2a is a schematic diagram of an interface for a click operation on a screen displaying a preview image according to an embodiment of the present invention.
  • FIG. 2b is a schematic diagram of an interface for acquiring a target object from a preview image according to an embodiment of the present invention.
  • FIG. 2c is a schematic diagram of an interface for zooming in on a screen displaying a preview image according to an embodiment of the present invention.
  • FIG. 2d is a schematic diagram of an interface for frame selection on an enlarged preview image according to an embodiment of the present invention.
  • FIG. 3a is a schematic diagram of an interface for frame selection on a screen displaying a preview image according to an embodiment of the present invention.
  • FIG. 3b is a schematic diagram of another interface for acquiring a target object from a preview image according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an interface of a target image according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of another tracking shooting method according to an embodiment of the present invention.
  • FIG. 6 is a schematic flowchart of another tracking shooting method according to an embodiment of the present invention.
  • FIG. 7 is a schematic flowchart of another tracking shooting method according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a tracking shooting device according to an embodiment of the present invention.
  • The tracking shooting method provided by the embodiments of the present invention may be applied to a tracking shooting device.
  • The tracking shooting device may be any of multiple terminal devices such as a smart phone, a tablet computer, a laptop computer, or a wearable device (e.g., a watch or wristband).
  • In some embodiments, the tracking shooting device may be set on a gimbal of an unmanned aerial vehicle (UAV), with a camera device mounted on the gimbal; or, in other embodiments, the tracking shooting device may be set on a terminal device that establishes a communication connection with the gimbal of the UAV.
  • The following describes the tracking shooting method using an unmanned aerial vehicle as an example.
  • A tracking trigger operation performed by a user on a screen displaying a preview image can be acquired by the tracking shooting device.
  • According to the tracking trigger operation, object feature information used to describe a target object can be determined, and the object feature information is sent to an imaging device mounted on the gimbal, so that the imaging device can initialize the object feature information.
  • The tracking shooting device may determine the target object according to the initialized object feature information, and control the camera device, according to the acquired display position information of the target object set by the user on the screen displaying the preview image, to track and shoot the target object to obtain a target image.
  • Further, the tracking shooting device may determine tracking information of the target object on the target image according to the acquired object feature information describing the target object, and generate, according to the display position information and the tracking information, a control instruction for adjusting the gimbal.
  • The tracking shooting device may send the control instruction to the gimbal so that the gimbal rotates according to the instruction; by controlling the rotation of the gimbal, the shooting angle of the camera is adjusted, the target image obtained by the camera device is further refined, and the accuracy of tracking shooting is improved.
  • In an embodiment of the present invention, the tracking trigger operation acquired by the tracking shooting device may be a point selection (click) operation or a frame selection operation.
  • The click operation may be any one or more of a single-click operation, a double-click operation, and a long-press operation acquired on the screen displaying the preview image, or acquired in an application (APP) on the screen displaying the preview image.
  • Similarly, the frame selection operation may be a frame selection operation acquired on the screen displaying the preview image, or acquired in an APP on the screen displaying the preview image.
  • The screen displaying the preview image may be the screen of the tracking shooting device, or the screen of a mobile terminal, such as a mobile phone, that establishes a communication connection with the tracking shooting device.
  • The screen of the tracking shooting device may be relatively small, such as a 3 cm × 3 cm screen, or a screen of any other size.
  • The embodiment of the present invention does not specifically limit the screen size of the tracking shooting device.
  • FIG. 1 is a schematic flowchart of a tracking shooting method according to an embodiment of the present invention.
  • the method may be performed by a tracking shooting device, wherein the specific explanation of the tracking shooting device is as described above.
  • the method according to the embodiment of the present invention includes the following steps.
  • S101: Determine object feature information used to describe a target object according to the acquired tracking trigger operation.
  • The tracking shooting device may determine object feature information used to describe the target object according to the acquired tracking trigger operation, where the tracking trigger operation may be a click operation or a frame selection operation.
  • The embodiment of the present invention does not specifically limit the form of the tracking trigger operation.
  • The object feature information includes any one or more of the length, width, and coordinate information of the image area, which is not specifically limited in the embodiment of the present invention.
  • In one embodiment, the tracking shooting device may acquire the user's click operation on the screen displaying the preview image and determine a clicked image area according to the click operation, so as to obtain, from the clicked image area, object feature information describing the target object in that area.
  • The object feature information includes any one or more of the length, width, and coordinate information of the clicked image area determined by the click operation.
  • The click operation may be a single-click operation, a double-click operation, or a long-press operation.
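As an illustrative sketch (not part of the patent disclosure), the clicked image area and its object feature information described above could be represented as follows. All names and the fixed box size are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class ObjectFeatureInfo:
    """Feature information describing a selected image area: coordinates, length, width."""
    x: int       # top-left x coordinate of the area in the preview image
    y: int       # top-left y coordinate of the area
    width: int   # width of the image area
    height: int  # height of the image area

def feature_info_from_click(cx: int, cy: int, box_size: int = 80) -> ObjectFeatureInfo:
    """Build a fixed-size clicked image area centered on the click point (cx, cy)."""
    half = box_size // 2
    return ObjectFeatureInfo(x=cx - half, y=cy - half, width=box_size, height=box_size)
```

A click at (100, 60) would yield an 80 × 80 area with its top-left corner at (60, 20).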
  • FIG. 2a and FIG. 2b can be used as an example for illustration.
  • FIG. 2a is a schematic diagram of an interface for a click operation on a screen displaying a preview image according to an embodiment of the present invention, and FIG. 2b is a schematic diagram of an interface for acquiring a target object from a preview image.
  • The interface shown in FIG. 2a is a preview image captured by the camera and displayed on the screen; by clicking the point 20 on the screen displaying the preview image in FIG. 2a, the user can automatically determine the clicked image area 21 shown in FIG. 2b.
  • A menu 22 may be displayed on the screen displaying the preview image.
  • The menu 22 includes options for indicating the display position information, such as Center 221, Current Position 222, Custom 223, Cancel 224, and Start 225.
  • The tracking shooting device may obtain the clicked image area 21 shown in FIG. 2b determined by the click operation, obtain object feature information such as the length, width, and coordinate information of the clicked image area 21, and determine an object in the clicked image area 21 as the target object.
  • In one embodiment, the tracking shooting device may determine a target point from the acquired click operation, perform object estimation detection on the preview image centered on the target point to determine the target object, determine a clicked image area based on the target object, and obtain, from the clicked image area, object feature information describing the target object in that area.
  • Specifically, the tracking shooting device may determine the coordinate position of the target point according to the acquired click operation, and use a preset detection algorithm, centered on the target point, to detect whether an object exists in the preview image. If an object exists, the target object is determined and a clicked image area is determined according to the target object, thereby determining object feature information such as the length, width, and coordinate information of the clicked image area of the target object.
  • The preset detection algorithm may be any one or more detection algorithms for determining a target object, and is not specifically limited in the embodiment of the present invention. For example, assuming the preset detection algorithm is a saliency algorithm, the tracking shooting device can obtain, through the saliency algorithm, the size and position information of the most salient object containing the coordinates of the target point, and identify it as the target object.
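The final selection step above can be sketched as follows. This is an assumed simplification: the actual saliency algorithm is not specified by the patent, so the sketch takes precomputed candidate regions with saliency scores and picks the most salient one containing the target point:

```python
def pick_salient_object(target, candidates):
    """Return the most salient candidate box containing the target point, or None.

    target:     (x, y) coordinates of the clicked target point
    candidates: list of ((x, y, w, h), saliency_score) pairs, assumed to come
                from some upstream saliency detector
    """
    tx, ty = target
    # Keep only boxes that contain the target point.
    containing = [(box, score) for box, score in candidates
                  if box[0] <= tx < box[0] + box[2] and box[1] <= ty < box[1] + box[3]]
    if not containing:
        return None  # no object detected around the click
    # The most salient containing box becomes the target object's image area.
    return max(containing, key=lambda bs: bs[1])[0]
```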
  • In one embodiment, the tracking shooting device may obtain the target point determined by the click operation and, centered on the target point, enlarge the preview image according to a preset ratio; it may then obtain the frame-selected image area determined by a frame selection operation on the enlarged preview image, and obtain, from the frame-selected image area, object feature information describing the target object in that area.
  • For example, if the click operation acquired by the tracking shooting device is a single-click operation, the device can obtain the target point determined by the user's single-click and enlarge the preview image centered on that point.
  • The user can then perform a frame selection operation on the enlarged preview image, and the tracking shooting device can obtain the frame-selected image area determined by the user's frame selection operation and determine object feature information such as the length, width, and coordinate information describing that area.
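A frame selected on the enlarged preview must be expressed in original preview coordinates before it can describe the target object. A minimal sketch of that mapping, under the assumption that the zoom is centered on the target point and places that point at a fixed screen position:

```python
def map_zoomed_rect_to_preview(rect, center, zoom, screen_center):
    """Map a frame-selected rect (x, y, w, h) drawn on the zoomed preview back to
    original preview coordinates.

    center:        (x, y) target point in the original preview (zoom center)
    zoom:          enlargement ratio (e.g. 2.0 for 2x)
    screen_center: (x, y) where the zoom center appears on the zoomed view
    """
    x, y, w, h = rect
    cx, cy = center
    sx, sy = screen_center
    # Undo the enlargement: offsets from the zoom center shrink by the zoom factor.
    ox = cx + (x - sx) / zoom
    oy = cy + (y - sy) / zoom
    return (ox, oy, w / zoom, h / zoom)
```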
  • FIG. 2c and FIG. 2d can be used as an example for illustration.
  • FIG. 2c is a schematic diagram of an interface for zooming in on a screen displaying a preview image according to an embodiment of the present invention, and FIG. 2d is a schematic diagram of an interface for frame selection on the enlarged preview image.
  • The user can perform a frame selection operation on the enlarged preview image 24 shown in FIG. 2d; the tracking shooting device can obtain the frame-selected image area 25 determined by the user's frame selection operation and determine object feature information such as the length, width, and coordinate information describing the frame-selected image area 25.
  • In some embodiments, the screen on which the preview image is displayed may have no menu, but only the preview image.
  • FIG. 2a can again be used as an example. Assuming the click operation on the point 20 on the screen displaying the preview image in FIG. 2a is a long-press operation, it triggers the tracking shooting setting; that is, after the user long-presses the point 20 on the preview screen in FIG. 2a for more than a certain time, the tracking shooting device judges that the user has enabled tracking shooting and can obtain the clicked image area 21 determined by long-pressing the point 20.
  • In one embodiment, the tracking shooting device may obtain a frame selection operation on the screen displaying the preview image, determine a frame-selected image area according to the frame selection operation, and obtain, from the frame-selected image area, object feature information describing the target object in that area.
  • The screen displaying the preview image may be the screen of the tracking shooting device or of an APP on the tracking shooting device, or the screen of a mobile terminal that establishes a communication connection with the tracking shooting device or of an APP on that mobile terminal.
  • For example, the tracking shooting device can obtain a frame selection operation on the screen of the mobile terminal displaying the preview image, determine the frame-selected image area according to the frame selection operation, and calculate object feature information such as the length, width, and coordinate information of the frame-selected image area.
  • FIG. 3a and FIG. 3b can be used as an example for illustration.
  • FIG. 3a is a schematic diagram of an interface for frame selection on a screen displaying a preview image according to an embodiment of the present invention, and FIG. 3b is a schematic diagram of another interface for acquiring a target object from a preview image.
  • The interface shown in FIG. 3a is a preview image captured by the camera and displayed on the screen. By frame-selecting the area 30 on the preview image shown in FIG. 3a, the user determines the frame-selected image area 31 shown in FIG. 3b, and the object feature information of the target object in the frame-selected image area 31 is the length, width, coordinate information, etc. of the frame-selected image area 31.
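A frame selection gesture is typically a press point and a release point; the frame-selected image area above can be derived from them. A minimal sketch, with illustrative names, that normalizes a drag in any direction into an (x, y, width, height) area:

```python
def rect_from_drag(p0, p1):
    """Turn a frame-selection drag gesture (press point p0, release point p1)
    into a frame-selected image area (x, y, width, height).

    Works regardless of drag direction (e.g. bottom-right to top-left)."""
    x0, y0 = p0
    x1, y1 = p1
    # Top-left corner is the minimum of the two points on each axis.
    return (min(x0, x1), min(y0, y1), abs(x1 - x0), abs(y1 - y0))
```

The returned tuple carries exactly the length, width, and coordinate information the patent names as object feature information.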
  • S102: Acquire display position information of the target object set on a screen displaying a preview image.
  • The tracking shooting device may obtain the display position information of the target object set on the screen displaying the preview image, where that screen may be the screen of the tracking shooting device or the screen of a mobile terminal that establishes a communication connection with the tracking shooting device.
  • The display position information may be determined according to a user operation on the screen of the tracking shooting device displaying the preview image, or according to a user operation on the screen of the mobile terminal displaying the preview image.
  • In one embodiment, the display position information of the target object may be set through a menu on the screen displaying the preview image, the menu including multiple position indication information options.
  • The tracking shooting device may determine the display position information of the target object according to the acquired click operation of the user on a position indication information option.
  • Specifically, the tracking shooting device may obtain a click operation on a position indication information option included in the menu on the screen of the tracking shooting device displaying the preview image, and determine the display position information of the target object according to the position indication information determined by that click operation.
  • FIG. 2b can be used as an example for illustration.
  • The tracking shooting device can obtain a click operation on the position indication information included in the menu 22 set on its screen displaying the preview image. Assuming the user clicks the Center 221 option in the menu 22, the tracking shooting device may determine, according to the centered position indication information determined by the click operation, that the display position information of the target object is the centered position on the screen displaying the preview image. If the user clicks the Cancel 224 option, the tracking shooting device may cancel the display position information that was set to the centered position for the target object, and the display position information of the target object may be set again.
  • Similarly, if the tracking shooting device obtains the user's click operation on the Current Position 222 option in the menu 22, it can obtain the position indication information determined by the click operation and determine that the display position information of the target object is the current position of the target object on the screen displaying the preview image. If the Cancel 224 option is then clicked, the tracking shooting device can cancel the display position information that was set to the current position for the target object, and the user can reset the display position information of the target object.
  • In one embodiment, the display position information of the target object may also be set by dragging the clicked image area on the screen displaying the preview image: the tracking shooting device may acquire a drag operation on the clicked image area and determine the display position information of the target object according to the position of the clicked image area after the drag operation.
  • FIG. 2b can be used as an example. Assuming the tracking shooting device obtains the user's click operation on the Custom 223 option in the menu 22, the user can drag the clicked image area 21 of the target object on the screen displaying the preview image.
  • In some embodiments, the user may also directly drag the clicked image area 21 of the target object to an arbitrary position without the device obtaining a click operation on the Custom 223 option in the menu 22, and the position of the clicked image area 21 after dragging is used as the display position information of the target object.
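The three ways of setting display position information described above (Center, Current Position, Custom drag) can be sketched as one function. This is an assumed simplification using normalized screen coordinates; the option names mirror the menu in FIG. 2b:

```python
def display_position(option, current_pos=None, custom_pos=None):
    """Return normalized (x, y) screen coordinates for where the target object
    should be displayed, based on the chosen menu option."""
    if option == "center":
        return (0.5, 0.5)  # centered on the screen displaying the preview image
    if option == "current":
        if current_pos is None:
            raise ValueError("the current position of the target object is required")
        return current_pos  # keep the target where it currently appears
    if option == "custom":
        if custom_pos is None:
            raise ValueError("a dragged position is required for the custom option")
        return custom_pos   # position of the clicked image area after dragging
    raise ValueError(f"unknown option: {option}")
```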
  • S103: According to the object feature information and the display position information, control the imaging device to track and capture the target object to obtain a target image.
  • The tracking shooting device may control the imaging device, according to the object feature information and the display position information, to track and shoot the target object to obtain a target image.
  • In one embodiment, the tracking shooting device may send the acquired object feature information and display position information describing the target object to the imaging device through a private protocol, so that the imaging device initializes the object feature information and, after initialization, tracks and shoots the target object according to the acquired display position information to obtain the target image.
  • For example, FIG. 4 is a schematic diagram of an interface of a target image provided by an embodiment of the present invention. Assume the object feature information describing the target object acquired by the tracking shooting device is a clicked image area 21 with a length of 2 cm and a width of 1 cm, together with the GPS coordinate position of the clicked image area 21, and that the display position information of the target object in the clicked image area 21 is centered. When the tracking shooting device acquires the user's click on the Start 225 option in the menu 22, it may send the acquired length, width, GPS coordinate position, and display position information of the clicked image area to the imaging device, so that the imaging device initializes the object feature information of the target object in the clicked image area 21 and, after initialization, tracks and shoots the target object in the clicked image area 21 according to the acquired display position information, obtaining a target image 41 whose display position on the screen is centered as shown in FIG. 4.
  • the tracking and shooting device may determine all the target images on the target image according to the obtained object feature information for describing the target object.
  • the tracking information of the target object is described, and a control instruction is sent to the PTZ.
  • the control instruction is an instruction for adjusting the PTZ based on the display position information and the tracking information.
  • In one embodiment, the tracking and shooting device may pass the obtained object feature information describing the target object to an algorithm operation module. Based on the object feature information set on the screen displaying the preview image and the object feature information of the target image obtained by the imaging device through tracking and shooting, the algorithm operation module may compute tracking information such as the coordinates of the target object on the target image and the size of the image area occupied by the target object.
  • The tracking and shooting device may then generate a control instruction according to the tracking information and the display position information and send it to the pan/tilt, so that the pan/tilt rotates according to the control instruction.
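The loop just described (compute tracking information, then derive a pan/tilt command from the gap between where the target appears and where the user wants it displayed) can be sketched as follows. This is only an illustrative sketch: the patent does not disclose a concrete control law, and all names (`TrackingInfo`, `compute_gimbal_command`) and gain values here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TrackingInfo:
    cx: float  # target center x on the captured image, in pixels
    cy: float  # target center y on the captured image, in pixels

def compute_gimbal_command(track: TrackingInfo,
                           desired_x: float, desired_y: float,
                           k_yaw: float = 0.05, k_pitch: float = 0.05) -> dict:
    """Proportional controller sketch: turn the pixel offset between the
    tracked position and the desired display position into yaw/pitch rates."""
    err_x = track.cx - desired_x
    err_y = track.cy - desired_y
    return {"yaw_rate": k_yaw * err_x, "pitch_rate": -k_pitch * err_y}

# A target already at the desired position produces a zero command:
cmd = compute_gimbal_command(TrackingInfo(960, 540), 960, 540)
```

A real controller would add rate limits, smoothing, and the gimbal's own protocol for rotation commands; the proportional form above only illustrates the direction of the adjustment.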
  • In the embodiments of the present invention, the tracking and shooting device determines object feature information describing the target object from the acquired tracking trigger operation, thereby determining the target object, and acquires the display position information of the target object set on the screen displaying the preview image to determine where the target object should appear on the screen, so as to control the imaging device to track and shoot the target object and obtain a target image. After obtaining the target image, the tracking and shooting device may determine the tracking information of the target object on the target image and send a control instruction to the PTZ to control the PTZ to rotate accordingly. In this way, tracking and shooting of the target object is realized, and the shooting angle of the camera is adjusted by controlling the rotation of the pan/tilt, so that the camera can obtain a more accurate target image, improving the efficiency of tracking and shooting.
  • FIG. 5 is a schematic flowchart of another tracking and shooting method according to an embodiment of the present invention.
  • the method may be performed by a tracking and shooting device.
  • the specific explanation of the tracking and shooting device is as described above.
  • the method according to the embodiment of the present invention includes the following steps.
  • S501 Acquire a tap operation on a screen displaying a preview image.
  • Specifically, the tracking and shooting device may obtain a tap operation on a screen displaying a preview image, where the screen displaying the preview image may be the screen of the tracking and shooting device itself, or the screen of a mobile terminal that has established a communication connection with the tracking and shooting device. The screen of the tracking and shooting device may be relatively small, for example 3 × 3 cm², or of any other size; the embodiment of the present invention does not specifically limit the screen size of the tracking and shooting device.
  • the click operation includes a single-click operation, a double-click operation, or a long-press operation.
  • Specifically, the tracking and shooting device may obtain a single-click, double-click, or long-press operation on its own screen displaying the preview image; alternatively, it may obtain a single-click, double-click, or long-press operation on the screen of a mobile terminal that displays the preview image and has established a communication connection with the tracking and shooting device.
  • S502 Determine a click image area according to the click operation, and obtain object feature information for describing a target object in the click image area according to the click image area.
  • the tracking and shooting device may determine a click image area according to the click operation, and obtain object feature information for describing a target object in the click image area according to the click image area.
  • Specifically, the tracking and shooting device may determine a clicked image area according to the user's single-click, double-click, or long-press operation on the screen of the tracking and shooting device, and obtain from that area the object feature information describing the target object, such as the length, width, and coordinate information of the clicked image area.
  • In one embodiment, the tracking and shooting device may obtain a single-click, double-click, or long-press operation on the screen of a mobile terminal that has established a communication connection with the tracking and shooting device, and obtain from the resulting clicked image area the object feature information describing the target object, such as the length, width, and coordinate information of the clicked image area. Specific examples are as described above and are not repeated here.
  • In one embodiment, the tracking and shooting device may determine a target point from the acquired click operation, perform object estimation and detection on the preview image centered on the target point, determine the target object, and determine a selected image area according to the target object, so as to obtain object feature information describing the target object in the selected image area.
  • Specifically, the tracking and shooting device may obtain the coordinate position of the target point determined by the click operation, detect whether an object exists in the preview image centered on that coordinate position, determine the target object, determine the selected image area according to the target object, and obtain object feature information describing the target object, such as the length, width, and coordinate information of the selected image area. Specific examples are as described above and are not repeated here.
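As a rough illustration of how a tap could seed the detection step above, the sketch below builds a candidate window centered on the tapped coordinate and clipped to the image bounds; a real implementation would run an object detector inside this window to decide the target's actual extent. The helper name and the default window size are assumptions, not part of the patent.

```python
def region_around_tap(tap_x: int, tap_y: int,
                      img_w: int, img_h: int, half: int = 50) -> tuple:
    """Build a candidate detection window centered on the tapped point,
    clipped to the image bounds. Returns (x, y, width, height)."""
    x0 = max(0, tap_x - half)
    y0 = max(0, tap_y - half)
    x1 = min(img_w, tap_x + half)
    y1 = min(img_h, tap_y + half)
    return (x0, y0, x1 - x0, y1 - y0)

# A tap near the corner yields a window clipped to the image edge:
area = region_around_tap(10, 10, 640, 480)  # -> (0, 0, 60, 60)
```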
  • In one embodiment, the tracking and shooting device may obtain the target point determined by the click operation, enlarge the preview image by a preset ratio centered on the target point, obtain a frame-selected image area determined by a frame selection operation on the enlarged preview image, and, according to the frame-selected image area, obtain object feature information describing the target object in the frame-selected image area.
  • Specifically, the tracking and shooting device may obtain the target point determined by the click operation, enlarge the preview image by a preset ratio centered on the target point, and display the enlarged preview image on the screen of the mobile terminal or the screen of the tracking and shooting device. The user may then perform a frame selection operation on the enlarged preview image displayed on the screen, and the frame selection determines the frame-selected image area, from which the tracking and shooting device obtains object feature information describing the target object. Specific examples are as described above and are not repeated here.
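When the frame is drawn on an enlarged preview, the selected coordinates must be mapped back to the original preview image before the object feature information can be expressed on it. A minimal sketch of that inverse mapping, assuming the enlargement is a uniform scale about the tapped point (the function name and parameters are illustrative, not from the patent):

```python
def map_back(x_enlarged: float, y_enlarged: float,
             center_x: float, center_y: float, scale: float) -> tuple:
    """Map a point selected on a preview enlarged by `scale` about
    (center_x, center_y) back to original preview-image coordinates."""
    x = center_x + (x_enlarged - center_x) / scale
    y = center_y + (y_enlarged - center_y) / scale
    return (x, y)

# A point 200 px from the center on a 2x-enlarged preview is only
# 100 px from the center on the original image:
pt = map_back(300, 300, 100, 100, 2.0)  # -> (200.0, 200.0)
```

Applying this mapping to both corners of the frame-selected rectangle recovers its length, width, and coordinates on the original preview.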
  • S503 Acquire display position information of the target object set on a screen on which a preview image is displayed.
  • the tracking and shooting device may obtain display position information of the target object set on a screen displaying a preview image.
  • Specifically, the tracking and shooting device may obtain display position information of the target object set by the user on the screen of the tracking and shooting device, or on the screen of a mobile terminal that has established a communication connection with the tracking and shooting device. Specific examples are as described above and are not repeated here.
  • In one embodiment, the tracking and shooting device may obtain a click operation on the position indication information included in a menu set on the screen displaying the preview image, and determine the display position information of the target object according to the position indication information selected by the click operation.
  • In one embodiment, the tracking and shooting device may obtain a drag operation on the determined clicked image area, and determine the display position information according to the position of the clicked image area after it has been dragged.
  • S504 Send the acquired object feature information and display position information used to describe the target object to the imaging device.
  • Specifically, the tracking and shooting device may send the acquired object feature information and display position information describing the target object to the imaging device, so that the imaging device initializes the object feature information and, after initialization, tracks and shoots the target object according to the obtained display position information to obtain a target image.
  • Specific examples are as described above, and are not repeated here.
  • In the embodiments of the present invention, the tracking and shooting device determines a clicked image area by acquiring a click operation on the screen displaying the preview image. With the click operation, the clicked image area can be acquired not only on a screen of normal size but also automatically on a smaller screen, and the obtained object feature information and display position information describing the target object are sent to the imaging device, so that the imaging device initializes the object feature information and, after initialization, tracks and shoots the target object according to the obtained display position information to obtain a target image.
  • FIG. 6 is a schematic flowchart of another tracking and shooting method according to an embodiment of the present invention.
  • the method may be performed by a tracking and shooting device.
  • the specific explanation of the tracking and shooting device is as described above.
  • the method according to the embodiment of the present invention includes the following steps.
  • S601 Acquire a frame selection operation on a screen displaying a preview image.
  • Specifically, the tracking and shooting device may obtain a frame selection operation on a screen displaying a preview image, where the screen displaying the preview image may be the screen of the tracking and shooting device or the screen of a mobile terminal that has established a communication connection with the tracking and shooting device. The frame selection operation may be obtained on either of these screens.
  • S602 Determine a frame selection image area according to the frame selection operation, and obtain object feature information for describing a target object in the frame selection image area according to the frame selection image area.
  • Specifically, the tracking and shooting device may determine a frame-selected image area according to the obtained frame selection operation, and obtain object feature information describing the target object in the frame-selected image area according to that area.
  • In one embodiment, the tracking and shooting device may obtain a frame selection operation on the screen of the mobile terminal displaying the preview image, determine the frame-selected image area according to the frame selection operation, and calculate the length, width, coordinate information, and the like of the frame-selected image area as the object feature information describing the target object in that area. Likewise, the tracking and shooting device may obtain a frame selection operation on its own screen displaying the preview image and proceed in the same way. Specific examples are as described above and are not repeated here.
  • S603 Acquire display position information of the target object set on a screen displaying the preview image.
  • the tracking and shooting device may obtain display position information of the target object set on a screen displaying a preview image.
  • Specifically, the tracking and shooting device may obtain display position information of the target object set by the user on the screen of the tracking and shooting device, or on the screen of a mobile terminal that has established a communication connection with the tracking and shooting device. Specific examples are as described above and are not repeated here.
  • In one embodiment, the tracking and shooting device may obtain a click operation on the position indication information included in a menu set on the screen displaying the preview image, and determine the display position information of the target object according to the position indication information selected by the click operation. FIG. 3b can be used as an example for illustration.
  • Specifically, the tracking and shooting device may obtain a click operation on a position indication option included in the menu 32 on the screen displaying the preview image. Assuming the user clicks the 321 option in the menu 32, the tracking and shooting device may determine, according to the centered position indication information determined by the click operation, that the display position of the target object is the centered position on the screen displaying the preview image. If the user clicks the Cancel 324 option, the tracking and shooting device may cancel the centered display position set for the target object, and the user may reset the display position information of the target object.
  • In one embodiment, the tracking and shooting device may obtain the position indication information of the current position 322 determined by the click operation, and determine the display position of the target object as the current position of the target object on the screen displaying the preview image. In that case, the tracking and shooting device may also cancel the display position set for the target object at the current position on the screen displaying the preview image, and the user may reset the display position information of the target object.
  • In one embodiment, the tracking and shooting device may obtain a drag operation on the determined frame-selected image area, and determine the display position information according to the position of the frame-selected image area after the drag operation.
  • FIG. 3b can be used as an example for description. Assuming that the tracking and shooting device obtains the user's click operation on the Custom 323 option in the menu 32, the user can drag the frame-selected image area 31 of the target object to any position on the screen displaying the preview image, and the position of the frame-selected image area 31 after dragging is used as the display position information of the target object.
  • In one embodiment, the user may also directly drag the frame-selected image area 31 of the target object to an arbitrary position without a click operation on the Custom 323 option in the menu 32, and the tracking and shooting device uses the position of the image area 31 after dragging as the display position information of the target object.
  • S604 Send the acquired object feature information and display position information used to describe the target object to the imaging device.
  • Specifically, the tracking and shooting device may send the acquired object feature information and display position information describing the target object to the imaging device, so that the imaging device initializes the object feature information and, after initialization, tracks and shoots the target object according to the obtained display position information to obtain a target image.
  • Specific examples are as described above, and are not repeated here.
  • In the embodiments of the present invention, the tracking and shooting device determines a frame-selected image area by acquiring a frame selection operation on the screen displaying the preview image, and sends the obtained object feature information and display position information describing the target object to the imaging device, so that the imaging device initializes the object feature information and, after initialization, tracks and shoots the target object according to the obtained display position information to obtain a target image.
  • FIG. 7 is a schematic flowchart of another tracking and shooting method according to an embodiment of the present invention.
  • the method may be performed by a tracking and shooting device.
  • the specific explanation of the tracking and shooting device is as described above.
  • the method according to the embodiment of the present invention includes the following steps.
  • S701 Determine tracking information of the target object on a target image according to the obtained object feature information for describing the target object.
  • Specifically, after the tracking and shooting device controls the imaging device to track and shoot the target object and obtain a target image, it may determine the tracking information of the target object on the target image according to the obtained object feature information describing the target object. The tracking information of the target object includes object feature information such as the size and coordinates of the image area occupied by the target object on the target image captured by the camera, as well as the display position information of the target object in the target image.
  • S702 Send a control instruction to the PTZ, where the control instruction is an instruction, determined according to the display position information and the tracking information, for adjusting the PTZ.
  • Specifically, the tracking and shooting device may send a control instruction to the PTZ, the control instruction being an instruction for adjusting the PTZ determined according to the display position information and the tracking information.
  • In one embodiment, the tracking and shooting device may determine the credibility of the target image according to the object feature information obtained on the preview image and the tracking information determined on the target image, where the credibility indicates the tracking accuracy of the target image obtained by the imaging device tracking and shooting the target object determined on the preview image. For example, if the object feature information obtained on the preview image is the length, width, and GPS coordinate position of the frame-selected area, and the tracking information is the length, width, and GPS coordinate position determined on the target image, the tracking and shooting device can calculate the credibility of the target image from the two.
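The patent leaves the exact credibility measure open. As one simple, hypothetical choice, the tracked area's size can be compared with the size framed on the preview; the score below is 1.0 when the sizes match exactly and falls toward 0 as they diverge. The function name and the size-ratio formula are illustrative assumptions only.

```python
def credibility(ref_w: float, ref_h: float,
                trk_w: float, trk_h: float) -> float:
    """Hypothetical credibility score in [0, 1]: how closely the tracked
    image area's width/height match those framed on the preview image."""
    rw = min(ref_w, trk_w) / max(ref_w, trk_w)  # width agreement
    rh = min(ref_h, trk_h) / max(ref_h, trk_h)  # height agreement
    return rw * rh

# Identical sizes give a perfect score:
score = credibility(100, 50, 100, 50)  # -> 1.0
```

A production tracker would typically also fold in position agreement or an appearance-similarity term; comparing against a preset threshold on this score is what triggers the full-image re-detection described below.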
  • In one embodiment, the tracking and shooting device may detect whether the obtained credibility is less than a preset threshold. When the credibility is less than the preset threshold, the tracking and shooting device may perform full-image detection on the target image. If the target object is detected in the full image, the detection position information of the target object in the target image is obtained, and the control instruction for adjusting the PTZ is determined according to the detection position information and the display position information, so that the object feature information of the target object in the target image captured by the imaging device matches the object feature information of the target object set in the preview image.
  • Specifically, the tracking and shooting device may determine the rotation angle of the pan/tilt according to the obtained detection position information of the target object in the target image and the display position information, and generate a control instruction carrying the rotation angle, where the control instruction is used to control the pan/tilt to rotate by that angle.
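One simple way to turn the gap between the detected position and the desired display position into a rotation angle is a small-angle mapping through the camera's field of view. The patent does not fix this formula; the function name, the FOV defaults, and the linear mapping below are all assumptions for illustration.

```python
def rotation_angles(det_x: float, det_y: float,
                    desired_x: float, desired_y: float,
                    img_w: int, img_h: int,
                    hfov_deg: float = 80.0, vfov_deg: float = 50.0) -> tuple:
    """Map the pixel offset between the detected target position and the
    desired display position to approximate yaw/pitch angles in degrees,
    assuming the stated horizontal/vertical fields of view."""
    yaw = (det_x - desired_x) / img_w * hfov_deg
    pitch = -(det_y - desired_y) / img_h * vfov_deg
    return (yaw, pitch)

# Target detected a quarter-frame right of the desired center position:
angles = rotation_angles(1440, 540, 960, 540, 1920, 1080)  # -> (20.0, 0.0)
```

The resulting pair would be packed into the control instruction carrying the rotation angle that is sent to the gimbal.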
  • In this way, based on the detection position information of the target object in the target image and the display position information, the tracking and shooting device can adjust the shooting so that the object feature information of the target object on the target image matches the object feature information set on the preview image, which further improves the efficiency of tracking and shooting.
  • In the embodiments of the present invention, the tracking and shooting device determines the credibility of the target image by acquiring the tracking information on the target image. When the credibility is less than a preset threshold, full-image detection is performed on the target image, the rotation angle of the gimbal is determined according to the detection position information and the display position information, and a control instruction carrying the rotation angle is generated to control the gimbal to rotate accordingly, so that the camera mounted on the gimbal adjusts its shooting angle and captures a target image matching the object feature information set on the preview image. In this way, the results of tracking and shooting can be further checked and adjusted, improving the accuracy of tracking and shooting.
  • FIG. 8 is a schematic structural diagram of a tracking and shooting device according to an embodiment of the present invention.
  • Specifically, the tracking and shooting device includes one or more processors 801, one or more input devices 802, one or more output devices 803, and a memory 804.
  • the processor 801, the input device 802, the output device 803, and the memory 804 are connected through a bus 805.
  • the memory 804 is configured to store instructions
  • the processor 801 is configured to execute the instructions stored in the memory 804.
  • the processor 801 is configured to perform the following steps:
  • the camera device is controlled to track and capture the target object to obtain a target image.
  • the tracking trigger operation includes a click operation
  • the processor 801 calls program instructions stored in the memory 804, and is configured to perform the following steps:
  • a click image area is determined according to the click operation, and object feature information used to describe a target object in the click image area is obtained according to the click image area.
  • The processor 801 calls program instructions stored in the memory 804 to perform the following steps:
  • the clicking operation includes a single-click operation or a double-click operation on a screen on which a preview image is displayed.
  • The processor 801 calls program instructions stored in the memory 804 to perform the following steps:
  • object feature information for describing a target object in the frame-selected image region is acquired.
  • The processor 801 calls program instructions stored in the memory 804 to perform the following steps:
  • a frame selection image area is determined according to the frame selection operation, and object feature information for describing a target object in the frame selection image area is obtained according to the frame selection image area.
  • the object feature information includes any one or more of length, width, and coordinate information of the image area.
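The object feature information listed here ("any one or more of length, width, and coordinate information of the image area") can be modeled as a simple record in which every field is optional. The class and field names below are illustrative only; the patent does not prescribe a data structure, and the units of each field are implementation-defined.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectFeatureInfo:
    """Hypothetical container for object feature information: any one or
    more of the image area's length, width, and coordinate information."""
    length: Optional[float] = None  # length of the image area
    width: Optional[float] = None   # width of the image area
    x: Optional[float] = None       # x coordinate of the image area
    y: Optional[float] = None       # y coordinate of the image area

# A frame selection that only yields a size, with no coordinates yet:
info = ObjectFeatureInfo(length=120.0, width=80.0)
```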
  • The processor 801 calls program instructions stored in the memory 804 to perform the following steps:
  • the display position information is determined according to the obtained position information of the frame-selected image area dragged by the drag operation.
  • The processor 801 calls program instructions stored in the memory 804 to perform the following steps:
  • the control instruction is an instruction for adjusting the PTZ determined according to the display position information and the tracking information.
  • The processor 801 calls program instructions stored in the memory 804 to perform the following steps:
  • the credibility is used to indicate a tracking accuracy rate of a target image obtained by the camera device tracking and shooting a target object determined on the preview image;
  • The processor 801 calls program instructions stored in the memory 804 to perform the following steps:
  • a control instruction is determined according to the detection position information and the display position information.
  • The processor 801 calls program instructions stored in the memory 804 to perform the following steps:
  • In an embodiment of the present invention, the processor 801 may be a central processing unit (CPU); the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • The input device 802 may include a touch panel, a fingerprint sensor (for collecting a user's fingerprint information and orientation information), a microphone, and the like, and the output device 803 may include a display (such as an LCD), a speaker, and the like.
  • The memory 804 may include a read-only memory and a random access memory, and provides instructions and data to the processor 801. A part of the memory 804 may further include a non-volatile random access memory. For example, the memory 804 may also store device-type information.
  • For the specific implementation of the processor 801 in this embodiment of the present invention, reference may be made to the description of the related content in the foregoing embodiments, and details are not described herein.
  • In the embodiments of the present invention, the tracking and shooting device determines object feature information describing the target object from the acquired tracking trigger operation, thereby determining the target object, and acquires the display position information of the target object set on the screen displaying the preview image to determine where the target object should appear on the screen, so as to control the imaging device to track and shoot the target object and obtain a target image. After obtaining the target image, the tracking and shooting device may determine the tracking information of the target object on the target image and send a control instruction to the PTZ to control the PTZ to rotate accordingly. In this way, tracking and shooting of the target object is realized, and the shooting angle of the camera is adjusted by controlling the rotation of the pan/tilt, so that the camera can obtain a more accurate target image, improving the efficiency of tracking and shooting.
  • A computer-readable storage medium is also provided in an embodiment of the present invention. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the tracking and shooting method described in the embodiment corresponding to FIG. 7 of the present invention, and can also implement the tracking and shooting device of the embodiment of the present invention described with reference to FIG. 8, which is not repeated here.
  • the computer-readable storage medium may be an internal storage unit of the device according to any one of the foregoing embodiments, such as a hard disk or a memory of the device.
  • the computer-readable storage medium may also be an external storage device of the device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the device.
  • the computer-readable storage medium may further include both an internal storage unit of the device and an external storage device.
  • the computer-readable storage medium is used to store the computer program and other programs and data required by the device.
  • the computer-readable storage medium may also be used to temporarily store data that has been or will be output.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention provide a tracking and shooting method, a device, and a storage medium. The method includes: determining, according to an acquired tracking trigger operation, object feature information for describing a target object; acquiring display position information of the target object set on a screen displaying a preview image; and controlling, according to the object feature information and the display position information, an imaging device to track and shoot the target object to obtain a target image. In this way, tracking and shooting can be set up quickly, which is especially suitable for small displays and improves the flexibility of tracking and shooting.

Description

Tracking and shooting method, device, and storage medium
Technical Field
The present invention relates to the field of control technologies, and in particular, to a tracking and shooting method, a device, and a storage medium.
Background
With the popularization of intelligent shooting devices, especially gimbals, unmanned aerial vehicles, and various cameras, intelligent shooting and video techniques have gradually become popular with users. Among them, tracking shooting is an auxiliary shooting technique that has become popular in recent years. At present, tracking shooting in intelligent photography is mainly realized through a control terminal connected to the shooting device. For example, when a user controls an unmanned aerial vehicle to fly and shoot via a mobile phone, the user frames the corresponding object in the unmanned aerial vehicle's application software on the phone to realize tracking shooting of that object. In this case, the control terminal, such as a mobile phone, usually has a large display screen, which makes the frame selection operation convenient for the user. However, in some cases, when the shooting device itself, such as a gimbal, has only a small display screen, the user still needs to connect an external control terminal such as a mobile phone to complete the frame selection operation and realize tracking shooting, which is very inconvenient. Therefore, how to allow the user to complete the tracking shooting settings more conveniently on the display screen of the shooting device itself is a problem to be solved.
Summary of the Invention
Embodiments of the present invention provide a tracking and shooting method, a device, and a storage medium, which allow a user to quickly set up tracking shooting and are especially suitable for small displays.
In a first aspect, an embodiment of the present invention provides a tracking and shooting method, including:
determining, according to an acquired tracking trigger operation, object feature information for describing a target object;
acquiring display position information of the target object set on a screen displaying a preview image; and
controlling, according to the object feature information and the display position information, an imaging device to track and shoot the target object to obtain a target image.
In a second aspect, an embodiment of the present invention provides a tracking and shooting device, including a memory and a processor;
the memory is configured to store program instructions; and
the processor is configured to: determine, according to an acquired tracking trigger operation, object feature information for describing a target object; acquire display position information of the target object set on a screen displaying a preview image; and control, according to the object feature information and the display position information, an imaging device to track and shoot the target object to obtain a target image.
第三方面,本发明实施例提供了一种计算机可读存储介质,该计算机可读存储介质存储有计算机程序,该计算机程序被处理器执行时实现如上述第一方面所述的跟踪拍摄方法。
本发明实施例中,通过获取跟踪触发操作确定用于描述目标对象的对象特征信息,以及获取所述目标对象的显示位置信息,根据获取到的所述对象特征信息和显示位置信息,控制摄像装置跟踪拍摄所述目标对象得到目标图像,从而实现了快速的跟踪拍摄设置,尤其适用于小显示屏,提高了跟踪拍摄的灵活性。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明实施例提供的一种跟踪拍摄方法的流程示意图;
图2a是本发明实施例提供的一种在显示预览图像的屏幕上的点选操作的界面示意图;
图2b是本发明实施例提供的一种从预览图像中获取目标对象的界面示意图;
图2c是本发明实施例提供的一种在显示预览图像的屏幕上放大处理的界面示意图;
图2d是本发明实施例提供的一种对放大处理后的预览图像的框选操作的界面示意图;
图3a是本发明实施例提供的一种在显示预览图像的屏幕上的框选操作的界面示意图;
图3b是本发明实施例提供的另一种从预览图像中获取目标对象的界面示意图;
图4是本发明实施例提供的一种目标图像的界面示意图;
图5是本发明实施例提供的另一种跟踪拍摄方法的流程示意图;
图6是本发明实施例提供的又一种跟踪拍摄方法的流程示意图;
图7是本发明实施例提供的又一种跟踪拍摄方法的流程示意图;
图8是本发明实施例提供的一种跟踪拍摄设备的结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚地描述,显然,所描述的实施例是本发明的部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
下面结合附图,对本发明的一些实施方式作详细说明。在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。
本发明实施例提供的跟踪拍摄方法可以应用于跟踪拍摄设备,具体地,该跟踪拍摄设备可以为智能手机、平板电脑、膝上型电脑、穿戴式设备(手表、手环)中的任意一种或多种终端设备,其中,所述跟踪拍摄设备可以设置于无人飞行器的云台上,所述云台上挂载了摄像装置;或者,在其他实施例中,所述跟踪拍摄设备也可以设置在与无人飞行器的云台建立通信连接的终端设备上。下面以应用于无人飞行器的跟踪拍摄方法为例进行说明。
本发明实施例,通过跟踪拍摄设备可以获取用户在显示预览图像的屏幕上的跟踪触发操作,根据所述跟踪触发操作,可以确定出用于描述目标对象的对象特征信息,并将所述对象特征信息发送给搭载在云台上的摄像装置,以使所述摄像装置可以对所述对象特征信息进行初始化。该跟踪拍摄设备可以根据初始化的对象特征信息确定出目标对象,并根据获取到的用户在显示预览图像的屏幕上设置的所述目标对象的显示位置信息,控制摄像装置跟踪拍摄所述目标对象得到目标图像。所述跟踪拍摄设备在控制摄像装置拍摄获取到目标图像之后,可以根据获取到的用于描述所述目标对象的对象特征信息,确定在所述目标图像上的所述目标对象的跟踪信息,并根据所述显示位置信息和跟踪信息,生成用于调整云台的控制指令。所述跟踪拍摄设备可以将所述控制指令发送给云台,以使云台根据所述控制指令转动,从而通过控制云台的转动来控制摄像装置调整拍摄角度,对摄像装置拍摄得到的目标图像进行进一步的调整,以提高跟踪拍摄的准确率。
其中,所述跟踪拍摄设备获取到的跟踪触发操作可以是点选操作,也可以是框选操作。所述点选操作可以是在显示预览图像的屏幕上获取到的单击操作、双击操作、长按操作中的任意一种或多种,也可以是在显示预览图像的屏幕上的应用APP上获取到的单击操作、双击操作、长按操作中的任意一种或多种。所述框选操作可以是在显示预览图像的屏幕上获取到的框选操作,也可以是在显示预览图像的屏幕上的应用APP上获取到的框选操作。所述显示预览图像的屏幕可以是所述跟踪拍摄设备的屏幕,也可以是与该跟踪拍摄设备建立通信连接的移动终端如手机的屏幕。所述跟踪拍摄设备的屏幕尺寸可以是较小的屏幕如3×3cm²的屏幕,也可以是其他任意尺寸的屏幕,本发明实施例对所述跟踪拍摄设备的屏幕尺寸不做具体限定。
请参见图1,图1是本发明实施例提供的一种跟踪拍摄方法的流程示意图,所述方法可以由跟踪拍摄设备执行,其中,所述跟踪拍摄设备的具体解释如前所述。具体地,本发明实施例的所述方法包括如下步骤。
S101:根据获取到的跟踪触发操作,确定用于描述目标对象的对象特征信息。
本发明实施例中,跟踪拍摄设备可以根据获取到的跟踪触发操作,确定用于描述目标对象的对象特征信息,其中,所述跟踪触发操作可以是点选操作,也可以是框选操作,本发明实施例对所述触发操作的形式不做具体限定。所述对象特征信息包括图像区域的长度、宽度、坐标信息中的任意一种或多种,本发明实施例不做具体限定。
在一个实施例中,跟踪拍摄设备可以获取用户在显示预览图像的屏幕上的点选操作,并根据所述点选操作确定出点选图像区域,从而根据所述点选图像区域得到用于描述所述点选图像区域内的目标对象的对象特征信息。其中,所述对象特征信息包括所述点选操作确定的点选图像区域的长度、宽度、坐标信息中的任意一种或多种,所述点选操作可以是单击操作、双击操作或长按操作。
具体可以图2a和图2b为例进行说明,图2a是本发明实施例提供的一种在显示预览图像的屏幕上的点选操作的界面示意图,图2b是本发明实施例提供的一种从预览图像中获取目标对象的界面示意图。如图2a所示的界面示意图是通过摄像装置拍摄并显示在屏幕上的预览图像,用户可以通过对图2a显示预览图像的屏幕上的点20的点击操作,自动确定出图2b中所示的点选图像区域21,将该点选图像区域21中的图像作为目标对象,并根据所述点选图像区域21得到用于描述所述点选图像区域21内的目标对象的对象特征信息,即如图中21所示的点选图像区域的长度、宽度、坐标信息等。在确定出点选图像区域21时,显示预览图像的屏幕上还可以显示有菜单22,其中,菜单22包括用于指示显示位置信息的选项如居中221、当前位置222、自定义223,以及取消224、开始225选项。假设图2a所示的对预览图像屏幕上的点20的点选操作为单击操作,则跟踪拍摄设备可以获取所述单击操作确定的如图2b所示的点选图像区域21,并获取所述点选图像区域21的长度、宽度、坐标信息等对象特征信息,将所述点选图像区域21内的对象确定为目标对象。
在一个实施例中,所述跟踪拍摄设备可以通过获取点选操作确定目标点,以所述目标点为中心在所述预览图像上进行对象估计检测,确定出目标对象,并根据所述目标对象确定出点选图像区域,从而根据所述点选图像区域得到用于描述所述点选图像区域内的目标对象的对象特征信息。
具体实施过程中,跟踪拍摄设备可以根据获取到的点选操作确定目标点的坐标位置,以所述目标点为中心在所述预览图像上通过预设的检测算法检测所述预览图像上是否存在对象,如果存在,则确定出目标对象,并根据所述目标对象确定出点选图像区域,从而确定出所述目标对象的点选图像区域的长度、宽度、坐标信息等对象特征信息。其中,所述预设的检测算法可以是用于确定出目标对象的任意一种或多种检测算法,本发明实施例对预设的检测算法不做具体限定。例如,假设所述预设的检测算法为显著性算法,通过所述显著性算法,跟踪拍摄设备可以得出包含目标点坐标信息的最具有显著性的一个物体的大小及位置信息,并将其确定为目标对象。
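上述以目标点为中心的对象估计检测,可以用下面的示意性Python片段来理解:在一张显著性图上,从目标点出发对显著像素做洪泛填充,得到包含该目标点的点选图像区域的坐标与长宽。需要说明的是,该片段并非专利原文内容,其中的函数名、阈值以及洪泛填充方式均为举例假设,专利原文并未限定具体的检测算法。

```python
from collections import deque

def detect_object_box(saliency, point, thresh=0.5):
    """以点选的目标点为中心, 在显著性图上用洪泛填充估计点选图像区域。
    saliency: 二维列表, 每个元素为 0~1 的显著性分值; point: (行, 列)。
    返回 (x, y, w, h), 即区域左上角坐标与宽、高; 目标点不显著时返回 None。
    注: 仅为示意性实现, 阈值与填充策略均为举例假设。"""
    rows, cols = len(saliency), len(saliency[0])
    r0, c0 = point
    if saliency[r0][c0] < thresh:
        return None
    seen = {(r0, c0)}
    queue = deque([(r0, c0)])
    while queue:  # 四邻域洪泛填充, 收集与目标点连通的显著像素
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen \
                    and saliency[nr][nc] >= thresh:
                seen.add((nr, nc))
                queue.append((nr, nc))
    rs = [r for r, _ in seen]
    cs = [c for _, c in seen]
    # x 对应列、y 对应行, w/h 为区域的宽和高
    return (min(cs), min(rs), max(cs) - min(cs) + 1, max(rs) - min(rs) + 1)
```

实际系统中显著性图可由专门的显著性检测算法得到,此处仅演示"由目标点扩展出点选图像区域"这一步骤。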
在一个实施例中,所述跟踪拍摄设备可以通过获取所述点选操作确定的目标点,以所述目标点为中心,按照预设比例将所述预览图像进行图像放大处理,获取对放大处理后的预览图像的框选操作确定的框选图像区域,并根据所述框选图像区域,获取用于描述该框选图像区域内的目标对象的对象特征信息。具体实施过程中,假设该跟踪拍摄设备获取到的点选操作为单击操作,则跟踪拍摄设备可以获取用户在所述跟踪拍摄设备上的单击操作确定的目标点,并以目标点为中心,获取用户在所述跟踪拍摄设备上按照预设比例放大所述预览图像后的图像。用户可以对放大处理后的预览图像进行框选操作,所述跟踪拍摄设备可以获取到用户的框选操作所确定出框选图像区域,并确定出用于描述该框选图像区域的长度、宽度、坐标信息等对象特征信息。
具体可以图2c和图2d为例进行说明,图2c是本发明实施例提供的一种在显示预览图像的屏幕上放大处理的界面示意图,图2d是本发明实施例提供的一种对放大处理后的预览图像的框选操作的界面示意图。假设该跟踪拍摄设备获取到的点选操作为对图2c所示的目标点23的单击操作,跟踪拍摄设备可以获取用户在所述跟踪拍摄设备上的单击操作确定的目标点23的坐标信息,并以目标点23为中心,获取用户在所述跟踪拍摄设备上按照预设比例放大所述预览图像后的图像24。用户可以对放大处理后的预览图像24进行如图2d所示的框选操作,所述跟踪拍摄设备可以获取到用户的框选操作所确定出框选图像区域25,并确定出用于描述该框选图像区域25的长度、宽度、坐标信息等对象特征信息。
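"以目标点为中心按预设比例放大、再把放大图上的框选结果换算回原图坐标"这一过程,可以用下面的示意性Python片段描述。片段并非专利原文内容,裁剪窗口的计算方式与坐标表示均为举例假设:

```python
def zoom_rect(img_w, img_h, center, scale):
    """以目标点 center=(cx, cy) 为中心, 按预设比例 scale 计算放大处理
    所用的裁剪窗口 (x, y, w, h), 并保证窗口不越过图像边界。示意性实现。"""
    cx, cy = center
    w, h = img_w / scale, img_h / scale
    x = min(max(cx - w / 2, 0), img_w - w)
    y = min(max(cy - h / 2, 0), img_h - h)
    return x, y, w, h

def map_box_to_original(box, crop, img_w, img_h):
    """将放大后预览图上的框选图像区域 box=(x, y, w, h) 换算回原图坐标。
    crop 为 zoom_rect 得到的裁剪窗口。"""
    bx, by, bw, bh = box
    cx, cy, cw, ch = crop
    sx, sy = cw / img_w, ch / img_h   # 放大视图每个像素对应的原图尺度
    return (cx + bx * sx, cy + by * sy, bw * sx, bh * sy)
```

例如在100×100的预览图上以点(50, 50)为中心放大2倍,裁剪窗口即(25, 25, 50, 50);用户在放大图上框出的区域经 `map_box_to_original` 换算后,即可作为原图坐标系下的框选图像区域。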
在另一些实施例中,在确定出点选图像区域时,显示预览图像的屏幕上还可以不具有菜单而只有预览图像。具体可以图2a为例进行说明,假设获取到对图2a中显示预览图像的屏幕上的点20的点选操作为长按操作,则触发开启跟踪拍摄设置,即用户长按图2a预览屏幕上的点20超过一定时间后,跟踪拍摄设备即判断用户开启了跟踪拍摄,并可以获取用户长按点20确定出的点选图像区域21。
在一个实施例中,所述跟踪拍摄设备可以获取在显示预览图像的屏幕上的框选操作,并根据所述框选操作确定出框选图像区域,以及根据所述框选图像区域得到用于描述所述框选图像区域内的目标对象的对象特征信息。具体实施过程中,显示预览图像的屏幕可以是跟踪拍摄设备的屏幕或者是跟踪拍摄设备上APP的屏幕,也可以是与跟踪拍摄设备建立通信连接的移动终端的屏幕或所述移动终端上APP的屏幕。跟踪拍摄设备可以获取在显示预览图像的移动终端的屏幕上的框选操作,并根据所述框选操作确定出框选图像区域,以及计算得到该框选图像区域的长度、宽度、坐标信息等用于描述所述框选图像区域内的目标对象的对象特征信息。
具体可以图3a和图3b为例进行说明,图3a是本发明实施例提供的一种在显示预览图像的屏幕上的框选操作的界面示意图,图3b是本发明实施例提供的另一种从预览图像中获取目标对象的界面示意图,如图3a所示的界面示意图是通过摄像装置拍摄并显示在屏幕上的预览图像,用户可以对图3a所示的预览图像进行框选操作框选出区域30,确定出如图3b所示的框选图像区域31,将该框选图像区域31中的图像作为目标对象,并根据所述框选图像区域31得到用于描述所述框选图像区域31内的目标对象的对象特征信息,即如图3b所示的框选图像区域31的长度、宽度、坐标信息等。
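由框选操作得到对象特征信息的计算非常直接:由拖拽的起点和终点即可得到框选图像区域的坐标与长宽。下面是一个示意性片段(非专利原文内容,字段名为举例假设):

```python
def box_features(p1, p2):
    """由框选操作的起点 p1=(x1, y1) 与终点 p2=(x2, y2) 得到对象特征信息:
    框选图像区域的坐标 (x, y) 与宽 w、高 h。示意性实现。"""
    (x1, y1), (x2, y2) = p1, p2
    # 起点与终点可以按任意方向拖拽, 故取最小值为左上角坐标
    return {"x": min(x1, x2), "y": min(y1, y2),
            "w": abs(x2 - x1), "h": abs(y2 - y1)}
```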
S102:获取在显示预览图像的屏幕上设置的所述目标对象的显示位置信息。
本发明实施例中,跟踪拍摄设备可以获取在显示预览图像的屏幕上设置的所述目标对象的显示位置信息,其中,所述显示预览图像的屏幕可以是跟踪拍摄设备的屏幕,也可以是与跟踪拍摄设备建立通信连接的移动终端的屏幕。所述显示位置信息的获取可以是根据用户在显示预览图像的跟踪拍摄设备的屏幕上的用户操作确定得到的;所述显示位置信息也可以是根据用户在显示预览图像的移动终端的屏幕上的用户操作确定得到的。
所述目标对象的显示位置信息的设置方式可以通过在显示预览图像的屏幕上设置菜单进行设置,所述菜单中包括多个位置指示信息选项。所述跟踪拍摄设备可以根据获取到的用户对所述位置指示信息选项的点击操作,确定出目标对象的显示位置信息。在一个实施例中,所述跟踪拍摄设备可以获取对显示预览图像的跟踪拍摄设备屏幕上的菜单中包括的位置指示信息选项的点击操作,并根据获取到的所述点击操作所确定的位置指示信息,确定出所述目标对象的显示位置信息。
具体可以图2b为例进行说明,跟踪拍摄设备可以获取对显示预览图像的跟踪拍摄设备屏幕上设置的菜单22中包括的位置指示信息的点击操作,假设用户点击菜单22中的居中221选项,则跟踪拍摄设备可以根据获取到的所述点击操作确定的居中的位置指示信息,确定所述目标对象的显示位置信息为在显示预览图像的屏幕上的居中位置,如果用户点击取消224选项,则跟踪拍摄设备可以取消为所述目标对象设置的显示位置信息为在显示预览图像的屏幕上的居中位置,并可以重新对目标对象的显示位置信息进行设置。
又例如,假设跟踪拍摄设备获取到用户对菜单22中当前位置选项的点击操作,则可以获取到所述点击操作确定的当前位置222的位置指示信息,并确定所述目标对象的显示位置信息为在显示预览图像的屏幕上目标对象的当前位置,同理,如果用户点击取消224选项,则跟踪拍摄设备可以取消为所述目标对象设置的显示位置信息为在显示预览图像的屏幕上的当前位置,且用户可以重新对目标对象的显示位置信息进行设置。
所述目标对象的显示位置信息的设置方式还可以通过获取在显示预览图像的屏幕上的点选图像区域的拖动操作,将拖动所述点选图像区域后的位置信息确定为所述目标对象的显示位置信息。在一个实施例中,跟踪拍摄设备可以获取对所述点选图像区域的拖动操作,并根据获取到的所述拖动操作拖动后的点选图像区域的位置信息,确定所述目标图像的显示位置信息。具体可以图2b为例进行说明,假设跟踪拍摄设备获取到用户对菜单22中自定义223选项的点击操作,则用户可以在显示预览图像的屏幕上拖动所述目标对象的点选图像区域21至任意位置,并将拖动后点选图像区域21的位置作为该目标对象的显示位置信息。又例如,跟踪拍摄设备可以不用获取对菜单22中自定义223选项的点击操作,直接拖动所述目标对象的点选图像区域21至任意位置,并将拖动后点选图像区域21的位置作为该目标对象的显示位置信息。
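菜单中"居中""当前位置""自定义"三种位置指示信息,可以统一换算为目标对象在屏幕上的期望显示位置。下面的示意性片段以区域中心坐标表示显示位置信息;该数据表示方式为举例假设,并非专利原文限定的实现:

```python
def display_position(option, box, screen_w, screen_h, drag_to=None):
    """根据位置指示信息确定目标对象的显示位置信息(区域中心坐标)。
    option: "居中" / "当前位置" / "自定义"; box=(x, y, w, h) 为目标对象
    的图像区域; "自定义"时使用拖动后的区域 drag_to。示意性实现。"""
    x, y, w, h = box
    if option == "居中":
        return (screen_w / 2, screen_h / 2)
    if option == "当前位置":
        return (x + w / 2, y + h / 2)
    if option == "自定义" and drag_to is not None:
        dx, dy, dw, dh = drag_to
        return (dx + dw / 2, dy + dh / 2)
    raise ValueError("未知的位置指示信息")
```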
S103:根据所述对象特征信息和显示位置信息,控制摄像装置跟踪拍摄所述目标对象得到目标图像。
本发明实施例中,跟踪拍摄设备可以根据所述对象特征信息和显示位置信息,控制摄像装置跟踪拍摄所述目标对象得到目标图像。具体实施过程中,跟踪拍摄设备可以通过私有协议将获取到的用于描述所述目标对象的对象特征信息和显示位置信息发送给摄像装置,以使所述摄像装置对所述对象特征信息进行初始化,并在初始化后根据获取到的显示位置信息,对所述目标对象进行跟踪拍摄得到目标图像。
具体可以图2b和图4为例进行说明,图4是本发明实施例提供的一种目标图像的界面示意图。假设跟踪拍摄设备获取到的用于描述所述目标对象的对象特征信息为点选图像区域21的长为2cm、宽为1cm和所述点选图像区域21的GPS坐标位置,如果跟踪拍摄设备获取到该点选图像区域21内的目标对象的显示位置信息为居中,当跟踪拍摄设备获取到用户点击菜单22中的开始225选项时,可以将获取到的所述点选图像区域的长、宽、GPS坐标位置以及显示位置信息发送给摄像装置,以使所述摄像装置对所述点选图像区域21内的目标对象的对象特征信息进行初始化,并在初始化后根据获取到的显示位置信息,对所述点选图像区域21中的目标对象进行跟踪拍摄,得到如图4所示在屏幕中的显示位置为居中的目标图像41。
在一个实施例中,所述跟踪拍摄设备在控制摄像装置跟踪拍摄所述目标对象得到目标图像之后,可以根据获取到的用于描述所述目标对象的对象特征信息,确定在目标图像上的所述目标对象的跟踪信息,并向云台发送控制指令,所述控制指令是根据所述显示位置信息和跟踪信息确定的用于调整云台的指令。具体实施过程中,所述跟踪拍摄设备可以将获取到的用于描述所述目标对象的对象特征信息传输到算法运算模块,通过该算法运算模块,根据在显示预览图像的屏幕上设置的对象特征信息和摄像装置跟踪拍摄得到的目标图像的对象特征信息,计算得到在目标图像上的所述目标对象的坐标信息、目标对象的图像区域的尺寸信息等跟踪信息。所述跟踪拍摄设备可以根据所述跟踪信息和显示位置信息生成控制指令,并向云台发送控制指令,以使所述云台按照所述控制指令转动。
本发明实施例,跟踪拍摄设备通过获取到的跟踪触发操作,确定用于描述目标对象的对象特征信息,从而确定出目标对象,并通过获取在显示预览图像的屏幕上设置的所述目标对象的显示位置信息,确定出目标对象在屏幕上的显示位置,以控制摄像装置跟踪拍摄所述目标对象得到目标图像,以及在得到目标图像之后,该跟踪拍摄设备可以确定在目标图像上的所述目标对象的跟踪信息,并向云台发送控制指令,以控制所述云台按照所述控制指令转动。通过这种方式,实现了对目标对象的跟踪拍摄,通过控制云台的转动调整摄像装置的拍摄角度,以使摄像装置跟踪拍摄得到更准确的目标图像,从而提高跟踪拍摄的效率。
请参见图5,图5是本发明实施例提供的另一种跟踪拍摄方法的流程示意图,所述方法可以由跟踪拍摄设备执行,其中,所述跟踪拍摄设备的具体解释如前所述。具体地,本发明实施例的所述方法包括如下步骤。
S501:获取在显示预览图像的屏幕上的点选操作。
本发明实施例中,跟踪拍摄设备可以获取在显示预览图像的屏幕上的点选操作,其中,所述显示预览图像的屏幕可以是跟踪拍摄设备的屏幕,也可以是与跟踪拍摄设备建立通信连接的移动终端的屏幕,所述跟踪拍摄设备的屏幕可以是较小尺寸的屏幕如3×3cm²的尺寸,也可以是其他任意尺寸,本发明实施例对跟踪拍摄设备的屏幕尺寸不做具体限定。
所述点选操作包括单击操作、双击操作或长按操作,在一个实施例中,跟踪拍摄设备可以获取用户在显示预览图像的跟踪拍摄设备的屏幕上的单击操作、双击操作或长按操作;或者,跟踪拍摄设备可以获取用户在显示预览图像的与跟踪拍摄设备建立通信连接的移动终端的屏幕上的单击操作、双击操作或长按操作。
S502:根据所述点选操作确定点选图像区域,并根据所述点选图像区域得到用于描述所述点选图像区域内的目标对象的对象特征信息。
本发明实施例中,跟踪拍摄设备可以根据所述点选操作确定点选图像区域,并根据所述点选图像区域得到用于描述所述点选图像区域内的目标对象的对象特征信息。在一个实施例中,跟踪拍摄设备可以根据用户在跟踪拍摄设备的屏幕上的单击操作、双击操作或长按操作确定出点选图像区域,并根据所述点选图像区域得到用于描述所述点选图像区域内的目标对象的点选图像区域的长度、宽度、坐标信息等对象特征信息。或者,跟踪拍摄设备可以获取到与所述跟踪拍摄设备建立通信连接的移动终端的屏幕上的单击操作、双击操作或长按操作,并根据获取到的所述点选图像区域得到用于描述所述点选图像区域内的目标对象的点选图像区域的长度、宽度、坐标信息等对象特征信息。具体举例如前所述,此处不再赘述。
所述跟踪拍摄设备确定目标对象的对象特征信息的方式有多种,其中,在一个实施例中,跟踪拍摄设备可以通过获取点选操作确定目标点,以所述目标点为中心在所述预览图像上进行对象估计检测,确定出目标对象,并根据所述目标对象确定点选图像区域,从而获取用于描述该点选图像区域内的目标对象的对象特征信息。具体实施过程中,所述跟踪拍摄设备可以获取点选操作确定的目标点的坐标位置,以所述目标点的坐标位置为中心,检测所述预览图像中是否存在对象,并确定出目标对象,并根据所述目标对象确定点选图像区域,获取用于描述该点选图像区域的长度、宽度、坐标信息等目标对象的对象特征信息,具体举例如前所述,此处不再赘述。
在一个实施例中,跟踪拍摄设备可以获取所述点选操作确定的目标点,以所述目标点为中心,按照预设比例将所述预览图像进行图像放大处理,获取对放大处理后的预览图像的框选操作确定的框选图像区域,根据所述框选图像区域,获取用于描述该框选图像区域内的目标对象的对象特征信息。具体实施过程中,跟踪拍摄设备可以获取所述点选操作确定的目标点,并以所述目标点为中心,按照预设比例将所述预览图像进行图像放大处理,将放大处理后的预览图像显示在移动终端屏幕或跟踪拍摄设备屏幕上。用户可以对屏幕上显示的放大处理后的预览图像进行框选操作,框选确定出框选图像区域。跟踪拍摄设备根据获取到的所述框选图像区域,得到用于描述该框选图像区域内的目标对象的对象特征信息。具体举例如前所述,此处不再赘述。
S503:获取在显示预览图像的屏幕上设置的所述目标对象的显示位置信息。
本发明实施例中,跟踪拍摄设备可以获取在显示预览图像的屏幕上设置的所述目标对象的显示位置信息。在一个实施例中,跟踪拍摄设备可以获取用户在跟踪拍摄设备屏幕上设置的所述目标对象的显示位置信息,或者,跟踪拍摄设备可以获取用户在与所述跟踪拍摄设备建立通信连接的移动终端屏幕上设置的所述目标对象的显示位置信息。具体地举例说明如前所述,此处不再赘述。
在一个实施例中,跟踪拍摄设备可以获取对在显示预览图像的屏幕上设置的菜单中包括的位置指示信息的点击操作,并根据获取到的所述点击操作确定的位置指示信息,确定所述目标对象的显示位置信息。具体举例说明如前所述,此处不再赘述。
在一个实施例中,跟踪拍摄设备可以获取对所述确定的点选图像区域的拖动操作,根据获取到的所述拖动操作拖动后的点选图像区域的位置信息,确定所述显示位置信息。具体举例说明如前所述,此处不再赘述。
S504:将获取到的用于描述所述目标对象的对象特征信息和显示位置信息发送给摄像装置。
本发明实施例中,跟踪拍摄设备可以将获取到的用于描述所述目标对象的对象特征信息和显示位置信息发送给摄像装置,以使所述摄像装置对所述对象特征信息进行初始化,并在初始化后根据获取到的显示位置信息,对所述目标对象进行跟踪拍摄得到目标图像。具体地举例说明如前所述,此处不再赘述。
本发明实施例,跟踪拍摄设备通过获取在显示预览图像的屏幕上的点选操作,确定点选图像区域,通过所述点选操作不仅可以从正常尺寸的屏幕上获取点选图像区域,也可以从较小屏幕上自动根据点选操作获取点选图像区域,并将获取到的用于描述所述目标对象的对象特征信息和显示位置信息发送给摄像装置,以使所述摄像装置对所述对象特征信息进行初始化,并在初始化后根据获取到的显示位置信息,实现对所述目标对象进行跟踪拍摄得到目标图像。
请参见图6,图6是本发明实施例提供的又一种跟踪拍摄方法的流程示意图,所述方法可以由跟踪拍摄设备执行,其中,所述跟踪拍摄设备的具体解释如前所述。具体地,本发明实施例的所述方法包括如下步骤。
S601:获取在显示预览图像的屏幕上的框选操作。
本发明实施例中,跟踪拍摄设备可以获取在显示预览图像的屏幕上的框选操作,其中,所述显示预览图像的屏幕可以是跟踪拍摄设备的屏幕,也可以是与跟踪拍摄设备建立通信连接的移动终端的屏幕。在一个实施例中,所述跟踪拍摄设备可以获取在显示预览图像的跟踪拍摄设备的屏幕上的框选操作,或者,跟踪拍摄设备可以获取在显示预览图像的与所述跟踪拍摄设备建立通信连接的移动终端的屏幕上的框选操作。
S602:根据所述框选操作确定框选图像区域,并根据框选图像区域得到用于描述所述框选图像区域内的目标对象的对象特征信息。
本发明实施例中,跟踪拍摄设备可以根据获取到的所述框选操作确定框选图像区域,并根据所述框选图像区域得到用于描述所述框选图像区域内的目标对象的对象特征信息。在一个实施例中,跟踪拍摄设备可以获取在显示预览图像的移动终端的屏幕上的框选操作,并根据所述框选操作确定出框选图像区域,以及计算得到该框选图像区域的长度、宽度、坐标信息等用于描述所述框选图像区域内的目标对象的对象特征信息。在一个实施例中,跟踪拍摄设备可以获取在显示预览图像的跟踪拍摄设备的屏幕上的框选操作,并根据所述框选操作确定出框选图像区域,以及计算得到该框选图像区域的长度、宽度、坐标信息等用于描述所述框选图像区域内的目标对象的对象特征信息。具体举例说明如前所述,此处不再赘述。
S603:获取在显示预览图像的屏幕上设置的所述目标对象的显示位置信息。
本发明实施例中,跟踪拍摄设备可以获取在显示预览图像的屏幕上设置的所述目标对象的显示位置信息。在一个实施例中,所述跟踪拍摄设备可以获取用户在跟踪拍摄设备屏幕上设置的所述目标对象的显示位置信息,或者,跟踪拍摄设备可以获取用户在与所述跟踪拍摄设备建立通信连接的移动终端屏幕上设置的所述目标对象的显示位置信息。具体地举例说明如前所述,此处不再赘述。
在一个实施例中,跟踪拍摄设备可以获取对在显示预览图像的屏幕上设置的菜单中包括的位置指示信息的点击操作,并根据获取到的所述点击操作确定的位置指示信息,确定所述目标对象的显示位置信息。
具体可以图3b为例进行说明,跟踪拍摄设备可以获取对显示预览图像的跟踪拍摄设备屏幕上的菜单32中包括的位置指示信息选项的点击操作,假设用户点击菜单32中的居中321选项,则跟踪拍摄设备可以根据获取到的所述点击操作确定的居中的位置指示信息,确定出所述目标对象的显示位置信息为在显示预览图像的屏幕上的居中位置。如果用户点击取消324选项,则跟踪拍摄设备可以取消为所述目标对象设置的显示在显示预览图像的屏幕上的居中位置的显示位置信息,且用户可以重新对目标对象的显示位置信息进行设置。
又例如,假设跟踪拍摄设备获取到用户对菜单32中的当前位置选项的点击操作,则该跟踪拍摄设备可以获取到所述点击操作确定的当前位置322的位置指示信息,并确定所述目标对象的显示位置信息为在显示预览图像的屏幕上目标对象的当前位置。同理,如果用户点击取消324选项,则跟踪拍摄设备可以取消为所述目标对象设置的在显示预览图像的屏幕上的当前位置进行显示的显示位置信息,且用户可以重新对目标对象的显示位置信息进行设置。
在一个实施例中,跟踪拍摄设备可以获取对所述确定的框选图像区域的拖动操作,并根据获取到的所述拖动操作拖动后的框选图像区域的位置信息,确定所述显示位置信息。具体可以图3b为例进行说明,假设跟踪拍摄设备获取到用户对菜单32中自定义323选项的点击操作,则用户可以在显示预览图像的屏幕上拖动所述目标对象的框选图像区域31至任意位置,并将拖动后框选图像区域31的位置作为该目标对象的显示位置信息。又例如,跟踪拍摄设备可以不用获取对菜单32中自定义323选项的点击操作,直接拖动所述目标对象的框选图像区域31至任意位置,并将拖动后框选图像区域31的位置作为该目标对象的显示位置信息。
S604:将获取到的用于描述所述目标对象的对象特征信息和显示位置信息发送给摄像装置。
本发明实施例中,跟踪拍摄设备可以将获取到的用于描述所述目标对象的对象特征信息和显示位置信息发送给摄像装置,以使所述摄像装置对所述对象特征信息进行初始化,并在初始化后根据获取到的显示位置信息,对所述目标对象进行跟踪拍摄得到目标图像。具体地举例说明如前所述,此处不再赘述。
本发明实施例,跟踪拍摄设备通过获取在显示预览图像的屏幕上的框选操作,确定出框选图像区域,并将获取到的用于描述所述目标对象的对象特征信息和显示位置信息发送给摄像装置,以使所述摄像装置对所述对象特征信息进行初始化,并在初始化后根据获取到的显示位置信息,实现对所述目标对象进行跟踪拍摄得到目标图像。
请参见图7,图7是本发明实施例提供的又一种跟踪拍摄方法的流程示意图,所述方法可以由跟踪拍摄设备执行,其中,所述跟踪拍摄设备的具体解释如前所述。具体地,本发明实施例的所述方法包括如下步骤。
S701:根据获取到的用于描述所述目标对象的对象特征信息,确定在目标图像上的所述目标对象的跟踪信息。
本发明实施例中,跟踪拍摄设备在控制摄像装置跟踪拍摄所述目标对象得到目标图像之后,可以根据获取到的用于描述所述目标对象的对象特征信息,确定在目标图像上的所述目标对象的跟踪信息。其中,所述目标对象的跟踪信息包括摄像装置跟踪拍摄得到的目标图像上所述目标对象所占图像区域的尺寸信息、坐标信息等对象特征信息,以及目标对象在目标图像中的显示位置信息。
S702:向云台发送控制指令,所述控制指令是根据所述显示位置信息和跟踪信息确定的用于调整云台的指令。
本发明实施例中,跟踪拍摄设备可以在获取到目标图像的跟踪信息后,向云台发送控制指令,其中所述控制指令是根据所述显示位置信息和跟踪信息确定的用于调整云台的指令。在一个实施例中,所述跟踪拍摄设备可以根据在预览图像上获取到的对象特征信息和在目标图像上确定的跟踪信息,确定所述目标图像的可信度,其中,所述可信度用于表示摄像装置跟踪拍摄在预览图像上确定的目标对象得到的目标图像的跟踪准确率。例如,假设跟踪拍摄设备在预览图像上获取到的对象特征信息为框选区域的长度、宽度和GPS坐标位置,如果跟踪拍摄设备确定出摄像装置跟踪拍摄得到的目标图像的长度、宽度和GPS坐标位置即跟踪信息,则跟踪拍摄设备可以计算出所述目标图像的可信度。
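可信度的一种常见算法是计算预览图像上设置的区域与目标图像上跟踪到的区域之间的交并比(IoU)。专利原文并未限定可信度的具体计算方式,下面仅给出示意性的Python片段:

```python
def tracking_confidence(preview_box, target_box):
    """用预览图像上设置的区域与目标图像上跟踪到的区域的交并比(IoU)
    作为目标图像的可信度, 取值 0~1。示意性实现, 计算方式为举例假设。"""
    ax, ay, aw, ah = preview_box
    bx, by, bw, bh = target_box
    # 两矩形交集在水平、垂直方向上的长度(不相交时为 0)
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0
```

两区域完全重合时可信度为1,完全不相交时为0;可信度小于预设阈值即触发后文所述的全图检测。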
在一个实施例中,所述跟踪拍摄设备在根据所述可信度确定所述控制指令时,可以检测获取到的所述可信度是否小于预设阈值,当检测到所述可信度小于预设阈值时,所述跟踪拍摄设备可以对所述目标图像进行全图检测,如果检测到全图中存在目标对象,则可以获取所述目标对象在目标图像中的检测位置信息,并根据所述检测位置信息和所述显示位置信息确定所述用于调整云台的控制指令。
例如,假设跟踪拍摄设备确定出目标图像的GPS坐标位置与在预览图像上获取到的框选区域的GPS坐标位置差距较大,以使计算得到的所述可信度小于预设阈值,则所述跟踪拍摄设备可以对所述目标图像进行全图检测,如果检测到全图中存在目标对象,则获取所述目标对象在目标图像中的检测位置信息,并根据所述检测位置信息和所述显示位置信息确定控制指令,其中,所述控制指令用于调整云台以使摄像装置拍摄得到的目标图像中的目标对象的对象特征信息与在所述预览图像中设置的目标对象的对象特征信息相同。
在一个实施例中,所述跟踪拍摄设备可以根据获取到的所述目标对象在目标图像中的检测位置信息和所述显示位置信息确定云台的转动角度,并生成携带所述转动角度的控制指令,其中所述控制指令用于控制所述云台按照所述转动角度转动。具体实施过程中,所述跟踪拍摄设备可以根据获取到的所述目标对象在目标图像中的检测位置信息和所述显示位置信息,计算出所述目标图像上的目标对象的对象特征信息与在预览图像上设置的目标对象的对象特征信息的差异,并根据所述差异确定出云台的转动角度,以及生成携带所述转动角度的控制指令,以控制所述云台按照所述转动角度转动,使得挂载在云台上的摄像装置跟随所述云台的转动调整拍摄角度,从而使所述摄像装置拍摄得到的目标图像上的目标对象的对象特征信息与在预览图像上设置的目标对象的对象特征信息相同,进一步提高了跟踪拍摄的效率。
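由检测位置信息与显示位置信息的偏差确定云台转动角度时,一种小角度近似是"像素偏差占画面的比例 × 视场角"。下面的示意性片段并非专利原文内容,其中的视场角数值为举例假设:

```python
def gimbal_angles(detected, desired, screen_w, screen_h,
                  fov_h_deg=60.0, fov_v_deg=40.0):
    """由目标对象在目标图像中的检测位置 detected 与期望显示位置 desired
    (均为像素坐标) 的偏差, 估算云台的偏航/俯仰转动角度(度)。
    fov_h_deg / fov_v_deg 为假设的摄像装置水平/垂直视场角。示意性实现。"""
    dx = detected[0] - desired[0]
    dy = detected[1] - desired[1]
    # 小角度近似: 偏差占画面比例乘以对应方向的视场角
    yaw = dx / screen_w * fov_h_deg
    pitch = dy / screen_h * fov_v_deg
    return yaw, pitch
```

将得到的角度写入控制指令发送给云台,摄像装置即可随云台转动逐步把目标对象移到期望的显示位置。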
本发明实施例中,跟踪拍摄设备通过获取目标图像上的跟踪信息,确定出所述目标图像的可信度,当所述可信度小于预设阈值时,对所述目标图像进行全图检测,如果检测结果中存在目标对象,则根据检测位置信息和显示位置信息确定云台转动角度,并生成携带所述转动角度的控制指令,以控制所述云台按照所述转动角度转动,以使挂载在云台上的摄像装置调整拍摄角度拍摄得到与在预览图像上设置的对象特征信息相同的目标图像。通过这种方式,可以进一步对跟踪拍摄的结果进行检测和调整,提高了跟踪拍摄的准确率。
请参见图8,图8是本发明实施例提供的一种跟踪拍摄设备的结构示意图。具体的,所述跟踪拍摄设备包括:一个或多个处理器801;一个或多个输入设备802,一个或多个输出设备803和存储器804。上述处理器801、输入设备802、输出设备803和存储器804通过总线805连接。存储器804用于存储指令,处理器801用于执行存储器804存储的指令。其中,当程序指令被执行时,处理器801用于执行如下步骤:
根据获取到的跟踪触发操作,确定用于描述目标对象的对象特征信息;
获取在显示预览图像的屏幕上设置的所述目标对象的显示位置信息;
根据所述对象特征信息和显示位置信息,控制摄像装置跟踪拍摄所述目标对象得到目标图像。
进一步地,所述跟踪触发操作包括点选操作;
所述处理器801调用存储器804中存储的程序指令,用于执行如下步骤:
获取在显示预览图像的屏幕上的点选操作;
根据所述点选操作确定点选图像区域,并根据所述点选图像区域得到用于描述所述点选图像区域内的目标对象的对象特征信息。
进一步地,所述点选操作包括在显示预览图像的屏幕上的单击操作、双击操作或长按操作。
进一步地,所述处理器801调用存储器804中存储的程序指令,用于执行如下步骤:
获取所述点选操作确定的目标点;
以所述目标点为中心在所述预览图像上进行对象估计检测,确定出目标对象;
根据所述目标对象确定点选图像区域;
获取用于描述该点选图像区域内的目标对象的对象特征信息。
进一步地,所述处理器801调用存储器804中存储的程序指令,用于执行如下步骤:
获取所述点选操作确定的目标点;
以所述目标点为中心,按照预设比例将所述预览图像进行图像放大处理;
获取对放大处理后的预览图像的框选操作确定的框选图像区域;
根据所述框选图像区域,获取用于描述该框选图像区域内的目标对象的对象特征信息。
进一步地,所述处理器801调用存储器804中存储的程序指令,用于执行如下步骤:
获取在显示预览图像的屏幕上的框选操作;
根据所述框选操作确定框选图像区域,并根据框选图像区域得到用于描述所述框选图像区域内的目标对象的对象特征信息。
进一步地,所述对象特征信息包括图像区域的长度、宽度、坐标信息中的任意一种或多种。
进一步地,所述处理器801调用存储器804中存储的程序指令,用于执行如下步骤:
获取对在显示预览图像的屏幕上设置的菜单中包括的位置指示信息的点击操作;
根据获取到的所述点击操作确定的位置指示信息,确定所述目标对象的显示位置信息。
进一步地,所述处理器801调用存储器804中存储的程序指令,用于执行如下步骤:
获取对所述确定的点选图像区域的拖动操作;
根据获取到的所述拖动操作拖动后的点选图像区域的位置信息,确定所述显示位置信息。
进一步地,所述处理器801调用存储器804中存储的程序指令,用于执行如下步骤:
获取对所述确定的框选图像区域的拖动操作;
根据获取到的所述拖动操作拖动后的框选图像区域的位置信息,确定所述显示位置信息。
进一步地,所述处理器801调用存储器804中存储的程序指令,用于执行如下步骤:
将获取到的用于描述所述目标对象的对象特征信息和显示位置信息发送给摄像装置,以使所述摄像装置对所述对象特征信息进行初始化,并在初始化后根据获取到的显示位置信息,对所述目标对象进行跟踪拍摄得到目标图像。
进一步地,所述处理器801调用存储器804中存储的程序指令,用于执行如下步骤:
根据获取到的用于描述所述目标对象的对象特征信息,确定在目标图像上的所述目标对象的跟踪信息;
向云台发送控制指令,所述控制指令是根据所述显示位置信息和跟踪信息确定的用于调整云台的指令。
进一步地,所述处理器801调用存储器804中存储的程序指令,用于执行如下步骤:
根据在预览图像上获取到的对象特征信息和在目标图像上确定的跟踪信息,确定所述目标图像的可信度;
其中,所述可信度用于表示摄像装置跟踪拍摄在预览图像上确定的目标对象得到的目标图像的跟踪准确率;
根据所述可信度确定所述控制指令。
进一步地,所述处理器801调用存储器804中存储的程序指令,用于执行如下步骤:
当所述可信度小于预设阈值时,对所述目标图像进行全图检测;
如果检测结果中存在目标对象,则获取所述目标对象在目标图像中的检测位置信息;
根据所述检测位置信息和所述显示位置信息确定控制指令。
进一步地,所述处理器801调用存储器804中存储的程序指令,用于执行如下步骤:
根据检测位置信息和显示位置信息确定云台转动角度;
生成携带所述转动角度的控制指令,所述控制指令用于控制所述云台按照所述转动角度转动。
应当理解,在本发明实施例中,所称处理器801可以是中央处理单元(Central Processing Unit,CPU),该处理器还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
输入设备802可以包括触控板、指纹采集传感器(用于采集用户的指纹信息和指纹的方向信息)、麦克风等,输出设备803可以包括显示器(LCD等)、扬声器等。
该存储器804可以包括只读存储器和随机存取存储器,并向处理器801提供指令和数据。存储器804的一部分还可以包括非易失性随机存取存储器。例如,存储器804还可以存储设备类型的信息。
本发明实施例的所述处理器801的具体实现可参考上述各个实施例中相关内容的描述,在此不赘述。
本发明实施例中,跟踪拍摄设备通过获取到的跟踪触发操作,确定用于描述目标对象的对象特征信息,从而确定出目标对象,并通过获取在显示预览图像的屏幕上设置的所述目标对象的显示位置信息,确定出目标对象在屏幕上的显示位置,以控制摄像装置跟踪拍摄所述目标对象得到目标图像,以及在得到目标图像之后,该跟踪拍摄设备可以确定在目标图像上的所述目标对象的跟踪信息,并向云台发送控制指令,以控制所述云台按照所述控制指令转动。通过这种方式,实现了对目标对象的跟踪拍摄,通过控制云台的转动调整摄像装置的拍摄角度,以使摄像装置跟踪拍摄得到更准确的目标图像,从而提高跟踪拍摄的效率。
在本发明的实施例中还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现本发明图1、图5、图6或图7所对应实施例中描述的跟踪拍摄方法,也可实现图8所对应实施例所述的跟踪拍摄设备,在此不再赘述。
所述计算机可读存储介质可以是前述任一实施例所述的设备的内部存储单元,例如设备的硬盘或内存。所述计算机可读存储介质也可以是所述设备的外部存储设备,例如所述设备上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。进一步地,所述计算机可读存储介质还可以既包括所述设备的内部存储单元也包括外部存储设备。所述计算机可读存储介质用于存储所述计算机程序以及所述设备所需的其他程序和数据。所述计算机可读存储介质还可以用于暂时地存储已经输出或者将要输出的数据。
以上所揭露的仅为本发明部分实施例而已,当然不能以此来限定本发明之权利范围,因此依本发明权利要求所作的等同变化,仍属本发明所涵盖的范围。

Claims (31)

  1. 一种跟踪拍摄方法,其特征在于,包括:
    根据获取到的跟踪触发操作,确定用于描述目标对象的对象特征信息;
    获取在显示预览图像的屏幕上设置的所述目标对象的显示位置信息;
    根据所述对象特征信息和显示位置信息,控制摄像装置跟踪拍摄所述目标对象得到目标图像。
  2. 根据权利要求1所述的方法,其特征在于,所述跟踪触发操作包括点选操作;所述根据获取到的跟踪触发操作,确定用于描述目标对象的对象特征信息,包括:
    获取在显示预览图像的屏幕上的点选操作;
    根据所述点选操作确定点选图像区域,并根据所述点选图像区域得到用于描述所述点选图像区域内的目标对象的对象特征信息。
  3. 根据权利要求2所述的方法,其特征在于,
    所述点选操作包括在显示预览图像的屏幕上的单击操作、双击操作或长按操作。
  4. 根据权利要求2所述的方法,其特征在于,所述根据所述点选图像区域得到用于描述所述点选图像区域内的目标对象的对象特征信息,包括:
    获取所述点选操作确定的目标点;
    以所述目标点为中心在所述预览图像上进行对象估计检测,确定出目标对象;
    根据所述目标对象确定点选图像区域;
    获取用于描述该点选图像区域内的目标对象的对象特征信息。
  5. 根据权利要求2所述的方法,其特征在于,所述根据所述点选图像区域得到用于描述所述点选图像区域内的目标对象的对象特征信息,包括:
    获取所述点选操作确定的目标点;
    以所述目标点为中心,按照预设比例将所述预览图像进行图像放大处理;
    获取对放大处理后的预览图像的框选操作确定的框选图像区域;
    根据所述框选图像区域,获取用于描述该框选图像区域内的目标对象的对象特征信息。
  6. 根据权利要求1所述的方法,其特征在于,所述跟踪触发操作包括框选操作;所述根据获取到的跟踪触发操作,确定用于描述目标对象的对象特征信息,包括:
    获取在显示预览图像的屏幕上的框选操作;
    根据所述框选操作确定框选图像区域,并根据框选图像区域得到用于描述所述框选图像区域内的目标对象的对象特征信息。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,
    所述对象特征信息包括图像区域的长度、宽度、坐标信息中的任意一种或多种。
  8. 根据权利要求1所述的方法,其特征在于,所述获取在显示预览图像的屏幕上设置的所述目标对象的显示位置信息,包括:
    获取对在显示预览图像的屏幕上设置的菜单中包括的位置指示信息的点击操作;
    根据获取到的所述点击操作确定的位置指示信息,确定所述目标对象的显示位置信息。
  9. 根据权利要求2所述的方法,其特征在于,所述获取在显示预览图像的屏幕上设置的所述目标对象的显示位置信息,包括:
    获取对所述确定的点选图像区域的拖动操作;
    根据获取到的所述拖动操作拖动后的点选图像区域的位置信息,确定所述显示位置信息。
  10. 根据权利要求4或6所述的方法,其特征在于,所述确定在显示预览图像的屏幕上设置的所述目标对象的显示位置信息,包括:
    获取对所述确定的框选图像区域的拖动操作;
    根据获取到的所述拖动操作拖动后的框选图像区域的位置信息,确定所述显示位置信息。
  11. 根据权利要求1所述的方法,其特征在于,所述根据所述对象特征信息和显示位置信息,控制摄像装置跟踪拍摄所述目标对象得到目标图像,包括:
    将获取到的用于描述所述目标对象的对象特征信息和显示位置信息发送给摄像装置,以使所述摄像装置对所述对象特征信息进行初始化,并在初始化后根据获取到的显示位置信息,对所述目标对象进行跟踪拍摄得到目标图像。
  12. 根据权利要求1所述的方法,其特征在于,所述控制摄像装置跟踪拍摄所述目标对象得到目标图像之后,包括:
    根据获取到的用于描述所述目标对象的对象特征信息,确定在目标图像上的所述目标对象的跟踪信息;
    向云台发送控制指令,所述控制指令是根据所述显示位置信息和跟踪信息确定的用于调整云台的指令。
  13. 根据权利要求12所述的方法,其特征在于,所述向云台发送控制指令,包括:
    根据在预览图像上获取到的对象特征信息和在目标图像上确定的跟踪信息,确定所述目标图像的可信度;
    其中,所述可信度用于表示摄像装置跟踪拍摄在预览图像上确定的目标对象得到的目标图像的跟踪准确率;
    根据所述可信度确定所述控制指令。
  14. 根据权利要求13所述的方法,其特征在于,所述根据所述可信度确定控制指令,包括:
    当所述可信度小于预设阈值时,对所述目标图像进行全图检测;
    如果检测结果中存在目标对象,则获取所述目标对象在目标图像中的检测位置信息;
    根据所述检测位置信息和所述显示位置信息确定控制指令。
  15. 根据权利要求14所述的方法,其特征在于,所述根据所述检测位置信息和所述显示位置信息确定控制指令,包括:
    根据检测位置信息和显示位置信息确定云台转动角度;
    生成携带所述转动角度的控制指令,所述控制指令用于控制所述云台按照所述转动角度转动。
  16. 一种跟踪拍摄设备,其特征在于,包括存储器和处理器;
    所述存储器,用于存储程序指令;
    所述处理器,用于根据获取到的跟踪触发操作,确定用于描述目标对象的对象特征信息;获取在显示预览图像的屏幕上设置的所述目标对象的显示位置信息;根据所述对象特征信息和显示位置信息,控制摄像装置跟踪拍摄所述目标对象得到目标图像。
  17. 根据权利要求16所述的设备,其特征在于,所述跟踪触发操作包括点选操作;
    所述处理器,用于获取在显示预览图像的屏幕上的点选操作;根据所述点选操作确定点选图像区域,并根据所述点选图像区域得到用于描述所述点选图像区域内的目标对象的对象特征信息。
  18. 根据权利要求17所述的设备,其特征在于,
    所述点选操作包括在显示预览图像的屏幕上的单击操作、双击操作或长按操作。
  19. 根据权利要求17所述的设备,其特征在于,
    所述处理器,用于获取所述点选操作确定的目标点;以所述目标点为中心在所述预览图像上进行对象估计检测,确定出目标对象;根据所述目标对象确定点选图像区域;获取用于描述该点选图像区域内的目标对象的对象特征信息。
  20. 根据权利要求17所述的设备,其特征在于,
    所述处理器,还用于获取所述点选操作确定的目标点;以所述目标点为中心,按照预设比例将所述预览图像进行图像放大处理;获取对放大处理后的预览图像的框选操作确定的框选图像区域;根据所述框选图像区域,获取用于描述该框选图像区域内的目标对象的对象特征信息。
  21. 根据权利要求16所述的设备,其特征在于,所述跟踪触发操作包括框选操作;
    所述处理器,用于获取在显示预览图像的屏幕上的框选操作;根据所述框选操作确定框选图像区域,并根据框选图像区域得到用于描述所述框选图像区域内的目标对象的对象特征信息。
  22. 根据权利要求16-21任一项所述的设备,其特征在于,
    所述对象特征信息包括图像区域的长度、宽度、坐标信息中的任意一种或多种。
  23. 根据权利要求16所述的设备,其特征在于,
    所述处理器,用于获取对在显示预览图像的屏幕上设置的菜单中包括的位置指示信息的点击操作;根据获取到的所述点击操作确定的位置指示信息,确定所述目标对象的显示位置信息。
  24. 根据权利要求17所述的设备,其特征在于,
    所述处理器,用于获取对所述确定的点选图像区域的拖动操作;根据获取到的所述拖动操作拖动后的点选图像区域的位置信息,确定所述显示位置信息。
  25. 根据权利要求19或21所述的设备,其特征在于,
    所述处理器,用于获取对所述确定的框选图像区域的拖动操作;根据获取到的所述拖动操作拖动后的框选图像区域的位置信息,确定所述显示位置信息。
  26. 根据权利要求16所述的设备,其特征在于,
    所述处理器,用于将获取到的用于描述所述目标对象的对象特征信息和显示位置信息发送给摄像装置,以使所述摄像装置对所述对象特征信息进行初始化,并在初始化后根据获取到的显示位置信息,对所述目标对象进行跟踪拍摄得到目标图像。
  27. 根据权利要求16所述的设备,其特征在于,
    所述处理器,还用于根据获取到的用于描述所述目标对象的对象特征信息,确定在目标图像上的所述目标对象的跟踪信息;向云台发送控制指令,所述控制指令是根据所述显示位置信息和跟踪信息确定的用于调整云台的指令。
  28. 根据权利要求27所述的设备,其特征在于,
    所述处理器,用于根据在预览图像上获取到的对象特征信息和在目标图像上确定的跟踪信息,确定所述目标图像的可信度;其中,所述可信度用于表示摄像装置跟踪拍摄在预览图像上确定的目标对象得到的目标图像的跟踪准确率;根据所述可信度确定所述控制指令。
  29. 根据权利要求28所述的设备,其特征在于,
    所述处理器,用于当所述可信度小于预设阈值时,对所述目标图像进行全图检测;如果检测结果中存在目标对象,则获取所述目标对象在目标图像中的检测位置信息;根据所述检测位置信息和所述显示位置信息确定控制指令。
  30. 根据权利要求29所述的设备,其特征在于,
    所述处理器,用于根据检测位置信息和显示位置信息确定云台转动角度;生成携带所述转动角度的控制指令,所述控制指令用于控制所述云台按照所述转动角度转动。
  31. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如权利要求1至15任一项所述方法。
PCT/CN2018/088862 2018-05-29 2018-05-29 一种跟踪拍摄方法、设备及存储介质 WO2019227309A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
PCT/CN2018/088862 WO2019227309A1 (zh) 2018-05-29 2018-05-29 一种跟踪拍摄方法、设备及存储介质
CN202110669027.XA CN113395450A (zh) 2018-05-29 2018-05-29 一种跟踪拍摄方法、设备及存储介质
CN201880010526.4A CN110291775B (zh) 2018-05-29 2018-05-29 一种跟踪拍摄方法、设备及存储介质
EP18921273.1A EP3806443A4 (en) 2018-05-29 2018-05-29 TRACK PHOTOGRAPHY METHOD AND DEVICE AND STORAGE MEDIUM
US17/105,931 US20210084228A1 (en) 2018-05-29 2020-11-27 Tracking shot method and device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/088862 WO2019227309A1 (zh) 2018-05-29 2018-05-29 一种跟踪拍摄方法、设备及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/105,931 Continuation US20210084228A1 (en) 2018-05-29 2020-11-27 Tracking shot method and device, and storage medium

Publications (1)

Publication Number Publication Date
WO2019227309A1 true WO2019227309A1 (zh) 2019-12-05

Family

ID=68001270

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/088862 WO2019227309A1 (zh) 2018-05-29 2018-05-29 一种跟踪拍摄方法、设备及存储介质

Country Status (4)

Country Link
US (1) US20210084228A1 (zh)
EP (1) EP3806443A4 (zh)
CN (2) CN110291775B (zh)
WO (1) WO2019227309A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114629869A (zh) * 2022-03-18 2022-06-14 维沃移动通信有限公司 信息生成方法、装置、电子设备及存储介质

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110913132B (zh) * 2019-11-25 2021-10-26 维沃移动通信有限公司 对象跟踪方法及电子设备
CN110933314B (zh) * 2019-12-09 2021-07-09 Oppo广东移动通信有限公司 追焦拍摄方法及相关产品
CN113793260B (zh) * 2021-07-30 2022-07-22 武汉高德红外股份有限公司 半自动修正目标跟踪框的方法、装置及电子设备
TWI795987B (zh) * 2021-11-08 2023-03-11 致伸科技股份有限公司 雲台裝置
CN117251081A (zh) * 2022-08-31 2023-12-19 腾讯科技(深圳)有限公司 拾取对象的检测方法、装置、计算机设备和存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104796611A (zh) * 2015-04-20 2015-07-22 零度智控(北京)智能科技有限公司 移动终端遥控无人机实现智能飞行拍摄的方法及系统
CN105100728A (zh) * 2015-08-18 2015-11-25 零度智控(北京)智能科技有限公司 无人机视频跟踪拍摄系统及方法
CN105578034A (zh) * 2015-12-10 2016-05-11 深圳市道通智能航空技术有限公司 一种对目标进行跟踪拍摄的控制方法、控制装置及系统
CN106331511A (zh) * 2016-11-16 2017-01-11 广东欧珀移动通信有限公司 智能终端跟踪拍摄的方法和装置

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831439B (zh) * 2012-08-15 2015-09-23 深圳先进技术研究院 手势跟踪方法及系统
EP3060966B1 (en) * 2014-07-30 2021-05-05 SZ DJI Technology Co., Ltd. Systems and methods for target tracking
US10095942B2 (en) * 2014-12-15 2018-10-09 Reflex Robotics, Inc Vision based real-time object tracking system for robotic gimbal control
CN106303195A (zh) * 2015-05-28 2017-01-04 中兴通讯股份有限公司 拍摄设备及跟踪拍摄方法和系统
CN107209854A (zh) * 2015-09-15 2017-09-26 深圳市大疆创新科技有限公司 用于支持顺畅的目标跟随的系统和方法
US20170242432A1 (en) * 2016-02-24 2017-08-24 Dronomy Ltd. Image processing for gesture-based control of an unmanned aerial vehicle
CN105759839B (zh) * 2016-03-01 2018-02-16 深圳市大疆创新科技有限公司 无人机视觉跟踪方法、装置以及无人机
CN105825524B (zh) * 2016-03-10 2018-07-24 浙江生辉照明有限公司 目标跟踪方法和装置
CN105931263B (zh) * 2016-03-31 2019-09-20 纳恩博(北京)科技有限公司 一种目标跟踪方法及电子设备
CN106161953A (zh) * 2016-08-12 2016-11-23 零度智控(北京)智能科技有限公司 一种跟踪拍摄方法和装置
CN106485736B (zh) * 2016-10-27 2022-04-12 深圳市道通智能航空技术股份有限公司 一种无人机全景视觉跟踪方法、无人机以及控制终端

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104796611A (zh) * 2015-04-20 2015-07-22 零度智控(北京)智能科技有限公司 移动终端遥控无人机实现智能飞行拍摄的方法及系统
CN105100728A (zh) * 2015-08-18 2015-11-25 零度智控(北京)智能科技有限公司 无人机视频跟踪拍摄系统及方法
CN105578034A (zh) * 2015-12-10 2016-05-11 深圳市道通智能航空技术有限公司 一种对目标进行跟踪拍摄的控制方法、控制装置及系统
CN106331511A (zh) * 2016-11-16 2017-01-11 广东欧珀移动通信有限公司 智能终端跟踪拍摄的方法和装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3806443A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114629869A (zh) * 2022-03-18 2022-06-14 维沃移动通信有限公司 信息生成方法、装置、电子设备及存储介质
CN114629869B (zh) * 2022-03-18 2024-04-16 维沃移动通信有限公司 信息生成方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
EP3806443A4 (en) 2022-01-05
EP3806443A1 (en) 2021-04-14
US20210084228A1 (en) 2021-03-18
CN113395450A (zh) 2021-09-14
CN110291775B (zh) 2021-07-06
CN110291775A (zh) 2019-09-27

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18921273

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018921273

Country of ref document: EP

Effective date: 20210111