WO2022143022A1 - Control method based on image acquisition device, and control method and device for gimbal - Google Patents

Control method based on image acquisition device, and control method and device for gimbal

Info

Publication number
WO2022143022A1
Authority
WO
WIPO (PCT)
Prior art keywords
image acquisition
pan
control
acquisition device
tilt
Prior art date
Application number
PCT/CN2021/135818
Other languages
English (en)
French (fr)
Inventor
王协平
王振动
楼致远
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to CN202180086440.1A (published as CN116783568A)
Publication of WO2022143022A1
Priority to US18/215,871 (published as US20230341079A1)

Classifications

    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F16 ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
    • F16M FRAMES, CASINGS OR BEDS OF ENGINES, MACHINES OR APPARATUS, NOT SPECIFIC TO ENGINES, MACHINES OR APPARATUS PROVIDED FOR ELSEWHERE; STANDS; SUPPORTS
    • F16M11/00 Stands or trestles as supports for apparatus or articles placed thereon; Stands for scientific apparatus such as gravitational force meters
    • F16M11/20Undercarriages with or without wheels
    • F16M11/2007Undercarriages with or without wheels comprising means allowing pivoting adjustment
    • F16M11/2035Undercarriages with or without wheels comprising means allowing pivoting adjustment in more than one direction
    • F16M11/2042Undercarriages with or without wheels comprising means allowing pivoting adjustment in more than one direction constituted of several dependent joints
    • F16M11/205Undercarriages with or without wheels comprising means allowing pivoting adjustment in more than one direction constituted of several dependent joints the axis of rotation intersecting in a single point, e.g. gimbals
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B17/00Details of cameras or camera bodies; Accessories therefor
    • G03B17/02Bodies
    • G03B17/12Bodies with means for supporting objectives, supplementary lenses, filters, masks, or turrets
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D3/00Control of position or direction
    • G05D3/12Control of position or direction using feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/672Focus control based on electronic image sensor signals based on the phase difference signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/673Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • H04N25/76Addressed sensors, e.g. MOS or CMOS sensors
    • H04N25/768Addressed sensors, e.g. MOS or CMOS sensors for time delay and integration [TDI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/617Upgrading or updating of programs or applications for camera control

Definitions

  • Embodiments of the present invention relate to the technical field of movable platforms, and in particular, to a control method based on an image acquisition device, and a control method and device for a gimbal.
  • a camera is set on the gimbal, and the camera can transmit image information to the gimbal through the image transmission module on the gimbal, so that the gimbal can perform corresponding control, such as tracking, according to the image information, or realize functions such as focusing.
  • Embodiments of the present invention provide a control method based on an image acquisition device, and a control method and device for a gimbal, which can directly obtain the shooting parameters determined by the image acquisition device and then control at least one of the gimbal and the auxiliary equipment based on the shooting parameters, so as to facilitate control of the gimbal and ensure a good user experience.
  • a first aspect of the present invention is to provide a control method based on an image acquisition device, comprising:
  • acquiring shooting parameters determined by an image acquisition device, wherein the image acquisition device is a camera with a manual lens or an automatic lens, and the shooting parameters can be used to adjust the captured images collected by the image acquisition device;
  • determining control parameters based on the shooting parameters; and
  • controlling at least one of the gimbal and auxiliary equipment accordingly based on the control parameters, wherein the gimbal is used to support the image acquisition device and/or the auxiliary equipment, and the auxiliary equipment is used to assist the image acquisition device in performing corresponding shooting.
  • a second aspect of the present invention is to provide a control device based on an image capture device, comprising:
  • a memory for storing a computer program, and a processor for running the computer program stored in the memory to implement:
  • acquiring shooting parameters determined by an image acquisition device, wherein the image acquisition device is a camera with a manual lens or an automatic lens, and the shooting parameters can be used to adjust the captured images collected by the image acquisition device;
  • determining control parameters based on the shooting parameters; and
  • controlling at least one of the gimbal and auxiliary equipment accordingly based on the control parameters, wherein the gimbal is used to support the image acquisition device and/or the auxiliary equipment, and the auxiliary equipment is used to assist the image acquisition device in performing corresponding shooting.
  • a third aspect of the present invention is to provide a computer-readable storage medium, wherein the storage medium is a computer-readable storage medium in which program instructions are stored, and the program instructions are used to implement the control method based on the image acquisition device described in the first aspect above.
  • a fourth aspect of the present invention is to provide a pan/tilt head, comprising:
  • the main body of the gimbal;
  • the control device based on the image acquisition device according to the second aspect above, provided on the main body of the gimbal.
  • the technical solution provided by this embodiment provides a communication link for acquiring shooting parameters that differs from the related art: by directly acquiring the shooting parameters determined by the image acquisition device, determining the control parameters based on the shooting parameters, and controlling at least one of the gimbal and the auxiliary equipment accordingly based on the control parameters, the shooting parameters determined by the image acquisition device can be obtained directly without additional equipment, which reduces the data processing cost; moreover, the computation related to the shooting parameters is placed in the image acquisition device, which reduces the demand on the computing power of the gimbal; in addition, after the shooting parameters are obtained, at least one of the gimbal and the auxiliary equipment can be controlled based on the shooting parameters.
  • This realizes effective control of the gimbal without manual operation by the user, ensures a good user experience, further improves the practicability of the method, and is beneficial to market promotion and application.
  • a fifth aspect of the present invention is to provide a control method for a gimbal, comprising:
  • acquiring the acquisition position of a target object in a captured image, wherein the acquisition position is determined by an image acquisition device, the image acquisition device is a camera with a manual lens or an automatic lens, and the image acquisition device is communicatively connected to the gimbal;
  • determining, based on the acquisition position, control parameters for following the target object; and
  • controlling the gimbal according to the control parameters, so as to realize the following operation on the target object.
  • a sixth aspect of the present invention is to provide a control device for a gimbal, the device comprising:
  • a memory for storing a computer program, and a processor for running the computer program stored in the memory to implement:
  • acquiring the acquisition position of a target object in a captured image, wherein the acquisition position is determined by an image acquisition device, the image acquisition device is a camera with a manual lens or an automatic lens, and the image acquisition device is communicatively connected to the gimbal;
  • determining, based on the acquisition position, control parameters for following the target object; and
  • controlling the gimbal according to the control parameters, so as to realize the following operation on the target object.
  • a seventh aspect of the present invention is to provide a control system for a PTZ, comprising:
  • the control device of the pan/tilt according to the sixth aspect above is provided on the pan/tilt, and is used for communicating with the image acquisition device, and for controlling the pan/tilt through the image acquisition device.
  • An eighth aspect of the present invention is to provide a movable platform, comprising:
  • the control device of the pan/tilt according to the sixth aspect above is disposed on the pan/tilt, and is used for communicating with the image acquisition device, and for controlling the pan/tilt through the image acquisition device.
  • a ninth aspect of the present invention is to provide a computer-readable storage medium, wherein program instructions are stored in the computer-readable storage medium, and the program instructions are used to implement the control method of the gimbal described in the fifth aspect above.
  • a tenth aspect of the present invention is to provide a control method of a pan-tilt system, wherein the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected to the pan-tilt, and the method includes:
  • controlling the image acquisition device to acquire an image, and to acquire the acquisition position of the target object in the image, where the acquisition position is determined by the image acquisition device;
  • the pan/tilt is controlled to move according to a control parameter, so as to implement a follow-up operation on the target object, wherein the control parameter is determined based on the collection position.
  • An eleventh aspect of the present invention is to provide a control device for a pan-tilt system, wherein the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected to the pan-tilt, and the device includes:
  • a memory for storing a computer program, and a processor for running the computer program stored in the memory to implement:
  • controlling the image acquisition device to acquire an image, and to acquire the acquisition position of the target object in the image, where the acquisition position is determined by the image acquisition device;
  • the pan/tilt is controlled to move according to a control parameter, so as to implement a follow-up operation on the target object, wherein the control parameter is determined based on the collection position.
  • a twelfth aspect of the present invention is to provide a control system for a PTZ, comprising:
  • the control device of the pan-tilt system according to the eleventh aspect above is disposed on the pan-tilt, and is used to communicate with the image acquisition device, and to control the image acquisition device and the pan-tilt respectively.
  • a thirteenth aspect of the present invention is to provide a movable platform, comprising:
  • the control device of the pan-tilt system according to the eleventh aspect above is disposed on the pan-tilt, and is used to communicate with the image acquisition device, and to control the image acquisition device and the pan-tilt respectively.
  • a fourteenth aspect of the present invention is to provide a computer-readable storage medium, wherein program instructions are stored in the computer-readable storage medium, and the program instructions are used to implement the control method of the pan-tilt system described in the tenth aspect above.
  • a fifteenth aspect of the present invention is to provide a control method for a pan/tilt, which is used for a pan/tilt, wherein the pan/tilt is communicatively connected with an image acquisition device, and the method includes:
  • acquiring a captured image collected by the image acquisition device, wherein the captured image includes a target object, and determining the position of the target object in the captured image;
  • sending the position of the target object to the image acquisition device, so that the image acquisition device determines the focus position corresponding to the target object based on the position of the target object, and performs a focusing operation on the target object based on the focus position.
  • a sixteenth aspect of the present invention is to provide a control device for a PTZ, which is used for a PTZ, the PTZ is communicatively connected with an image acquisition device, and the control device includes:
  • a memory for storing a computer program, and a processor for running the computer program stored in the memory to implement:
  • acquiring a captured image collected by the image acquisition device, wherein the captured image includes a target object, and determining the position of the target object in the captured image;
  • sending the position of the target object to the image acquisition device, so that the image acquisition device determines the focus position corresponding to the target object based on the position of the target object, and performs a focusing operation on the target object based on the focus position.
  • a seventeenth aspect of the present invention is to provide a control system for a PTZ, comprising:
  • the control device of the gimbal according to the sixteenth aspect above, provided on the gimbal, and used for communicating with the image acquisition device and for controlling the image acquisition device through the gimbal.
  • An eighteenth aspect of the present invention is to provide a movable platform, comprising:
  • the control device of the gimbal according to the sixteenth aspect above, provided on the gimbal, and used for communicating with the image acquisition device and for controlling the image acquisition device through the gimbal.
  • a nineteenth aspect of the present invention is to provide a computer-readable storage medium, wherein program instructions are stored in the computer-readable storage medium, and the program instructions are used to implement the control method of the gimbal described in the fifteenth aspect above.
  • a twentieth aspect of the present invention is to provide a control method of a pan-tilt system, the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected to the pan-tilt, and the method includes:
  • controlling the image acquisition device to acquire an image, the image including the target object
  • the pan/tilt is controlled to follow the target object based on the position of the target object, and the image acquisition device is controlled to focus on the target object according to the position of the target object.
  • a twenty-first aspect of the present invention is to provide a control device for a pan-tilt system, the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected to the pan-tilt, and the control device includes:
  • a memory for storing a computer program, and a processor for running the computer program stored in the memory to implement:
  • controlling the image acquisition device to acquire an image, the image including the target object
  • the pan/tilt is controlled to follow the target object based on the position of the target object, and the image acquisition device is controlled to focus on the target object according to the position of the target object.
  • a twenty-second aspect of the present invention is to provide a control system for a PTZ, comprising:
  • the control device of the pan-tilt system according to the twenty-first aspect above is disposed on the pan-tilt, and is used to communicate with the image acquisition device and to control the image acquisition device and the pan-tilt respectively.
  • a twenty-third aspect of the present invention is to provide a movable platform, comprising:
  • the control device of the pan-tilt system according to the twenty-first aspect above is disposed on the pan-tilt, and is used to communicate with the image acquisition device and to control the image acquisition device and the pan-tilt respectively.
  • a twenty-fourth aspect of the present invention is to provide a computer-readable storage medium, wherein program instructions are stored in the computer-readable storage medium, and the program instructions are used to implement the control method of the pan-tilt system described in the twentieth aspect above.
  • a twenty-fifth aspect of the present invention is to provide a control method of a pan-tilt system, the pan-tilt system comprising a pan-tilt and an image acquisition device communicatively connected to the pan-tilt, the method comprising:
  • acquiring the acquisition position of a second object in the captured image acquired by the image acquisition device, so that the gimbal is changed from following a first object to following the second object based on the acquisition position of the second object, and the image acquisition device is changed from performing a focusing operation on the first object to performing a focusing operation on the second object based on the position of the second object.
  • a twenty-sixth aspect of the present invention is to provide a control device for a pan-tilt system
  • the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected to the pan-tilt
  • the control device includes:
  • a memory for storing a computer program, and a processor for running the computer program stored in the memory to implement:
  • acquiring the acquisition position of a second object in the captured image acquired by the image acquisition device, so that the gimbal is changed from following a first object to following the second object based on the acquisition position of the second object, and the image acquisition device is changed from performing a focusing operation on the first object to performing a focusing operation on the second object based on the position of the second object.
  • a twenty-seventh aspect of the present invention is to provide a control system for a PTZ, comprising:
  • the control device of the pan-tilt system according to the twenty-sixth aspect above is disposed on the pan-tilt, and is used to communicate with the image acquisition device and to control the image acquisition device and the pan-tilt respectively.
  • a twenty-eighth aspect of the present invention is to provide a movable platform, comprising:
  • the control device of the pan-tilt system according to the twenty-sixth aspect above is disposed on the pan-tilt, and is used to communicate with the image acquisition device and to control the image acquisition device and the pan-tilt respectively.
  • a twenty-ninth aspect of the present invention is to provide a computer-readable storage medium, wherein program instructions are stored in the computer-readable storage medium, and the program instructions are used to implement the control method of the pan-tilt system described in the twenty-fifth aspect above.
  • the control method, device, movable platform, and storage medium of the gimbal provided by the embodiments of the present invention obtain the acquisition position of the target object in the captured image, then determine the control parameters for following the target object based on the acquisition position, and control the gimbal according to the control parameters, so as to realize the following operation on the target object.
  • the acquisition position is determined by an image acquisition device, that is, a camera with a manual lens or an automatic lens
  • the gimbal can directly acquire the acquisition position through the image acquisition device, so the communication link for transmitting the acquisition position differs from the related art; that is, the execution subject that calculates the acquisition position is changed, so that the existing recognition function of the image acquisition device for the acquisition position can be reused, thereby reducing the demand on the computing capability of the gimbal with respect to the acquisition position.
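  • Purely as an illustration (not the patented algorithm), the sketch below shows one conventional way an acquisition position reported by the camera could be turned into gimbal follow commands: a proportional controller on the target's pixel offset from the image center. The resolution, field of view, and gain are invented placeholder values.

```python
# Minimal sketch, assuming a pinhole camera and a rate-controlled gimbal:
# the target's pixel offset from image center is scaled into yaw/pitch
# rate commands. All constants are illustrative, not from the patent.

IMAGE_W, IMAGE_H = 1920, 1080      # assumed sensor resolution (pixels)
HFOV_DEG, VFOV_DEG = 60.0, 34.0    # assumed horizontal/vertical field of view
KP = 2.0                           # proportional gain (1/s), a tuning assumption

def follow_rates(acq_x: float, acq_y: float) -> tuple[float, float]:
    """Map the target's pixel position to yaw/pitch rates (deg/s)."""
    # Normalized offset from image center, in [-0.5, 0.5].
    ex = acq_x / IMAGE_W - 0.5
    ey = acq_y / IMAGE_H - 0.5
    # Approximate angular error, then drive it to zero proportionally.
    yaw_rate = KP * ex * HFOV_DEG
    pitch_rate = KP * ey * VFOV_DEG
    return yaw_rate, pitch_rate

if __name__ == "__main__":
    # Target detected right of center: the gimbal should yaw right.
    print(follow_rates(1400, 540))   # -> (27.5, 0.0)
```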
  • FIG. 1 is a schematic flowchart of a control method based on an image acquisition device provided by an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of acquiring focus information determined by the image acquisition device according to an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of determining a control parameter based on the shooting parameter according to an embodiment of the present invention
  • FIG. 4 is another schematic flowchart of determining a control parameter based on the shooting parameter according to an embodiment of the present invention.
  • FIG. 4a is a schematic diagram of determining a first rotation direction of a follow focus motor according to an embodiment of the present invention.
  • FIG. 4b is another schematic diagram of determining the first rotation direction of the follow focus motor according to an embodiment of the present invention.
  • FIG. 4c is another schematic diagram of determining the first rotation direction of the follow focus motor according to an embodiment of the present invention.
  • FIG. 5 is another schematic flowchart of determining a control parameter based on the shooting parameter provided by an embodiment of the present invention.
  • FIG. 6 is another schematic flowchart of determining a control parameter based on the shooting parameter according to an embodiment of the present invention.
  • FIG. 7 is another schematic flowchart of determining a control parameter based on the shooting parameter according to an embodiment of the present invention.
  • FIG. 8 is another schematic flowchart of determining a control parameter based on the shooting parameter according to an embodiment of the present invention.
  • FIG. 9 is a schematic flowchart of another control method based on an image acquisition device provided by an embodiment of the present invention.
  • FIG. 10 is a schematic flowchart of an automatic focusing method provided by an application embodiment of the present invention.
  • FIG. 11 is a schematic diagram of the principle of consistent display of control objects on a screen provided by an application embodiment of the present invention.
  • FIG. 12 is a schematic schematic diagram of another control method based on an image acquisition device provided by an application embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of a control device based on an image acquisition device provided by an embodiment of the present invention.
  • FIG. 14 is a schematic structural diagram of a pan/tilt according to an embodiment of the present invention.
  • FIG. 15 is a schematic structural diagram of a pan-tilt system provided in the prior art.
  • FIG. 16 is a schematic flowchart of a method for controlling a pan/tilt according to an embodiment of the present invention.
  • FIG. 17 is a schematic structural diagram of a communication connection between a pan/tilt and an image acquisition device according to an embodiment of the present invention.
  • FIG. 18 is a schematic diagram of acquiring the acquisition position of the target object in the acquired image according to an embodiment of the present invention.
  • FIG. 19 is a schematic flowchart of acquiring the acquisition position of the target object in the acquired image according to an embodiment of the present invention.
  • FIG. 20 is a schematic flowchart of obtaining a target focus position corresponding to the target object by using an image acquisition device according to an embodiment of the present invention.
  • FIG. 21 is a schematic diagram 1 of a historical object part corresponding to the historical focus position and a current object part corresponding to the current focus position according to an embodiment of the present invention.
  • FIG. 22 is a schematic diagram 2 of a historical object part corresponding to the historical focus position and a current object part corresponding to the current focus position provided by an embodiment of the present invention
  • FIG. 23 is a schematic flowchart of another control method of a pan-tilt head provided by an embodiment of the present invention.
  • FIG. 24 is a schematic diagram of a target object being changed according to an embodiment of the present invention.
  • FIG. 25 is a schematic flowchart of calculating a current position prediction value corresponding to the collection position according to an embodiment of the present invention.
  • FIG. 26 is a schematic flowchart, according to an embodiment of the present invention, of determining the current position prediction value corresponding to the acquisition position based on the acquisition position, the exposure time, the delay time, the previous reception time, and the previous position prediction value.
  • FIG. 27 is a schematic flowchart 1 of determining a control parameter for performing a follow-up operation on the target object based on the predicted value of the current position according to an embodiment of the present invention.
  • FIG. 28 is a second schematic flowchart of determining a control parameter for performing a follow-up operation on the target object based on the predicted value of the current position according to an embodiment of the present invention
  • FIG. 29 is a schematic flowchart of controlling the pan/tilt based on the motion state of the pan/tilt and the control parameters provided by an embodiment of the present invention.
  • FIG. 30 is a schematic flowchart 1 of controlling the pan/tilt according to the control parameter according to an embodiment of the present invention.
  • FIG. 31 is a second schematic flowchart of controlling the pan/tilt according to the control parameters according to an embodiment of the present invention.
  • FIG. 32 is a schematic flowchart of another method for controlling a pan/tilt according to an embodiment of the present invention.
  • FIG. 33 is a schematic flowchart of another control method of a pan/tilt according to an embodiment of the present invention.
  • FIG. 34 is a schematic flowchart of another method for controlling a pan/tilt according to an embodiment of the present invention.
  • FIG. 35 is a schematic flowchart of a control method of a pan-tilt system provided by an embodiment of the present invention.
  • FIG. 36 is a schematic flowchart of another method for controlling a pan/tilt according to an embodiment of the present invention.
  • FIG. 37 is a schematic flowchart of another control method of a pan-tilt system provided by an embodiment of the present invention.
  • FIG. 39 is a schematic diagram 1 of a principle of a method for controlling a pan-tilt head provided by an application embodiment of the present invention.
  • FIG. 40 is a second schematic diagram of the principle of a method for controlling a pan/tilt according to an application embodiment of the present invention.
  • FIG. 41 is a schematic structural diagram of a control device of a pan/tilt according to an embodiment of the present invention.
  • FIG. 42 is a schematic structural diagram of a control device of a pan-tilt system according to an embodiment of the present invention.
  • FIG. 43 is a schematic structural diagram of another control device of a pan/tilt according to an embodiment of the present invention.
  • FIG. 44 is a schematic structural diagram of a control device of another pan-tilt system provided by an embodiment of the present invention.
  • FIG. 45 is a schematic structural diagram of another control device of a pan-tilt system provided by an embodiment of the present invention.
  • FIG. 46 is a schematic structural diagram of a control system of a pan/tilt according to an embodiment of the present invention.
  • FIG. 47 is a schematic structural diagram of a control system of a pan/tilt according to an embodiment of the present invention.
  • FIG. 48 is a schematic structural diagram of another pan/tilt control system according to an embodiment of the present invention.
  • FIG. 49 is a schematic structural diagram of another pan/tilt control system provided by an embodiment of the present invention.
  • FIG. 50 is a schematic structural diagram of another pan/tilt control system according to an embodiment of the present invention.
  • FIG. 51 is a schematic structural diagram 1 of a movable platform according to an embodiment of the present invention.
  • FIG. 52 is a second schematic structural diagram of a movable platform according to an embodiment of the present invention.
  • FIG. 53 is a third schematic structural diagram of a movable platform according to an embodiment of the present invention.
  • FIG. 54 is a fourth schematic structural diagram of a movable platform according to an embodiment of the present invention.
  • FIG. 55 is a fifth schematic structural diagram of a movable platform according to an embodiment of the present invention.
  • the intelligence of the camera can be achieved through the built-in 3A algorithm of the camera.
  • the 3A can specifically include: Auto Focus (AF), Automatic Exposure (AE) and Auto White Balance (AWB).
  • For an automatic lens, the auto focus operation and the automatic exposure (aperture) operation are realized by the motor and microcontroller built into the lens, which increases the volume of the lens; moreover, such automatic lenses are relatively expensive, so their user group is relatively limited. The manual lens therefore came into being.
  • the manual lens is usually a purely mechanical device, so it has the advantage of being affordable; and, in terms of volume, the manual lens omits the built-in motor and microprocessor, so it occupies less space.
  • the use of an external motor can also achieve the functions of focusing and exposure (aperture).
  • the phase focus information of the camera (Phase Detection Auto Focus, PDAF)
  • the contrast focus information of the camera (Contrast Detection Auto Focus, CDAF)
  • the exposure function can also be achieved by adjusting the aperture.
  • the external control module can also obtain the camera metering data through USB, and the control module controls the color temperature and light intensity of the external fill light according to the metering information, and realizes the function of external exposure control.
  • Hitchcock Dolly zoom is a lens movement method that keeps the size of the subject unchanged while the background zooms in or out. Specifically, it can be achieved by zooming while moving the camera. For cameras that support intelligent focus tracking, any object in the frame can be selected, and the size of the selection box will change with the size of the object.
  • With a manual lens, the user can realize Dolly zoom by manually operating the lens.
  • However, such lens operation is relatively difficult for the user to control, and the Dolly zoom effect cannot be guaranteed.
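  • For reference, the subject-size constraint behind Dolly zoom follows from standard pinhole projection (textbook geometry, not text from the patent): a subject of height H at distance d projects to a size of approximately f·H/d at focal length f, so keeping f proportional to d keeps the subject size fixed while the background perspective changes. A minimal sketch:

```python
# Illustrative sketch of the Dolly zoom geometry (pinhole-camera relation,
# not code from the patent): keeping f / d constant preserves the subject's
# projected size while the camera dollies.

def dolly_zoom_focal_length(f0_mm: float, d0_m: float, d_m: float) -> float:
    """Focal length that preserves subject size when distance changes d0 -> d."""
    return f0_mm * (d_m / d0_m)

if __name__ == "__main__":
    # Start at 35 mm, 2 m from the subject; dolly back to 4 m.
    for d in (2.0, 3.0, 4.0):
        f = dolly_zoom_focal_length(35.0, 2.0, d)
        print(f"distance {d} m -> focal length {f:.1f} mm")
```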
  • In summary, the above technologies have the following drawbacks: (1) the traditional autofocus lens is expensive, and the group of lens users is relatively limited; (2) the manual lens cannot control the autofocus or the automatic exposure adjustment of the aperture; (3) the Dolly zoom lens movement is relatively difficult to control manually; (4) in some working conditions, camera anti-shake and gimbal anti-shake interfere with each other; (5) due to hardware limitations, there is an upper limit to manual exposure, and brightness can only be increased by raising the ISO, which degrades picture quality.
  • In view of this, the present embodiment provides a control method based on an image acquisition device, and a control method and device for a gimbal. The control method based on the image acquisition device may include: acquiring the shooting parameters determined by the image acquisition device, where the shooting parameters may include focus information, zoom information, and the like, can be used to adjust the captured image captured by the image capture device, and/or are determined based on user expectations and can be used to implement a preset shooting function of the image capture device; after the shooting parameters are acquired, they can be analyzed and processed to determine the control parameters, and then at least one of the gimbal and the auxiliary equipment can be controlled accordingly based on the control parameters, wherein the gimbal is used to support the image acquisition device.
  • the auxiliary equipment is used to assist the image acquisition device to perform corresponding shooting, for example, the above auxiliary equipment can be used to assist the image acquisition device to achieve follow-focus shooting operations, zoom shooting operations, or fill-light shooting operations, and so on.
  • The technical solution provided by this embodiment obtains the shooting parameters determined by the image acquisition device, determines control parameters based on the shooting parameters, and performs corresponding control on at least one of the gimbal and the auxiliary equipment based on the control parameters. This effectively realizes that the shooting parameters determined by the image acquisition device can be obtained directly without additional equipment (detection equipment, auxiliary equipment, etc.), which reduces the cost of data processing; at least one of the gimbal and the auxiliary equipment is then controlled based on these parameters, thereby realizing effective control of the gimbal without manual operation by the user, ensuring a good user experience, further improving the practicability of the method, and being conducive to market promotion and application.
  • FIG. 1 is a schematic flowchart of a control method based on an image acquisition device provided by an embodiment of the present invention; with reference to FIG. 1 , this embodiment provides a control method based on an image acquisition device.
  • the image acquisition device can be set on the movable platform.
  • the image acquisition device can be detachably connected to the movable platform; that is, the user can install the image acquisition device on the movable platform according to the needs of use, or detach it from the movable platform, so that the image acquisition device can be independent of the movable platform.
  • the above-mentioned movable platform may include: passive mobile devices such as handheld gimbals and airborne gimbals, or mobile devices with power systems such as unmanned aerial vehicles, unmanned vehicles, unmanned ships, and mobile robots;
  • the image acquisition device may include at least one of the following: a camera with a manual lens, a camera with an automatic lens, a mobile phone, a video camera, and other devices with a shooting function.
  • In the following description, a gimbal is used as an example of the movable platform, and the image acquisition device is set on the gimbal.
  • When the image acquisition device is a camera, its lens can be detachably connected to its body to adapt to different shooting scenes and achieve different shooting effects.
  • the execution body of the control method based on the image acquisition device may be the control device based on the image acquisition device.
  • the control device based on the image acquisition device may be implemented as software, or as a combination of software and hardware. It may be integrated into any one of the gimbal, the image acquisition device, and the auxiliary device, or may be independent of all of them.
  • the embodiment of the present application takes the gimbal as the execution subject as an example for description. Specifically, the method may include:
  • Step S101 Acquire shooting parameters determined by an image acquisition device, wherein the image acquisition device is a camera with a manual lens or an automatic lens, and the shooting parameters can be used to adjust the captured images collected by the image acquisition device.
  • Step S102 Determine control parameters based on the shooting parameters.
  • Step S103 Control at least one of the gimbal and the auxiliary equipment correspondingly based on the control parameters, wherein the gimbal is used to support the image acquisition device and/or the auxiliary equipment, and the auxiliary equipment is used to assist the image acquisition device in performing corresponding shooting.
  • Step S101 Acquire shooting parameters determined by an image acquisition device, wherein the image acquisition device is a camera with a manual lens or an automatic lens, and the shooting parameters can be used to adjust the captured images collected by the image acquisition device.
  • Specifically, the control device may acquire shooting parameters through the image acquisition device, and the shooting parameters may include at least one of the following: focus information, zoom information, fill light information, proportion information of a preset object in the image, anti-shake information, and so on. In some instances, the focus information may include the acquisition position. The above shooting parameters can be used to adjust the captured image captured by the image capture device, and/or are determined based on user expectations and can be used to realize a preset shooting function of the image acquisition device. For example, when the shooting parameter is the proportion information of the preset object in the image, the composition of the captured image captured by the image capturing device can be adjusted through the above shooting parameters.
  • In this way, the Hitchcock Dolly zoom lens movement operation can be realized; when the shooting parameters are focus information, follow focus parameters, fill light information, or anti-shake information, the focus shooting operation, follow focus shooting operation, fill light shooting operation, and anti-shake shooting operation of the image acquisition device can be realized, and so on.
  • In some instances, acquiring the shooting parameters determined by the image acquisition device may include: acquiring sensing information of a sensor other than the image sensor in the image acquisition device, and analyzing and processing the sensing information to obtain the shooting parameters; in this case, the acquisition of the shooting parameters does not depend on the captured images collected by the image acquisition device. In other instances, acquiring the shooting parameters determined by the image acquisition device may include: acquiring the captured images collected by the image capturing device, and analyzing and processing the captured images to obtain the shooting parameters; in this case, the shooting parameters are obtained by analyzing and processing the images collected by the image capturing device.
  • In some instances, acquiring the shooting parameters determined by the image capturing device may include: acquiring the captured image captured by the image capturing device, acquiring the execution operation input by the user on the captured image, and obtaining the shooting parameters based on the execution operation and the captured image; in this case, the shooting parameters are obtained based on the user's input on the image collected by the image acquisition device, that is, the shooting parameters are determined based on the user's expectations. In different application scenarios, different shooting parameters can be determined based on different application requirements.
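  • Purely as an illustration of the kinds of shooting parameters listed above, the following hypothetical container groups them into one structure; the field names and types are assumptions of this sketch, not the patent's data format.

```python
# Hypothetical grouping of the shooting parameters named in the text;
# every field name and type here is an illustrative assumption.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ShootingParameters:
    focus_info: Optional[float] = None        # e.g. a PDAF phase value or CDAF contrast score
    zoom_info: Optional[float] = None         # current focal length, in mm
    fill_light_info: Optional[float] = None   # requested fill-light intensity
    subject_ratio: Optional[float] = None     # preset object's proportion of the frame
    anti_shake_info: Optional[bool] = None    # whether in-camera stabilization is active
    acquisition_position: Optional[Tuple[int, int]] = None  # target pixel (x, y)
```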
  • In some instances, acquiring the shooting parameters determined by the image acquisition apparatus may include: establishing a wireless communication link with the image acquisition apparatus based on a wireless communication device, wherein the wireless communication device is provided in the gimbal or an auxiliary device; and acquiring the shooting parameters through the wireless communication link. That is, the control device may acquire the shooting parameters determined by the image capture device based on its communication with the gimbal or the auxiliary device, by means of the wireless communication connection between the gimbal or the auxiliary device and the image capture device.
  • the above wireless communication device may include at least one of the following: a Bluetooth module, a short-range wireless communication module, and a wireless local area network wifi module.
  • When the image acquisition device has a Bluetooth communication function, a wireless communication link can be established between the control device and the image acquisition device through the Bluetooth module. After the image acquisition device determines the shooting parameters, the control device can acquire them through the established wireless communication link, wherein the Bluetooth module can be set on the gimbal or the auxiliary device.
  • When the wireless communication device includes a wifi module and the image acquisition device has a wifi connection function, a wireless communication link can be established between the control device and the image acquisition device through the wifi module, and the control device can then acquire the shooting parameters through the established wireless communication link, wherein the wifi module can be set on the gimbal or auxiliary equipment.
  • Similarly, the control device can establish a wireless communication link with the image capture device through the short-range wireless communication module, and can then obtain the shooting parameters through the established wireless communication link, wherein the short-range wireless communication module can be set on the gimbal or auxiliary equipment.
  • the control device can perform wireless communication with the image acquisition device through the Bluetooth module and the short-range wireless communication module, wherein the short-range wireless communication module is used to obtain information for realizing a Bluetooth connection .
  • the control device can perform wireless communication with the image acquisition device through the wifi module and the short-range wireless communication module, wherein the short-range wireless communication module is used to obtain information for realizing a wifi connection .
  • In some instances, the control device can perform wireless communication with the image acquisition device through the Bluetooth module and the wifi module, wherein, under different bandwidth requirements, one of them can be selected for data transmission, or the Bluetooth module can be used to obtain the information for realizing the wifi connection.
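  • A minimal sketch of this dual-module arrangement follows, with every class and method a hypothetical stub (no real camera API is being described): a low-bandwidth Bluetooth link carries small payloads and, when more bandwidth is needed, exchanges the credentials used to bring up a wifi link.

```python
# Sketch under assumptions: stubbed transports standing in for Bluetooth
# and wifi. In a real system the Bluetooth exchange would run over BLE;
# here everything returns canned placeholder values.

class BluetoothLink:
    def exchange_wifi_credentials(self) -> dict:
        return {"ssid": "camera-ap", "psk": "example"}   # placeholder credentials

    def fetch_parameters(self) -> dict:
        return {"focus_info": 0.42}          # small payloads fit over Bluetooth

class WifiLink:
    def __init__(self, credentials: dict):
        self.credentials = credentials        # connect using exchanged credentials

    def fetch_parameters(self) -> dict:
        return {"focus_info": 0.42, "preview_frame": b"..."}  # larger payloads

def acquire_shooting_parameters(need_high_bandwidth: bool) -> dict:
    bt = BluetoothLink()
    if need_high_bandwidth:
        wifi = WifiLink(bt.exchange_wifi_credentials())  # Bluetooth bootstraps wifi
        return wifi.fetch_parameters()
    return bt.fetch_parameters()

if __name__ == "__main__":
    print(acquire_shooting_parameters(need_high_bandwidth=False))
```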
  • Step S102 Determine control parameters based on the shooting parameters.
  • the user can selectively set whether to set auxiliary equipment on the PTZ and what type of auxiliary equipment to set according to the usage requirements.
  • the auxiliary equipment can include at least one of the following: a follow focus motor, a zoom motor, and a supplementary light device.
  • the auxiliary device can be detachably connected to the gimbal; that is, the user can install the auxiliary device on the gimbal according to the needs of use, or remove the auxiliary device from the gimbal, so that the auxiliary equipment can be independent of the gimbal.
  • the PTZ is used to adjust the spatial positions of the image acquisition device and the auxiliary device.
  • the PTZ can be connected to the image acquisition device and the auxiliary device in communication respectively.
  • the image acquisition device may be mechanically coupled with the auxiliary equipment, and the mechanical coupling may include: direct connection, connection through a connector, etc.
  • the image acquisition device and the auxiliary equipment are detachably connected; that is, the image acquisition device and the auxiliary equipment can be connected according to the usage requirements, or split apart, so that they are independent of each other.
  • the shooting parameters can be analyzed and processed, so that the control parameters can be determined.
  • This embodiment does not limit the specific implementation of determining the control parameters.
  • Those skilled in the art can set according to specific application scenarios or application requirements.
  • a mapping relationship between shooting parameters and control parameters is preset, and control parameters corresponding to shooting parameters are determined based on the mapping relationship.
  • a pre-trained machine learning model is obtained, and the shooting parameters are input into the machine learning model, so that the control parameters output by the machine learning model can be obtained.
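  • A minimal sketch of the preset mapping-relationship option, assuming the mapping is stored as a calibration table and evaluated by linear interpolation; the table values are placeholders, not calibration data from the patent.

```python
# Sketch under assumptions: a preset (shooting parameter -> control
# parameter) table evaluated by linear interpolation between samples.

CALIBRATION = [                 # (shooting parameter, control parameter) pairs
    (0.0, 0.0),
    (0.5, 120.0),
    (1.0, 300.0),
]

def control_from_shooting(p: float) -> float:
    """Look up the control parameter for shooting parameter p."""
    pts = CALIBRATION
    p = min(max(p, pts[0][0]), pts[-1][0])      # clamp to the calibrated range
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= p <= x1:
            t = (p - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)           # linear interpolation
    return pts[-1][1]

if __name__ == "__main__":
    print(control_from_shooting(0.75))          # -> 210.0
```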
  • Step S103 Control at least one of the gimbal and the auxiliary equipment correspondingly based on the control parameters, wherein the gimbal is used to support the image acquisition device and/or the auxiliary equipment, and the auxiliary equipment is used to assist the image acquisition device in performing corresponding shooting.
  • Since the control parameters are used to control the gimbal and/or the auxiliary equipment, after the control parameters are acquired, corresponding control operations can be performed on at least one of the gimbal and the auxiliary equipment based on the control parameters.
  • For example, when the control parameters include anti-shake information, the gimbal can be controlled based on the anti-shake information to implement corresponding stabilization operations.
  • the auxiliary device can be controlled based on the control parameters.
  • For example, when the control parameters include parameters related to the zoom motor, the zoom motor can be adjusted accordingly based on those parameters, so that the zoom adjustment operation can be realized; when the control parameters include parameters related to the focus motor, a corresponding control operation can be performed on the focus motor based on those parameters, so that the focus adjustment operation can be realized; when the control parameters include supplementary light information, a corresponding control operation can be performed on the supplementary light device based on the supplementary light information, so that the supplementary light operation can be realized.
  • In still other instances, the gimbal and/or the auxiliary equipment may be controlled based on the control parameters; for example, when the control parameters include anti-shake information, the gimbal and/or an image acquisition device equipped with an anti-shake control unit can be controlled based on the anti-shake information.
  • It should be noted that the shooting parameters can be acquired by the controller in the image acquisition device, which is different from image information. That is to say, the shooting parameters are not obtained by the gimbal, the auxiliary equipment, or another control device through analysis and processing of image information or data collected by other devices; and in some usage scenarios, the gimbal, the auxiliary equipment, or another control device based on the image acquisition device can realize the corresponding control without acquiring image information or data collected by other devices.
  • In summary, by acquiring the shooting parameters determined by the image acquisition device, determining the control parameters based on the shooting parameters, and controlling at least one of the gimbal and the auxiliary device accordingly based on the control parameters, the shooting parameters determined by the image acquisition device can be obtained directly without additional equipment, which reduces the data processing cost; after the shooting parameters are obtained, at least one of the gimbal and the auxiliary equipment can be controlled based on them, thereby realizing effective control of the gimbal without manual operation by the user, ensuring a good user experience, further improving the practicability of the method, and being beneficial to market promotion and application.
  • the image capture device may be another electronic device other than a camera that can capture images, which is not specifically limited here.
  • the image acquisition device may be a camera including a manual lens.
  • the shooting parameters determined by the image acquisition device can be used to adjust the captured images collected by the image acquisition device.
  • different auxiliary devices may correspond to different shooting parameters, so that the shooting parameters and auxiliary devices determined by the image acquisition device can realize different adjustment operations on the captured images.
  • CMOS (Complementary Metal-Oxide-Semiconductor)
  • the phase focus information of the designated area can be obtained when the user clicks on the display screen of the image acquisition device; the phase-to-focus-ring-position curve can then be obtained by calibration, and the position of the focus ring can be directly calculated according to the calibration curve, so that auxiliary equipment such as the follow focus motor can be controlled to rotate to the target focus ring position, thereby achieving fast focusing of manual lenses.
  • the CDAF function can also be realized by obtaining the contrast information of the above-mentioned designated area through auxiliary equipment, such as a follow focus motor. Therefore, when the auxiliary device includes a focus motor for adjusting the focus ring of the image acquisition device, acquiring the shooting parameters determined by the image acquisition device in this embodiment may include: acquiring the focus information determined by the image acquisition device, wherein , the focus information may include at least one of the following: phase focus PDAF information and contrast focus CDAF information.
  • the image acquisition device may correspond to different aperture application modes.
  • the aperture application mode may include a large aperture mode and a small aperture mode.
  • the large aperture mode refers to a working mode in which the aperture value is greater than the set aperture threshold.
  • the small aperture mode can refer to the working mode in which the aperture value is less than or equal to the preset aperture threshold; in different aperture application modes, the accuracies of PDAF and CDAF are inconsistent.
  • Different focus information is used to realize auto focus operation. Therefore, in order to ensure the accuracy and reliability of the auto-focusing operation, referring to FIG. 2 , this embodiment provides an implementation manner of acquiring focus information.
  • acquiring the focus information determined by the image acquisition device may include:
  • Step S201 Determine the aperture application mode corresponding to the image capturing device.
  • In some instances, different aperture application modes can correspond to different ranges of aperture values. Therefore, during the operation of the image acquisition device, the aperture application mode corresponding to the image acquisition device can be determined by acquiring the aperture value corresponding to the image acquisition device.
  • different aperture application modes may correspond to different display identifiers (including: indicator lights, icons displayed on the display interface, etc.), and the aperture application mode corresponding to the image capture device may be determined through the identifier information.
  • Step S202 Acquire at least one of phase focus information and contrast focus information determined by the image acquisition device according to the aperture application mode.
  • After the aperture application mode is acquired, it can be analyzed and processed, so that at least one of the phase focus information and the contrast focus information determined by the image acquisition device can be acquired; that is, in different application scenarios, different focus information can be obtained according to usage requirements or design requirements.
  • In some instances, when the phase focus information can be accurately acquired, the phase focus information determined by the image acquisition device is obtained according to the aperture application mode; in other instances, when the contrast focus information can be accurately acquired, the contrast focus information determined by the image capture device is obtained according to the aperture application mode; in some instances, both the phase focus information and the contrast focus information determined by the image capture device may be obtained according to the aperture application mode.
  • acquiring at least one of phase focus information and contrast focus information determined by the image capture device according to the aperture application mode may include: when the aperture application mode is the first mode, acquiring the contrast focus determined by the image capture device information, or obtain phase focus information and contrast focus information determined by the image acquisition device, wherein the aperture value corresponding to the first mode is less than or equal to the set aperture threshold; and/or, when the aperture application mode is the second mode, The phase focus information determined by the image acquisition device is acquired, wherein the aperture value corresponding to the second mode is greater than the set aperture threshold value.
• the aperture application mode of the image capture device may include a first mode and a second mode. Since the aperture value corresponding to the first mode is less than or equal to the set aperture threshold, the first mode may be called a small aperture mode; since the aperture value corresponding to the second mode is greater than the set aperture threshold, the second mode may be called a large aperture mode.
• when the image acquisition device is in the first mode, the accuracy of the phase focus information (PDAF) is limited. Therefore, in order to achieve an accurate focusing operation, the phase focus information and contrast focus information determined by the image acquisition device can both be obtained, and a fast autofocus operation can be realized based on the two together.
• when the image acquisition device is in the second mode, the phase focus information (PDAF) is accurate enough to focus directly on the target; that is, an accurate focusing operation can be realized through the phase focus information alone, so only the phase focus information determined by the image acquisition device needs to be obtained.
• by determining the aperture application mode corresponding to the image capture device and then acquiring at least one of the phase focus information and the contrast focus information according to that mode, different focusing information can be determined for different modes, which ensures the flexibility and reliability of obtaining the focusing information and further improves the accuracy of the focusing operation based on it.
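• as a minimal sketch of the mode-dependent selection described above (the function name, arguments, and return convention are illustrative assumptions, not the patent's implementation):

```python
def select_focus_info(aperture_value: float, aperture_threshold: float,
                      pdaf_info=None, cdaf_info=None):
    """Pick the focus information according to the aperture application mode.

    First mode (aperture_value <= threshold): PDAF accuracy is limited,
    so use CDAF alone or PDAF combined with CDAF.
    Second mode (aperture_value > threshold): PDAF alone is accurate enough.
    """
    if aperture_value <= aperture_threshold:      # first (small aperture) mode
        if pdaf_info is not None and cdaf_info is not None:
            return pdaf_info, cdaf_info           # combine both for fast, accurate AF
        return (cdaf_info,)                       # fall back to contrast focus only
    return (pdaf_info,)                           # second (large aperture) mode
```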
• controlling at least one of the pan/tilt head and the auxiliary equipment based on the control parameters may include: controlling the follow focus motor based on the control parameters to realize the follow focus operation of the image acquisition device.
• when the auxiliary equipment includes a follow focus motor and the focus information determined by the image acquisition device can be obtained, in order to control the follow focus motor, the focus information can be analyzed and processed to obtain the control parameters corresponding to it, and the follow focus motor can then be controlled based on those control parameters.
• the focus information may include phase focus information and/or contrast focus information; when obtaining the control parameters corresponding to the focus information, different focus information may call for different data processing strategies.
• when the focus information includes phase focus information, this embodiment provides an implementation for acquiring the control parameters. Specifically, referring to FIG. 3, determining the control parameters may include:
  • Step S301 Obtain a first mapping relationship between the phase focus information and the position of the focus ring.
  • Step S302 Determine the position of the focus ring corresponding to the phase focus information based on the first mapping relationship.
  • Step S303 Determine the control parameters of the follow focus motor based on the position of the follow focus ring.
• the image acquisition device may be configured with a corresponding follow focus ring, on which a plurality of follow focus ring positions are marked; the follow focus ring positions are used to realize the control operation of the follow focus motor. Specifically, the follow focus motor can be controlled, via the follow focus ring position, to drive the gear connected with the follow focus ring, so as to realize the focus operation.
• the first mapping relationship between the phase focus information and the follow focus ring position can be obtained either directly from the manufacturer of the image acquisition device or through a pre-calibration operation; the first mapping relationship identifies the one-to-one relationship between phase focus information and follow focus ring positions.
• the phase focus information can be analyzed and processed based on the first mapping relationship to determine the follow focus ring position corresponding to it, and the control parameters of the follow focus motor can then be determined based on that position, where the control parameters may include at least one of the following: rotation stroke, rotation speed, and rotation direction.
  • a mapping relationship between the position of the focus ring and the control parameters is preconfigured, and the control parameters of the focus motor can be determined based on the mapping relationship and the position of the focus ring.
• a machine learning model for analyzing and processing the follow focus ring position is pre-trained; the follow focus ring position is input to the machine learning model, and the control parameters of the follow focus motor output by the model are obtained.
• when the focus information includes phase focus information, the first mapping relationship between the phase focus information and the follow focus ring position is obtained, the follow focus ring position corresponding to the phase focus information is determined based on that relationship, and the control parameters of the follow focus motor are determined from that position. This effectively ensures the accuracy and reliability of the determined control parameters and further improves the precision of the follow focus operation controlled by them.
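• the first mapping relationship lends itself to a lookup-table realization. The sketch below assumes the calibration is available as two sorted sample lists and uses linear interpolation between calibration points; the interpolation choice, the proportional speed law, and all names are assumptions for illustration:

```python
import bisect

def ring_position_from_pdaf(pdaf: float, pdaf_samples: list[float],
                            ring_positions: list[float]) -> float:
    """Map a PDAF measurement to a follow-focus-ring position using the
    calibrated first mapping relationship (pdaf_samples must be sorted)."""
    i = bisect.bisect_left(pdaf_samples, pdaf)
    i = max(1, min(i, len(pdaf_samples) - 1))
    x0, x1 = pdaf_samples[i - 1], pdaf_samples[i]
    y0, y1 = ring_positions[i - 1], ring_positions[i]
    t = (pdaf - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

def motor_control_parameters(current_pos: float, target_pos: float,
                             max_speed: float):
    """Derive rotation stroke, direction, and speed for the follow focus
    motor from the target ring position (proportional speed is assumed)."""
    stroke = abs(target_pos - current_pos)
    direction = 1 if target_pos >= current_pos else -1
    speed = min(max_speed, stroke)
    return stroke, direction, speed
```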
• when the focus information includes contrast focus information, this embodiment provides another implementation for acquiring the control parameters.
  • the determining of the control parameters based on the shooting parameters in this embodiment may include:
• Step S401 Determine the current motor position of the follow focus motor corresponding to the contrast focus information.
• the contrast focus information can be analyzed and processed. Specifically, a mapping relationship between contrast focus information and motor position is preset; through this mapping relationship, the current motor position of the follow focus motor corresponding to the contrast focus information can be determined, which effectively ensures an accurate and reliable determination of the current motor position.
  • Step S402 Acquire a set position range corresponding to the focus motor.
  • the set position range corresponding to the follow focus motor can be obtained, and the set position range is used to identify the interval range in which the motor can perform normal motion.
  • Different follow focus motors can correspond to the same or different set position ranges.
  • the set position range may be a pre-configured parameter stored in the preset area.
  • the set position range of the follow focus motor can be obtained by accessing the preset area.
  • the set position range may be obtained through a calibration operation.
• in some instances, obtaining the set position range corresponding to the follow focus motor may include: obtaining the nearest end and the farthest end of the lens of the camera, where the nearest end and the farthest end are used to limit the focal range of the lens and may correspond to the mechanical limits of the lens.
  • the farthest end may correspond to the mechanical upper limit of the lens.
• after acquiring the nearest end and the farthest end of the lens, the first motor position corresponding to the nearest end and the second motor position corresponding to the farthest end are obtained, and the set position range is determined based on them: the set position range is the range within which the motor can run, with the first motor position as the lower limit and the second motor position as the upper limit. That is, the follow focus motor can move freely within the range formed by the first motor position and the second motor position, so the set position range corresponding to the follow focus motor can be obtained accurately and effectively.
• the execution order of step S402 and step S401 in this embodiment is not limited to the order described above; those skilled in the art can set it according to specific application scenarios or requirements. For example, step S402 may be performed before step S401, or step S402 may be performed simultaneously with step S401.
  • Step S403 Determine the first rotation speed and the first rotation direction of the focus motor based on the current motor position and the set position range.
  • the current motor position and the set position range can be analyzed and processed to determine the first rotation speed and the first rotation direction of the focus motor.
• a machine learning model for analyzing and processing the current motor position and the set position range is pre-trained; the current motor position and the set position range are input into the machine learning model, so that the first rotation speed and first rotation direction of the follow focus motor can be obtained.
• in some instances, determining the first rotation speed and first rotation direction of the follow focus motor may include: acquiring the first distance between the current motor position and the upper limit value of the set position range and the second distance between the current motor position and the lower limit value; and determining the first rotation speed and first rotation direction based on the first distance and the second distance. Specifically, after the upper and lower limit values of the set position range are determined, the first distance and the second distance are obtained and then analyzed and processed to determine the first rotation speed and first rotation direction of the follow focus motor.
• in some instances, determining the first rotation speed and first rotation direction based on the first distance and the second distance may include: comparing the first distance with the second distance. When the first distance is greater than the second distance, the stroke the follow focus motor can rotate toward the lower limit value is less than the stroke it can rotate toward the upper limit value; since the focus operation may require a longer rotation stroke, the first rotation direction can be determined as the direction approaching the upper limit value, and the first rotation speed can then be determined based on the first distance.
• correspondingly, when the second distance is greater than the first distance, the first rotation direction may be determined as the direction approaching the lower limit value, and the first rotation speed of the follow focus motor may be determined based on the second distance.
• when the first distance equals the second distance, the first rotation direction may be determined as either the direction toward the lower limit value or the direction toward the upper limit value, and the first rotation speed is determined based on the first distance or the second distance accordingly.
• for example, assume the current motor position is L1 and L1 is within the set position range; the first distance S1 between L1 and the upper limit value and the second distance S2 between L1 and the lower limit value can then be obtained. Since S1>S2, the first rotation direction D of the follow focus motor is determined as the direction approaching the upper limit value, and the first rotation speed V is determined based on S1, which effectively ensures the accuracy and reliability of determining the first rotation speed and first rotation direction.
• similarly, assume the current motor position is L2 and L2 is within the set position range; the first distance S1 between L2 and the upper limit value and the second distance S2 between L2 and the lower limit value can be obtained. Since S2>S1, the first rotation direction D is determined as the direction approaching the lower limit value, and the first rotation speed V is determined based on S2, which likewise ensures the accuracy and reliability of the determination.
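• the two worked examples above reduce to a simple rule: rotate toward the limit that leaves the longer stroke, at a speed derived from that stroke. A sketch, assuming a proportional speed law with an illustrative gain k:

```python
def first_speed_and_direction(current: float, lower: float, upper: float,
                              k: float = 1.0):
    """Determine the first rotation speed and first rotation direction from
    the current motor position and the set position range [lower, upper]."""
    s1 = upper - current  # first distance: to the upper limit value
    s2 = current - lower  # second distance: to the lower limit value
    if s1 > s2:
        return k * s1, +1  # direction approaching the upper limit value
    return k * s2, -1      # direction approaching the lower limit value (either works when equal)
```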
• when the follow focus motor is controlled to move based on the first rotation speed and first rotation direction, the current motor position will change, so a new current motor position can be obtained.
• based on the new current motor position, the first rotation speed and first rotation direction can be re-determined in the above-mentioned manner, so that while the follow focus motor performs the focus operation, the control parameters obtained before the in-focus state is reached (for example, the first rotation speed and first rotation direction) can be updated in real time, ensuring the stability and reliability of the focusing operation.
  • the focus motor can be controlled to move based on the first rotation speed and the first rotation direction, so as to realize the automatic focusing operation.
• in some instances, the method in this embodiment may further include: updating the set position range based on the current motor position to obtain an updated position range; and, based on the updated position range and the post-rotation motor position, adjusting the first rotation speed to obtain a second rotation speed, where the second rotation speed is smaller than the first rotation speed.
• the first rotation speed may be changed in real time based on the position reached as the follow focus motor moves.
  • the set position range can be updated based on the current motor position, so that the updated position range can be obtained.
• one boundary value of the updated position range may be the current motor position, and the other boundary value may be the upper limit value or the lower limit value; that is, the updated position range is smaller than the set position range and is a part of it.
• during the movement of the follow focus motor, its current motor position changes continuously; that is, the follow focus motor moves from the current motor position to the post-rotation position.
  • the updated position range and the rotated position of the motor can be analyzed and processed to adjust the first rotation speed and obtain the second rotation speed.
• the specific implementation of adjusting the first rotation speed based on the updated position range and the post-rotation motor position to obtain the second rotation speed is the same as the above-mentioned "determining the first rotation speed and first rotation direction of the follow focus motor based on the current motor position and the set position range", and is not repeated here.
• because the updated position range keeps shrinking, the determined second rotation speed is smaller than the first rotation speed; that is, as the follow focus motor is controlled to perform the focus operation, its rotation speed keeps decreasing.
  • a speed threshold for identifying the follow focus motor in a normal operating state is preconfigured.
• when the running speed of the follow focus motor is less than or equal to the speed threshold, the follow focus motor can operate normally.
• when the running speed of the follow focus motor is greater than the speed threshold, its normal operation cannot be guaranteed. Therefore, to ensure a stable autofocus operation, after the first rotation speed is obtained it can be compared with the speed threshold; when the first rotation speed is greater than the speed threshold, the determined first rotation speed is too large and the normal operation of the follow focus motor cannot be guaranteed.
• in this case, the first rotation speed can be updated to the speed threshold, so that the follow focus motor is controlled to move based on the speed threshold.
• when the first rotation speed is less than or equal to the speed threshold, the determined first rotation speed is within the normal speed range, and the follow focus motor can therefore be controlled to move based on the first rotation speed.
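• the shrinking-range update and the speed-threshold clamp described above can be sketched as follows (the boundary-replacement rule is one assumed realization of "the current motor position becomes one boundary of the updated range"):

```python
def clamped_speed(first_speed: float, speed_threshold: float) -> float:
    """Cap the commanded speed at the motor's normal-operation threshold."""
    return min(first_speed, speed_threshold)

def updated_range(current: float, lower: float, upper: float,
                  direction: int) -> tuple[float, float]:
    """Shrink the set position range: the current motor position replaces the
    boundary behind the direction of travel, so the speed re-computed from
    the updated range (the second rotation speed) keeps decreasing."""
    return (current, upper) if direction > 0 else (lower, current)
```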
• in summary, the set position range corresponding to the follow focus motor is obtained, and the first rotation speed and first rotation direction are then determined based on the current motor position and the set position range, which effectively ensures the accuracy and reliability of their determination. By controlling the follow focus motor based on the first rotation speed and first rotation direction, the focus operation can be realized quickly and stably, further improving the stability and reliability of the method.
• take as an example the case where the follow focus motor is used to perform the focusing operation: the follow focus motor is engaged with the focus ring of the camera, and the follow focus motor and the camera are both carried on the gimbal.
  • the gimbal also carries a depth detection device.
• the depth detection device can provide depth information of a certain area or object within the sensing range of the camera and can feed that information back, directly or indirectly, to the follow focus motor, which can drive the focus ring to rotate based on the depth information, so as to realize the focusing operation on that area or object.
  • the auxiliary device may include not only a follow focus motor, but also a zoom motor.
• when the auxiliary device includes a zoom motor for adjusting the zoom ring of the image capture device, acquiring the shooting parameters determined by the image acquisition apparatus in this embodiment may include: acquiring zoom information corresponding to the captured image.
  • the captured image can be acquired by the image capture device, and then the captured image can be analyzed and processed to obtain zoom information corresponding to the captured image.
• in other instances, the zoom information can be obtained through a zoom controller.
  • the zoom motor may be controlled based on the zoom information to realize the zoom operation of the image capture device.
• controlling at least one of the pan/tilt head and the auxiliary device based on the control parameters may include: controlling the zoom motor based on the control parameters to realize the zoom operation of the image capture device.
• when the auxiliary device includes a zoom motor and the zoom information determined by the image acquisition device can be obtained, in order to control the zoom motor, the zoom information can be analyzed and processed to obtain the control parameters corresponding to it, and the zoom motor can then be controlled based on those parameters.
• the zoom information is used to adjust the zoom capability of the camera lens; different zoom information can correspond to different focal length information, so the same object appears at different sizes in the image.
  • the determining of the control parameters based on the shooting parameters in this embodiment may include:
  • Step S501 Determine the initial proportion of the setting object in the display screen based on the collected image, wherein the display screen is determined based on the collected image.
• the captured image can be obtained through the image capture device, and the obtained captured image can include a setting object, which may be a person, an animal, a plant, a building, a vehicle, and so on.
• the acquired image can be analyzed and processed to determine the initial proportion of the setting object in the display screen; the initial proportion identifies the display size feature of the setting object relative to the display screen.
• the display screen is determined based on the captured image, and its size may be the same as or different from that of the captured image; specifically, the display screen may be at least a part of the captured image. For example, if the captured image contains a lot of content and the user is only interested in part of it, the user can select the area of the captured image containing the part of interest as the display screen; if the captured image contains relatively little content, its entire area can be directly determined as the display screen.
  • this embodiment does not limit the specific implementation of determining the initial proportion of the setting object in the display screen, and those skilled in the art can set according to specific application scenarios or application requirements, for example: after acquiring the captured image , the setting object and the display screen may be determined based on the user's input operation, and then the initial proportion of the setting object in the display screen may be determined based on the determined setting object and the display screen.
• determining the initial proportion of the setting object in the display screen based on the captured image may include: acquiring the object size feature of the setting object and the screen size feature of the display screen, and determining the initial proportion based on the object size feature and the screen size feature.
• the setting object and the display screen can be determined in response to an operation input by the user, or they can be identified through a target recognition algorithm. For example, when the captured image includes object 1, object 2, and object 3, since the proportion of object 3 in the entire captured image is greater than that of the other two objects, object 3 can be determined as the setting object by the target recognition algorithm.
  • the object size feature of the setting object and the screen size feature of the display screen may be acquired based on the captured image, wherein the object size feature may include at least one of the following: object length size, object width size , the object area size; correspondingly, the screen size feature of the displayed screen may include at least one of the following: screen length size, screen width size, and screen area size.
  • acquiring the object size feature of the setting object may include: identifying contour information of the setting object through an object recognition algorithm or a machine learning model, and determining the object size feature of the setting object based on the contour information.
• in other instances, obtaining the object size feature of the setting object may include: obtaining an identification frame corresponding to the setting object, which may be a preset rectangular frame, square frame, circular frame, or the like, and determining the object size feature of the setting object through the identification frame.
• the identification frame or contour information corresponding to the setting object can be displayed.
  • the object size feature and the screen size feature may be analyzed and processed to determine the initial ratio. Since the object size feature and the screen size feature can have multiple representations, there are also multiple manners for determining the initial proportion.
• when the object size feature includes the object length size, determining the initial proportion based on the object size feature and the screen size feature in this embodiment may include: determining the ratio between the object length size and the screen length size as the initial proportion. For example, when the object length size is TL and the screen length size is FL, the initial proportion P can be TL/(FL-TL) or (FL-TL)/TL; in this case, P can be a value greater than 1 or less than 1.
• when the object size feature includes the object width size, determining the initial proportion in this embodiment may include: determining the ratio between the object width size and the screen width size as the initial proportion. For example, when the object width size is TW and the screen width size is FW, the initial proportion P can be TW/(FW-TW) or (FW-TW)/TW; in this case, P can be a value greater than 1 or less than 1.
• when the object size feature includes the object area size, determining the initial proportion in this embodiment may include: determining the ratio between the object area size and the screen area size as the initial proportion. For example, when the object area size is TS and the screen area size is FS, the initial proportion P can be TS/(FS-TS) or (FS-TS)/TS; in this case, P can be a value greater than 1 or less than 1.
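• the three ratio forms above can be computed directly; the sketch below assumes the setting object is represented by a bounding box of length TL and width TW inside a display screen of length FL and width FW:

```python
def initial_proportion(tl: float, tw: float, fl: float, fw: float,
                       mode: str = "area") -> float:
    """Initial proportion P of the setting object in the display screen,
    using the P = T/(F - T) form quoted above (the reciprocal form is
    equally valid per the text)."""
    if mode == "length":
        return tl / (fl - tl)
    if mode == "width":
        return tw / (fw - tw)
    ts, fs = tl * tw, fl * fw  # object area size and screen area size
    return ts / (fs - ts)
```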
• the implementation of determining the initial proportion of the setting object in the display screen is not limited to the statements above; those skilled in the art can also obtain the initial proportion in other ways, as long as its accuracy and reliability can be ensured, and the details are not repeated here.
  • Step S502 Determine control parameters based on the initial ratio and zoom information.
  • the initial ratio and zoom information can be analyzed and processed to determine control parameters.
• in some instances, a machine learning model for determining the control parameters is pre-trained; the initial proportion and the zoom information can be input into the machine learning model, so that the control parameters output by the model can be obtained.
• in other instances, determining the control parameters may include: acquiring a second mapping relationship between the movement stroke of the zoom motor and the zoom stroke, and a third mapping relationship between the movement direction of the zoom motor and the zoom direction; and determining the motion stroke parameter and motion direction corresponding to the zoom motor based on the initial proportion, the zoom information, the second mapping relationship, and the third mapping relationship.
• specifically, the second mapping relationship between the movement stroke and the zoom stroke and the third mapping relationship between the movement direction and the zoom direction are pre-calibrated: the second mapping relationship identifies the one-to-one correspondence between the motion stroke of the zoom motor and the zoom value, and the third mapping relationship identifies the one-to-one correspondence between the movement direction of the zoom motor and the zooming direction. The calibrated second and third mapping relationships can be stored in a preset area or preset device and obtained by accessing it, so that the control parameters corresponding to the zoom motor can be obtained accurately based on them.
• after the second and third mapping relationships are obtained, the initial proportion, the zoom information, the second mapping relationship, and the third mapping relationship can be analyzed and processed to determine the motion stroke parameter and motion direction corresponding to the zoom motor; these are the control parameters used to control the zoom motor, so the control parameters are acquired accurately and reliably.
• in summary, when the auxiliary device includes a zoom motor and the zoom information determined by the image acquisition device can be acquired, the initial proportion of the setting object in the display screen is determined from the captured image, and the control parameters are then determined based on the initial proportion and the zoom information. These control parameters may include the motion stroke parameter and motion direction corresponding to the zoom motor, which effectively ensures the accuracy and reliability of their determination and facilitates controlling the zoom motor based on them, so as to realize the zoom operation of the image acquisition device and the Hitchcock dolly zoom operation.
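• a sketch of turning the initial proportion and zoom information into zoom-motor control parameters. It assumes the target's linear size in the picture scales roughly linearly with focal length, and models the calibrated second mapping relationship as a constant stroke-per-zoom factor; real calibrations may be nonlinear tables:

```python
def zoom_motor_command(initial_ratio: float, current_ratio: float,
                       current_zoom: float, stroke_per_zoom: float):
    """Return the motion stroke parameter and motion direction that restore
    the setting object to its initial proportion in the display screen."""
    target_zoom = current_zoom * (initial_ratio / current_ratio)
    delta_zoom = target_zoom - current_zoom
    stroke = abs(delta_zoom) * stroke_per_zoom  # second mapping relationship
    direction = 1 if delta_zoom > 0 else -1     # third mapping relationship
    return stroke, direction
```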
• take as an example the case where the zoom motor is used to perform the zoom operation: the zoom motor is engaged with the zoom ring of the camera, and the zoom motor and the camera are both carried on the gimbal.
• the gimbal also carries a depth detection device, such as a time-of-flight (TOF) sensor.
  • the depth detection device can provide depth information of a certain area or an object within the sensing range of the camera, and can directly or indirectly feed back the information to the zoom motor, which can drive the zoom ring to rotate based on the depth information , so as to realize the zoom operation on a certain area or a certain object within the sensing range of the camera.
• the auxiliary equipment may include not only a follow focus motor and a zoom motor, but also a supplementary light device (e.g., a fill light).
• when the auxiliary equipment includes a supplementary light device for the image capture device, acquiring the shooting parameters determined by the image capturing apparatus in this embodiment may include: acquiring the light detection information determined by the image capturing apparatus.
  • light detection information may be acquired by the image acquisition device, and the light detection information may include at least one of the following: exposure intensity and light color.
• in some instances, the image acquisition device may perform image analysis, such as white balance detection, on the acquired image to obtain the corresponding light detection information.
  • the captured image is obtained by an image capturing device, and the captured image is analyzed and processed to obtain light detection information.
• the light-filling device can be controlled based on the light detection information to realize the light-filling operation of the image acquisition device. Specifically, controlling at least one of the pan/tilt head and the auxiliary device based on the control parameters may include: controlling the light-filling device based on the control parameters to realize the light-filling operation of the image acquisition device. When the light detection information includes the exposure intensity, determining the control parameters may include:
• Step S601 Determine the target exposure intensity corresponding to the captured image of the image capturing device.
  • Step S602 Based on the exposure intensity and the target exposure intensity, determine the compensation exposure parameters of the light-filling device.
  • the target exposure intensity corresponding to the acquired image of the image acquisition device can be determined.
  • the target exposure intensity can be pre-specified.
• the user can configure the target exposure intensity according to the environmental information where the image acquisition device is located, and different environmental information can be configured with different target exposure intensities; alternatively, the target exposure intensity can be automatically determined from the display quality of the acquired image.
  • the exposure intensity and the target exposure intensity can be analyzed and processed to determine the compensated exposure parameter of the light-filling device, and the compensated exposure parameter is the control parameter corresponding to the light-filling device.
  • the light-filling device may be controlled based on the compensating exposure parameters, so as to realize the exposure balance operation of the image capturing device.
• in some instances, the compensation exposure parameter may be determined by the difference between the target exposure intensity and the exposure intensity, or by the ratio between the exposure intensity and the target exposure intensity.
• in this embodiment, the target exposure intensity corresponding to the captured image is determined, the compensation exposure parameter of the light-filling device is determined based on the exposure intensity and the target exposure intensity, and the supplementary light device is then controlled based on the determined compensation exposure parameter. The exposure balance operation of the image acquisition device can thus be realized, ensuring the quality and effect of the images it acquires and effectively improving the practicability of the method.
  • this embodiment provides another implementation method for obtaining control parameters.
  • the determination of the control parameters based on the shooting parameters in this embodiment may include:
  • Step S701 Determine the target scene color corresponding to the captured image of the image capturing device.
  • Step S702 Based on the light color and the target scene color, determine the compensation color parameters of the supplementary light device.
  • the color of the target scene corresponding to the acquired image of the image acquisition device can be determined.
• the target scene color can be pre-specified; specifically, the user can configure the target scene color according to the environmental information where the image acquisition device is located, and different environmental information can be configured with different target scene colors. Alternatively, the target scene color can be automatically determined from the display quality of the acquired images.
  • the light color and the color of the target scene can be analyzed and processed to determine the compensation color parameter of the light-filling device, and the compensation color parameter is the control parameter corresponding to the light-filling device.
  • the light-filling device can be controlled based on the compensated color parameters, so as to realize the white balance operation of the shooting scene corresponding to the image acquisition device.
  • the compensation color parameter may be a parameter determined by the difference between the target scene color and the light color, or the compensation color parameter may be a parameter determined by the ratio between the light color and the target scene color.
• in this embodiment, when the light detection information includes the light color, the target scene color corresponding to the captured image of the image acquisition device is determined, the compensation color parameter of the light-filling device is determined based on the light color and the target scene color, and the supplementary light equipment can then be controlled based on the determined compensation color parameter. The white balance operation of the shooting scene corresponding to the image acquisition device can thus be realized, ensuring the quality and effect of the images collected and effectively improving the practicality of the method.
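• both compensation parameters admit the difference or ratio forms quoted above. A sketch, where the per-channel treatment of the light color is an assumption (the patent does not fix a color representation):

```python
def compensation_exposure(exposure: float, target_exposure: float,
                          use_ratio: bool = False) -> float:
    """Step S602: compensation exposure parameter of the light-filling
    device, as a difference or as a ratio."""
    if use_ratio:
        return exposure / target_exposure
    return target_exposure - exposure

def compensation_color(light_rgb, target_rgb):
    """Step S702: per-channel difference between the target scene color and
    the detected light color."""
    return tuple(t - l for t, l in zip(target_rgb, light_rgb))
```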
  • the image capture device may include an optical element and an image sensor, and the optical element is used to reduce the diffraction of light passing through the image capture device.
  • the optical element may be structured to eliminate light diffraction.
  • the optical element can be a diffractive optical element (Diffractive Optical Elements, DOE for short) or a light blocking plate, etc.
• the optical element can be a light blocking plate with a light hole in the center.
• the image sensor is set in the imaging plane of the image acquisition device and is used to receive the image light whose diffraction has been reduced by the optical element, so as to generate the acquired image.
• the image sensor can be a charge-coupled device (CCD) image sensor or a complementary metal oxide semiconductor (CMOS) image sensor.
  • the image capturing device may include not only optical elements and image sensors, but also an anti-shake control unit.
• the anti-shake control unit may include a lens optical anti-shake OIS unit and/or a body anti-shake IBIS unit; the anti-shake control unit can perform jitter compensation on at least one of the optical element and the image sensor in the image capturing device based on the shooting parameters, so as to adjust the captured image.
• acquiring the shooting parameters determined by the image acquisition device in this embodiment may include: obtaining the anti-shake information determined by the anti-shake control unit set in the image acquisition device, where the anti-shake information is determined according to the position change information of the image acquisition device.
  • the user can adjust the position of the image acquisition device by adjusting the posture of the PTZ based on usage requirements or design requirements.
  • the anti-shake control unit in the image capture device can obtain different anti-shake information according to the position change information of the image capture device.
• the image acquisition device may be provided with a first position sensor for detecting the position change information, and/or a second position sensor may be provided on the pan/tilt for detecting the position change information; the position change information is also used to adjust the spatial position of the image acquisition device.
  • the sensing information of the position sensor can not only be used to determine the anti-shake information, but also can be used for the stabilization operation of the image acquisition device.
• controlling at least one of the pan-tilt and the auxiliary device based on the control parameters may include: controlling the pan-tilt based on the control parameters to realize the stabilization operation of the gimbal.
• when the auxiliary device includes an anti-shake control unit and the anti-shake information determined by the anti-shake control unit set in the image acquisition device can be obtained, in order to control the pan/tilt, the anti-shake information can be analyzed and processed to obtain the control parameters corresponding to it, and the PTZ can then be controlled based on those parameters, which ensures the display quality and effect of the images collected by the image acquisition device and further improves the practicability of the method.
• the method in this embodiment may further include: detecting the communication connection status between the image acquisition device and the PTZ. When the image acquisition device is connected to the PTZ, that is, when the image acquisition device is applied to the PTZ, the IBIS unit set on the PTZ can be activated to realize the anti-shake operation through the IBIS unit; when the image acquisition device is disconnected from the PTZ, that is, separated from the PTZ, the lens optical anti-shake OIS unit in the image acquisition device can be activated to realize the anti-shake operation through the OIS unit.
  • determining the control parameters based on the shooting parameters in this embodiment may include:
  • Step S801 Determine the anti-shake sensitivity information of the image acquisition device based on the anti-shake information, where the anti-shake sensitivity information is used to represent the speed of the anti-shake response to the excitation signal of the pan/tilt head.
  • the anti-shake information can be analyzed and processed to determine the anti-shake sensitivity information of the image acquisition device.
• in some instances, determining the anti-shake sensitivity information of the image acquisition device based on the anti-shake information may include: obtaining the mapping relationship between anti-shake information and anti-shake sensitivity information, and determining the anti-shake sensitivity information of the image acquisition device using the mapping relationship and the anti-shake information.
• in other instances, determining the anti-shake sensitivity information based on the anti-shake information may include: acquiring a machine learning model for analyzing and processing the anti-shake information and inputting the anti-shake information into it, so that the anti-shake sensitivity information of the image acquisition device can be obtained; the anti-shake sensitivity information is used to characterize the speed of the anti-shake response to the excitation signal of the gimbal.
• when the anti-shake sensitivity information is large, the speed of the anti-shake response to the excitation signal (i.e., the input signal) of the gimbal is relatively fast; when the anti-shake sensitivity information is small, the speed of the anti-shake response is relatively slow.
• for example, when the excitation signal of the gimbal is a 15 Hz excitation signal: if the anti-shake sensitivity information is large, the anti-shake response to that excitation signal is fast; if the anti-shake sensitivity information is small, the response is slow.
  • Step S802 Obtain the current excitation signal of the PTZ.
  • the body anti-shake IBIS unit can perform anti-shake operation based on an excitation signal.
• the excitation signal obtained by the body anti-shake IBIS unit can correspond to different frequency information, and the anti-shake operation and effect of the IBIS unit are related to the frequency of the excitation signal. Therefore, in order to accurately obtain the control parameters corresponding to the anti-shake information, the current excitation signal of the gimbal can be obtained; the current excitation signal is used to control the gimbal and the body anti-shake IBIS unit on it to operate normally.
• the current excitation signal may be a control signal configured by the user based on application scenarios or requirements, or a default control signal; in some instances, the current excitation signal may be a 10 Hz, 5 Hz, 15 Hz, or 20 Hz excitation signal, and so on.
  • Step S803 Determine control parameters corresponding to the pan/tilt based on the current excitation signal and the anti-shake sensitivity information, so as to control the pan/tilt and/or the image acquisition device.
• the current excitation signal and the anti-shake sensitivity information can be analyzed and processed to determine the control parameters corresponding to the gimbal. In some instances, determining the control parameters corresponding to the gimbal based on the current excitation signal and the anti-shake sensitivity information may include: determining the current sensitivity corresponding to the current excitation signal based on the anti-shake sensitivity information; and, when the current sensitivity is greater than or equal to the sensitivity threshold, generating the control parameters corresponding to the gimbal.
  • the anti-shake sensitivity information and the current excitation information can be analyzed and processed to determine the current sensitivity corresponding to the current excitation signal. After the current sensitivity is obtained, the current sensitivity can be analyzed and compared with the sensitivity threshold.
• when the current sensitivity is less than the sensitivity threshold, the image acquisition device responds relatively slowly to the excitation signal of the gimbal; that is, a better anti-shake suppression effect can be obtained by performing the suppression operation with the lens optical anti-shake OIS unit included in the image acquisition device, so there is no need to generate control parameters corresponding to the pan/tilt, and the anti-shake suppression operation is performed through the OIS unit of the image acquisition device.
• when the current sensitivity is greater than or equal to the sensitivity threshold, the image acquisition device responds relatively quickly to the excitation signal of the gimbal; if the anti-shake suppression operation were performed by the OIS unit alone, a better suppression effect could not be obtained. Therefore, the body anti-shake IBIS unit on the gimbal can be used to perform the anti-shake suppression operation.
  • generating the control parameters corresponding to the pan/tilt may include: performing a suppression operation on the current excitation signal to obtain a suppressed signal, where the suppressed signal is the control parameter corresponding to the pan/tilt.
• to ensure that the gimbal can effectively suppress the shake, a suppression operation can be performed on the current excitation signal of the gimbal to obtain the suppressed signal; the suppressed signal is the control parameter corresponding to the gimbal, which ensures the anti-shake quality and effect of the gimbal.
  • determining the control parameter corresponding to the gimbal based on the current excitation signal and the anti-shake sensitivity information may include: determining the current sensitivity corresponding to the current excitation signal based on the anti-shake sensitivity information; when the current sensitivity is greater than or When equal to the sensitivity threshold, control parameters corresponding to the image capture device are generated to control the anti-shake control unit based on the control parameters corresponding to the image capture device.
  • the anti-shake sensitivity information and the current excitation information can be analyzed and processed to determine the current sensitivity corresponding to the current excitation signal.
  • the current sensitivity can be analyzed and compared with the sensitivity threshold.
• when the current sensitivity is greater than or equal to the sensitivity threshold, the image acquisition device responds relatively quickly to the excitation signal of the gimbal; that is, if the anti-shake suppression operation were performed by the lens optical anti-shake OIS unit included in the image capture device, a better suppression effect could not be obtained, so there is no need to perform the suppression operation through the OIS unit.
  • control parameters corresponding to the image capture device may be generated to control the anti-shake control unit based on the control parameters corresponding to the image capture device.
  • generating the control parameters corresponding to the image capture device may include: generating stop operation parameters corresponding to the anti-shake control unit, where the stop operation parameters are control parameters corresponding to the image capture device.
• specifically, a stop operation parameter corresponding to the anti-shake control unit can be generated; the stop operation parameter is the control parameter corresponding to the image capture device, which ensures the anti-shake quality and effect of the gimbal.
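• the arbitration in Steps S801–S803 can be sketched as a single threshold test; the attenuation factor applied to the suppressed excitation signal is an assumed realization, and in practice only one of the two branches (gimbal-side suppression or camera-side stop parameter) would be generated per embodiment:

```python
def stabilization_control(current_sensitivity: float,
                          sensitivity_threshold: float,
                          excitation: float,
                          suppression_gain: float = 0.2):
    """Decide between gimbal-side (IBIS) and camera-side (OIS) suppression.

    Returns (excitation signal to apply on the gimbal, whether to generate
    the stop operation parameter for the camera's OIS unit).
    """
    if current_sensitivity >= sensitivity_threshold:
        # Fast response to the gimbal excitation: suppress the current
        # excitation signal (gimbal control parameter) and/or stop the
        # in-camera OIS unit.
        return excitation * suppression_gain, True
    # Slow response: leave the excitation as-is; the OIS unit handles shake.
    return excitation, False
```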
• in this embodiment, the anti-shake sensitivity information of the image acquisition device is determined based on the anti-shake information, the current excitation signal of the gimbal is obtained, and the control parameters are then determined based on the current excitation signal and the anti-shake sensitivity information. This effectively ensures the accuracy and reliability of determining the control parameters, avoids degrading the quality and effect of the captured image due to shake, and further improves the practicability of the method.
• the gimbal can be affected by external factors (for example, the user's posture when operating the gimbal, such as pace speed, step frequency, and hand jitter, or the vibration of the main body carrying the gimbal) and therefore shake.
  • the jitter may include at least one of the following: jitter in the translation direction (direction parallel to the preset plane) and jitter in the vertical direction (direction perpendicular to the preset plane). Since the pan/tilt carries the image acquisition device, the jitter of the pan/tilt can affect the quality of the image captured by the image acquisition device, which in turn will affect the mirror movement effect of the pan/tilt.
  • this embodiment provides a process for configuring the signal filtering unit.
  • the method in this embodiment may also include:
  • Step S901 Determine the jitter information of the pan/tilt, which carries an image acquisition device.
  • the jitter information of the gimbal can be determined.
  • an inertial measurement unit can be set on the gimbal.
• in some instances, determining the jitter information of the gimbal may include: identifying the user's cadence through the inertial measurement unit to obtain cadence information; since the mapping relationship between cadence information and the jitter information of the gimbal is pre-configured, the jitter information of the gimbal can be determined using this mapping relationship.
  • an environment sensor and an inertial measurement unit may be set on the gimbal, the environment information where the gimbal is located and the user's cadence information are obtained through the environment sensor and the inertial measurement unit, and the environmental information and the cadence information are analyzed and processed, to obtain the jitter information of the gimbal.
  • Step S902 Send the jitter information to the image acquisition device, so that the image acquisition device configures parameters of the signal filtering unit based on the jitter information.
• the signal filtering unit of the image acquisition device is used to process the user's cadence information so as to reduce or eliminate the PTZ jitter it causes. To better guarantee the operation of the PTZ, after the jitter information is obtained it can be sent to the image acquisition device; after receiving the jitter information, the image acquisition device can configure the parameters of the signal filtering unit based on it, so that the configured signal filtering unit can better process the user's cadence information and reduce or eliminate the PTZ jitter caused by it.
• the image acquisition apparatus configuring the parameters of the signal filtering unit based on the jitter information may include: the image acquisition apparatus analyzes and processes the jitter information, determines the user's cadence information, and determines the parameter information of the signal filtering unit based on the user's cadence information; the signal filtering unit can then be configured based on the parameter information, and the configured signal filtering unit can reduce or eliminate the pan/tilt jitter caused by the user's cadence information.
• in this embodiment, by determining the jitter information of the pan/tilt and sending it to the image acquisition device, the image acquisition device configures the parameters of the signal filtering unit based on the jitter information, effectively realizing the configuration of the signal filtering unit based on the determined jitter information; this helps ensure that the signal filtering unit can better process the user's cadence information, so as to reduce or eliminate the PTZ jitter it causes.
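• one plausible realization of this parameter configuration, not specified by the patent: model the signal filtering unit as a second-order notch filter centred on the cadence frequency recovered from the jitter information (standard audio-EQ biquad formulas):

```python
import math

def notch_coefficients(cadence_hz: float, sample_rate_hz: float,
                       q: float = 5.0):
    """Biquad notch coefficients (b0, b1, b2, a1, a2), normalized by a0,
    attenuating the band around the user's cadence frequency."""
    w0 = 2.0 * math.pi * cadence_hz / sample_rate_hz
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha
    b0 = 1.0 / a0
    b1 = -2.0 * math.cos(w0) / a0
    b2 = 1.0 / a0
    a1 = b1                      # for a notch, a1 equals the normalized b1
    a2 = (1.0 - alpha) / a0
    return b0, b1, b2, a1, a2
```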
• the shooting parameters may be determined based on captured images and user expectations, and can be used to implement a preset shooting function of the image capturing device, where the preset shooting function may include a focus shooting operation, an intelligent follow operation, and so on.
  • the image acquisition device may be communicatively connected to the pan/tilt, and acquiring the shooting parameters determined by the image acquisition device in this embodiment may include: acquiring the acquisition position of the target object in the acquired image. Specifically, the captured image is acquired by the image capturing device, and then the captured image is analyzed and processed to obtain the capturing position of the target object in the captured image. After the acquisition position is acquired, the acquisition position may be analyzed and processed to acquire the control parameter.
• determining the control parameters based on the shooting parameters may include: determining, based on the acquisition position, the control parameters for performing the following operation on the target object.
• in some instances, a mapping relationship between the acquisition position and the control parameters for implementing the following operation on the target object is preconfigured, and the control parameters for the following operation can be determined based on this mapping relationship and the acquisition position.
  • the pan-tilt and/or the auxiliary device may be controlled based on the control parameters.
• correspondingly, controlling at least one of the pan-tilt and the auxiliary device based on the control parameters may include: controlling the pan/tilt according to the control parameters to realize the following operation on the target object.
• in a specific application example, the camera may include a manual lens, a sensor, and an external focus motor.
  • the sensor may include: a sensor that can support phase focus operation, and a sensor that does not support phase focus operation.
  • the external focus motor can obtain the phase focus information (Phase Detection Auto Focus, PDAF for short) through the sensor's communication interface (such as the USB interface).
  • the user can obtain the designated area by clicking on the display area, and then can obtain the PDAF information corresponding to the designated area.
• if the sensor supports the phase focus operation, the pre-calibrated phase-to-focus-ring-position curve can be obtained, and the focus ring position can be calculated directly from that curve and the PDAF information, so that the external focus motor can be controlled to rotate to the target focus ring position, realizing a quick focus operation of the lens.
• PDAF information is in-camera information; in some instances it is not easy to obtain, and in some instances it can be obtained by cooperating with camera manufacturers. If the sensor does not support the phase focus operation, the external motor can obtain contrast focus (Contrast Detection Auto Focus, CDAF for short) information by obtaining the contrast information of the focus area, so that the contrast focus function can be realized.
  • the automatic focusing method in this application embodiment may include the following steps:
  • Step 1 The external focus motor obtains the mechanical limit of the manual lens through calibration.
  • the mechanical limit includes a near-end position and a far-end position, which are the left value L and the right value R respectively. It should be noted that the near-end position may correspond to the minimum focal length value, and the far-end position may correspond to the maximum focal length value.
  • Step 2 Obtain the current position C of the external focus motor, and calculate the contrast F of the focus area at the current position C (between L and R).
  • Step 3 Calculate the rotational speed and focusing direction of the external focusing motor based on the current position C, the near-end position L, and the far-end position R of the external focusing motor.
  • the rotation speed of the external motor can be calculated as follows: in order to accurately obtain the CDAF information, an excitation signal can be obtained, and the excitation signal corresponds to frequency information from which the corresponding time information can be obtained. The distance between the current position C and the near-end position L or far-end position R can then be determined, and the rotational speed of the external focusing motor (which is positively correlated with the focusing speed) can be determined based on this distance information and the above time information.
  • a maximum motor speed can be pre-configured, and the motor speeds used to control the movement of the external focus motor are all less than or equal to the maximum motor speed.
  • Step 4 Control the external focus motor to move based on the rotational speed and the focusing direction, and obtain the updated position C′ of the external focus motor.
  • Step 5 Determine the updated rotational speed S and the updated focusing direction of the external focusing motor based on the updated position C′.
  • Step 6 Detect whether the updated rotational speed S reaches the convergence threshold.
  • Step 7 When the updated speed S reaches the convergence threshold, the focusing operation is complete; when the updated speed S does not reach the convergence threshold, the next frame of image is acquired based on the updated speed S.
  • Step 8 When the next frame of image does not meet the in-focus condition, calculate the contrast Fn of the focus area at the new motor position Cn.
  • Step 9 Determine whether the focus point corresponding to the current position is clearer by comparing the values of Fn and F.
  • Step 10 Update the left and right values according to the comparison result, return to the calculation of the focusing direction and rotation speed, and repeat until the rotation speed satisfies the end condition, thereby realizing the contrast focusing (CDAF) operation, as sketched below.
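  • The following is a minimal sketch of steps 1 through 10 as a contrast hill-climb between the calibrated limits. It simplifies the patent's speed computation by using the shrinking search interval as the convergence quantity, and measure_contrast is a hypothetical stand-in for the real focus-area contrast metric:

```python
# Minimal CDAF sketch: shrink the [L, R] interval around the contrast peak
# until the step size (a proxy for the motor speed) converges.

def measure_contrast(pos):
    # Invented unimodal contrast curve peaking at 0.62, for illustration.
    return 1.0 - (pos - 0.62) ** 2

def cdaf(left, right, conv_threshold=1e-3, max_iters=100):
    c = (left + right) / 2.0            # Step 2: current motor position C
    f = measure_contrast(c)             # contrast F at C
    for _ in range(max_iters):
        step = (right - left) / 4.0     # Steps 3/5: step from remaining travel
        if step <= conv_threshold:      # Steps 6/7: convergence check
            return c
        for cn in (c - step, c + step): # Step 8: probe new positions Cn
            fn = measure_contrast(cn)   # contrast Fn at Cn
            if fn > f:                  # Step 9: is Cn sharper than C?
                c, f = cn, fn
        # Step 10: narrow the limits around the best position and repeat.
        left, right = max(left, c - step), min(right, c + step)
    return c

print(cdaf(0.0, 1.0))  # converges near the contrast peak at 0.62
```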
  • the camera may include a large aperture mode and a small aperture mode. In the large aperture mode (the aperture value is greater than or equal to a preset threshold), the obtained PDAF information has limited accuracy, so fast autofocus can be achieved by combining the PDAF information with CDAF information. In the small aperture mode (the aperture value is smaller than the preset threshold), the obtained PDAF information has high precision, so a fast autofocus operation can be realized directly from the PDAF information.
  • this application example provides a method for an automatic Hitchcock dolly zoom effect based on camera information. The dolly zoom requires the camera to be equipped with a zoom lens. To realize this Hitchcock camera movement, after the intelligent follow target is confirmed, the framed follow target must be tracked while the shooting device moves back and forth; when the focal length of the lens remains the same, the proportion of the follow target in the picture changes (nearer appears larger, farther appears smaller). To keep the proportion of the intelligent follow target in the picture constant, the dolly zoom effect can be achieved by controlling the zoom. There are two ways to control the zoom in a dolly zoom: (1) through the zoom motor of the gimbal; (2) through the camera interface (if the camera has a corresponding interface). Specifically, the method may include the following steps:
  • Step S21 calibrating the movement range of the zoom motor according to the zoom range of the camera.
  • the zoom range of the zoom motor needs to be calibrated in advance, so that the correspondence between the lens travel and the zoom motor travel can be obtained; this prevents the camera lens from being damaged by exceeding the zoom range during the zoom operation.
  • Step S22 calibrating the movement direction of the zoom motor.
  • since the movement direction of the zoom motor depends on how the mechanical structure and the zoom lens are installed relative to each other, the relationship between the motor's movement direction and the zoom direction (zoom in / zoom out) needs to be calibrated in advance; this lets the user know whether the displayed picture is zooming in or out while the motor moves.
  • Step S23 frame-select the intelligent follow target, and record the proportion of the initially selected target in the picture.
  • the proportion of the initially selected target in the picture can be determined separately or jointly by three indicators: (1) the ratio of the follow-target selection box length to the picture length; (2) the ratio of the follow-target selection box width to the picture width; (3) the ratio of the follow-target selection box area to the picture area. Among these, the width ratio is relatively stable and reliable.
  • Step S24 controlling the zoom motor according to the proportion of the initially selected target in the picture, so that the target is displayed at a consistent size on the screen.
  • the initial target ratio can be calculated in real time and used to drive the zoom motor or the electronic zoom output, and the target is controlled to be displayed at a consistent size on the screen based on the initial target ratio.
  • the display picture is obtained by the camera, the initial target ratio is obtained from the display picture, and the initial target ratio is input to a proportional-integral-derivative (PID) control unit, so that the PID control unit can generate a control signal for the zoom motor based on the initial target ratio; the zoom motor is then controlled based on this signal, ensuring that the target's displayed size on the screen remains constant while the motor moves.
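  • A minimal sketch of step S24, assuming a simple PID loop that drives the zoom motor so the target's width ratio stays at its initial value (gains, frame rate, and the motor command convention are invented for illustration):

```python
# Minimal dolly-zoom sketch: PID on (initial ratio - current ratio), whose
# output is a zoom-motor velocity command.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def dolly_zoom_step(pid, initial_ratio, current_ratio):
    """Return a zoom-motor velocity command (its sign follows the zoom
    direction calibrated in step S22) restoring the initial width ratio."""
    err = initial_ratio - current_ratio   # > 0: target too small -> zoom in
    return pid.update(err)

pid = PID(kp=4.0, ki=0.5, kd=0.1, dt=1 / 30)  # one update per video frame
print(dolly_zoom_step(pid, 0.30, 0.25))       # target shrank: zoom in
```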
  • the technical solution provided by this application example effectively solves the difficulty of manually controlling the dolly-zoom camera movement in the prior art, ensures the quality and effect of the dolly-zoom operation, and further improves the practicability of the method.
  • a gimbal image stabilization (GIS) unit can be set on the gimbal.
  • the optical image stabilization (OIS) unit inside the camera can usually move only within a relatively small range.
  • the stabilization unit GIS on the gimbal can be used to suppress the shaking of the picture.
  • this application embodiment provides a control method combining camera anti-shake (OIS) with gimbal anti-shake, which, on the basis that the gimbal can obtain the OIS information inside the camera, performs corresponding avoidance operations against anti-shake interference.
  • the method may include the following steps:
  • Step 31 Acquire the OIS information of the camera, and obtain the OIS sensitivity function by testing the OIS information of the camera.
  • the camera can be controlled to turn on OIS, and the gimbal can then perform a frequency-sweep operation under the control of excitation signals of set frequencies, where the set frequencies may include any of the following: 1 Hz, 2 Hz, 5 Hz, 10 Hz, 20 Hz, and so on. The captured images of the camera corresponding to the different excitation signals are obtained, and the camera's OIS sensitivity function at the different excitation frequencies can be derived from them.
  • Step 32 Acquire the current excitation signal of the gimbal.
  • Step 33 Determine, from the OIS sensitivity function, whether the current excitation signal is sufficiently suppressed at the corresponding set frequency.
  • Step 34 If the current excitation signal is not sufficiently suppressed at the set frequency corresponding to the OIS sensitivity function, the OIS in the camera cannot sufficiently suppress the jitter of the gimbal or image acquisition device. In this case, the GIS on the gimbal is used for the stabilization operation, or the OIS in the camera and the GIS on the gimbal are combined to perform the stabilization operation.
  • Step 35 If the current excitation signal is sufficiently suppressed at the set frequency corresponding to the OIS sensitivity function, the OIS in the camera alone can sufficiently reduce the effect of the jitter on the images collected by the camera. In this case, the GIS on the gimbal can be turned off and the anti-shake operation performed by the OIS; alternatively, the excitation signal of the gimbal can be suppressed to obtain a suppressed signal, the gimbal controlled based on the suppressed signal, and the OIS left to perform the anti-shake operation.
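  • A minimal sketch of the decision in steps 32 to 35, assuming the OIS sensitivity function is stored as a table of residual-shake ratios per excitation frequency (all values and the threshold are invented):

```python
# Minimal sketch: choose the stabilization plan per excitation frequency
# from an OIS sensitivity table (residual shake after OIS, per frequency).

OIS_SENSITIVITY = {1: 0.2, 2: 0.3, 5: 0.6, 10: 0.9, 20: 1.0}  # Hz -> residual
SUPPRESSION_OK = 0.5   # hypothetical threshold for "sufficient suppression"

def stabilization_plan(excitation_hz):
    residual = OIS_SENSITIVITY[excitation_hz]
    if residual <= SUPPRESSION_OK:
        # Step 35: OIS alone suffices; GIS may be turned off, or the gimbal
        # excitation suppressed while OIS performs the anti-shake operation.
        return "OIS only"
    # Step 34: OIS cannot suppress this frequency; use the gimbal's GIS,
    # alone or combined with the camera's OIS.
    return "GIS (or GIS + OIS)"

for f in (1, 5, 10):
    print(f, "Hz ->", stabilization_plan(f))
```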
  • the OIS in the camera can also filter out shaking caused by the user.
  • the cadence of the user's walking may affect the stable display of the picture.
  • the human body cadence can be recognized by the gimbal IMU, and the identified cadence information can then be fed back to the camera. The camera can configure its signal filter based on the identified cadence information; after the parameter configuration is completed, the signal filter in the camera can compensate along the translation direction, which reduces or eliminates the gimbal jitter caused by the user's cadence.
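  • As an illustration of this cadence compensation, the sketch below estimates the walking cadence from gimbal IMU samples and derives standard RBJ notch-filter coefficients that a camera-side signal filter could be configured with; the sample rate, test signal, and interface are assumptions, not the patent's:

```python
# Minimal sketch: estimate the user's cadence from IMU samples, then build
# a notch (band-reject) biquad centered on that cadence frequency.

import math
import numpy as np

def dominant_frequency(imu_accel, fs):
    """Return the strongest non-DC frequency (Hz) in the IMU signal."""
    spectrum = np.abs(np.fft.rfft(imu_accel - np.mean(imu_accel)))
    freqs = np.fft.rfftfreq(len(imu_accel), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

def notch_coefficients(f0, fs, q=5.0):
    """RBJ notch biquad coefficients (b, a) centered at f0."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = (1.0, -2.0 * math.cos(w0), 1.0)
    a = (1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha)
    return b, a

fs = 200.0                                   # assumed IMU sample rate (Hz)
t = np.arange(0, 4, 1.0 / fs)
imu = np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.randn(len(t))  # ~2 Hz walk
cadence = dominant_frequency(imu, fs)
print(cadence, notch_coefficients(cadence, fs))
```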
  • the technical solution provided by this application embodiment solves the problem of anti-shake interference by fusing the information of the camera's gyroscope sensor, effectively ensuring the stabilization effect of the gimbal and the camera; this helps improve the display quality and effect of the images collected by the camera and ensures the practicability of the control method.
  • Application example 4: for the gimbal and the camera supported by the gimbal, when operating in a low-light environment, the camera usually adjusts the ISO automatically to achieve exposure balance of the overall picture.
  • however, when the ISO rises beyond a certain level, picture noise increases and picture quality deteriorates.
  • this application embodiment provides a method for automatically adjusting aperture and external light exposure to achieve exposure balance. Specifically, the method may include the following steps:
  • Step 41 Obtain the actual metering value through the camera.
  • Step 42 Determine the target exposure value corresponding to the camera.
  • Step 43 Input the actual light metering value and the target exposure value to the PID unit, so that control parameters corresponding to the fill light and the aperture motor can be generated.
  • the actual metering value and the target exposure value can be input to the PID control unit, so that the PID control unit can generate control signals for the fill light and the aperture motor based on the actual metering value and the target exposure value; the fill light and the aperture motor can then be controlled based on these signals, maintaining exposure balance during shooting and ensuring the quality and effect of the image acquisition operation.
  • Step 44 Controlling the fill light and the aperture motor based on the control parameters to realize the fill light operation.
  • since the aperture of the camera cannot be directly controlled through the bayonet in the aperture-priority exposure mode, the technical solution in this application realizes aperture control by driving an external motor, thereby adjusting the exposure value; this amounts to a simple PID closed-loop controller. In the same way, when the aperture is already wide open, an external fill light can be added to increase the light and balance the exposure.
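  • A minimal sketch of this closed loop, assuming the PID output is spent first on the aperture motor and, once the aperture is wide open, on the external fill light (limits, gains, and units are invented):

```python
# Minimal exposure-balance sketch: PI control on (target EV - metered EV);
# the aperture motor absorbs the command first, the fill light the rest.

def exposure_step(state, metered_ev, target_ev, kp=0.5, ki=0.1, dt=0.1):
    err = target_ev - metered_ev          # > 0: image too dark
    state["integral"] += err * dt
    u = kp * err + ki * state["integral"] # PI output (illustrative units)

    # Prefer opening the aperture (0 = closed, 1 = wide open).
    new_aperture = min(1.0, max(0.0, state["aperture"] + u))
    spent = new_aperture - state["aperture"]
    state["aperture"] = new_aperture

    # Whatever the aperture could not provide goes to the fill light.
    state["fill_light"] = min(1.0, max(0.0, state["fill_light"] + (u - spent)))
    return state

state = {"integral": 0.0, "aperture": 0.9, "fill_light": 0.0}
print(exposure_step(state, metered_ev=6.0, target_ev=8.0))
```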
  • the technical solution provided by this application example solves, through external light control, the problem that the ISO cannot be fixed in the manual exposure mode, further improves the practicability of the control method, and is beneficial to market promotion and application.
  • this embodiment provides a focus control method.
  • the execution body of the focus control method may be a focus control device, and the focus control device may be implemented as software or a combination of software and hardware.
  • the focus control method can be applied to a pan/tilt.
  • the pan/tilt is used to support the image acquisition device and auxiliary equipment.
  • the auxiliary equipment may include a follow focus motor for adjusting the follow focus ring of the image acquisition device.
  • the focus control method can include:
  • Step S1001 Acquire the focus information determined by the image acquisition device;
  • Step S1002 Based on the focus information, determine control information corresponding to the focus motor;
  • Step S1003 control the focus motor based on the control information to realize the follow focus operation of the image acquisition device.
  • this embodiment provides a zoom control method.
  • the execution body of the zoom control method may be a zoom control device, and the zoom control device may be implemented as software or a combination of software and hardware.
  • the zoom control method can be applied to a pan/tilt.
  • the pan/tilt is used to support the image capture device and auxiliary equipment, and the auxiliary device may include a zoom motor for adjusting the zoom ring of the image capture device.
  • the zoom control method in this embodiment can include:
  • Step S1101 Acquire a captured image collected by an image capturing device and zoom information corresponding to the captured image.
  • Step S1102 Determine the control parameters of the zoom motor based on the captured image and the zoom information.
  • Step S1103 control the zoom motor based on the control parameters to realize the zoom operation of the image capture device.
  • the present embodiment provides a supplementary light control method.
  • the execution body of the supplementary light control method can be a supplementary light control device, and the supplementary light control device can be implemented as software or a combination of software and hardware.
  • the light-filling control method can be applied to a pan/tilt.
  • the pan/tilt is used to support an image acquisition device and auxiliary equipment.
  • the auxiliary equipment may include a light-filling device for performing a supplementary light operation on the image acquisition device.
  • the fill light control method in this embodiment may include:
  • Step S1201 Acquire the light detection information determined by the image acquisition device.
  • Step S1202 Based on the light detection information, determine a control parameter corresponding to the light supplement device.
  • Step S1203 control the light-filling device based on the control parameters, so as to realize the light-filling operation of the image acquisition device.
  • this embodiment provides an anti-shake control method.
  • the execution body of the anti-shake control method may be an anti-shake control device, and the anti-shake control device may be implemented as software or a combination of software and hardware.
  • the anti-shake control method can be applied to a pan/tilt.
  • the pan/tilt is used to support an image acquisition device; a body anti-shake (IBIS) unit may be provided on the pan/tilt, and the image acquisition device includes an anti-shake control unit.
  • the anti-shake control unit can perform jitter compensation on at least one of the optical element and the image sensor in the image capture device based on the shooting parameters, so as to adjust the captured image captured by the image capture device. Specifically, the anti-shake control method in this embodiment may include:
  • Step S1301 Acquire the anti-shake information determined by the anti-shake control unit set in the image acquisition device, where the anti-shake information is determined according to the position change information of the image acquisition device.
  • Step S1302 Determine the current excitation signal of the gimbal.
  • Step S1303 Determine control parameters based on the current excitation signal and anti-shake information.
  • Step S1304 Control the gimbal based on the control parameters to realize the stabilization operation of the gimbal.
  • the method in this embodiment may further include: detecting the communication connection state between the image acquisition device and the PTZ. When the image acquisition device is communicatively connected to the PTZ, that is, the image acquisition device is applied to the PTZ, the IBIS unit set on the PTZ can be started to realize the anti-shake operation through the IBIS unit; when the communication connection between the image acquisition device and the PTZ is disconnected, that is, the image acquisition device is separated from the PTZ, the lens optical anti-shake (OIS) unit corresponding to the image acquisition device can be activated, so as to realize the anti-shake operation through the OIS unit.
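  • A minimal sketch of this connection-state switching (the manager class and its interface are hypothetical, not the patent's implementation):

```python
# Minimal sketch: start the PTZ-side IBIS unit while the camera is connected
# to the PTZ, and fall back to the lens OIS unit when it is detached.

class StabilizationManager:
    def __init__(self):
        self.ibis_on = False
        self.ois_on = False

    def on_connection_change(self, connected: bool):
        if connected:
            # Camera applied to the PTZ: stabilize with the IBIS unit.
            self.ibis_on, self.ois_on = True, False
        else:
            # Camera separated from the PTZ: activate the lens OIS unit.
            self.ibis_on, self.ois_on = False, True
        return self.ibis_on, self.ois_on

mgr = StabilizationManager()
print(mgr.on_connection_change(True))   # (True, False): IBIS active
print(mgr.on_connection_change(False))  # (False, True): OIS active
```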
  • FIG. 13 is a schematic structural diagram of a control device based on an image acquisition device provided by an embodiment of the present invention; with reference to FIG. 13 , the present embodiment provides a control device based on an image acquisition device.
  • the control device is configured to execute the control method based on the image acquisition device shown in FIG. 1.
  • the control device based on the image acquisition device may include:
  • the processor 11 is used for running the computer program stored in the memory 12 to realize: acquiring the shooting parameters determined by the image acquisition device, where the shooting parameters can be used to adjust the captured images collected by the image acquisition device; and correspondingly controlling at least one of the pan/tilt head and the auxiliary equipment, wherein the pan/tilt head is used to support the image acquisition device and/or the auxiliary equipment, and the auxiliary equipment is used to assist the image acquisition device in performing corresponding shooting.
  • the structure of the control device based on the image acquisition device may further include a communication interface 13 for the electronic device to communicate with other devices or a communication network.
  • the apparatus shown in FIG. 13 may execute the method of the embodiment shown in FIG. 1 to FIG. 12 .
  • an embodiment of the present invention provides a computer-readable storage medium, in which program instructions are stored; the program instructions are used to implement the above control method based on an image acquisition device of FIG. 1 to FIG. 12.
  • FIG. 14 is a schematic structural diagram of a pan/tilt provided by an embodiment of the present invention. with reference to FIG. 14 , the present embodiment provides a pan/tilt, and the pan/tilt may include:
  • the control device 24 based on the image acquisition device shown in FIG. 13 is disposed on the main body 21 of the pan/tilt head.
  • the implementation principle and technical effect of the pan/tilt in this embodiment are similar to those of the control device based on the image acquisition device; for details, refer to the description of the embodiment shown in FIG. 13, which will not be repeated here.
  • the pan/tilt in this embodiment may further include:
  • the image acquisition device 22 is arranged on the main body 21 of the PTZ;
  • the auxiliary device 23 is arranged on the main body 21 of the pan/tilt head, and is used to assist the image capturing device 22 to perform corresponding shooting.
  • the implementation principle and technical effect of the pan/tilt in this embodiment are similar to those of the control device based on the image acquisition device; for details, refer to the description of the embodiment shown in FIG. 13, which will not be repeated here.
  • in this way, the gimbal is not limited to anti-shake stabilization during video shooting, but can also be extended with other operating modes, which helps ensure the user's video shooting experience.
  • the following two points are the most important: one is how to obtain the position information of the subject in the picture; the other is how to control the movement of the gimbal to keep the subject at the composition position, such as the center of the frame.
  • the existing method for controlling a third-party camera to realize intelligent follow mainly introduces an image processing device (including the image signal processing device described below) in the PTZ; the image processing device can obtain the camera's real-time image through a High Definition Multimedia Interface (HDMI) or another interface, input the real-time image to an AI machine learning unit (implemented in software), and thereby obtain the real-time position of the subject to be photographed in the third-party camera.
  • the camera 100 is used as a third-party load and can be connected to an image signal processing (Image Signal Processing, ISP for short) device through an HDMI interface. The image signal processing device can include: an ISP module 1011, a buffer 1012, a real-time video output device 1013, a format converter 1014, a machine learning model 1015 and a policy processor 1016. The ISP module 1011 can analyze and process the received image and transmit the processed image data to the buffer 1012 for buffering; the buffered image data can not only be output in real time by the real-time video output device 1013, but also be format-converted by the format converter 1014, so that the converted image data can be input into the machine learning model 1015 for the machine learning operation, identifying the to-be-followed subject set by the user.
  • the policy processor 1016 can determine the control parameters of the gimbal according to the strategy, and the gimbal controller 102 can then control the gimbal based on these control parameters, so that the gimbal performs an intelligent follow operation on the followed subject.
  • however, the video signal transmitted over the HDMI interface has a large delay, which directly degrades the effect of the follow operation; moreover, the delay of the HDMI interface differs between cameras, making the algorithm difficult to normalize.
  • this embodiment provides a control method, device, movable platform and storage medium of a pan/tilt head.
  • the control method obtains the collection position of the target object in the collected image, determines the control parameters for following the target object based on the collection position, and then controls the PTZ according to the control parameters, so that the following operation on the target object can be realized. Since the acquisition position is determined by the image acquisition device and the PTZ can acquire it directly from the image acquisition device, the delay in obtaining the acquisition position is effectively reduced; this solves the problem of poor follow effect caused by a relatively large delay, further ensures the quality and effect of the gimbal's follow operation, and effectively improves the stability and reliability of the method.
  • FIG. 16 is a schematic flowchart of a control method of a pan/tilt according to an embodiment of the present invention
  • FIG. 17 is a schematic structural diagram of a communication connection between a pan/tilt and an image acquisition device provided by an embodiment of the present invention; refer to FIG. 16 to FIG. 17.
  • this embodiment provides a control method for a pan-tilt, wherein the pan-tilt is communicatively connected with an image acquisition device.
  • the image acquisition device refers to a device with image acquisition capability and image processing capability, such as a camera, a video camera, or another device with both image acquisition and image processing capabilities.
  • a universal serial bus (USB) interface may be provided on the pan/tilt for a wired communication connection with the image capture device; that is, the pan/tilt is communicatively connected to the image capture device through the USB interface.
  • the PTZ exchanges the position data of the object to be followed with the image acquisition device through the USB interface; since no additional image transmission module needs to be installed, the delay corresponding to the position data of the object to be followed transmitted between the PTZ and the image acquisition device is relatively short. For example, when the position data of the object to be followed is obtained in some other way (for example, over the HDMI interface), the corresponding delay time is t1; when the position data is transmitted through the USB interface, the corresponding delay time is t2, where t2 < t1 or t2 ≪ t1.
  • the communication connection between the PTZ and the image acquisition device is not limited to the implementation defined above; those skilled in the art can also set it according to specific application requirements and application scenarios, for example a wireless communication connection, or any connection that ensures a relatively short delay when the PTZ and the image acquisition device transmit data, which is not repeated here.
  • the execution subject of the control method of the pan/tilt may be a control device of the pan/tilt.
  • the control device can be implemented as software, or as a combination of software and hardware; when the control device executes the control method of the pan-tilt, it can solve the problem of poor follow effect caused by the long delay of data transmission through the interface, thereby ensuring the quality and effect of the following operation on the target object.
  • the method may include:
  • Step S1601 Acquire the acquisition position of the target object in the acquired image, where the acquisition position is determined by an image acquisition device; the image acquisition device is a camera with a manual lens or an automatic lens and is communicatively connected to the PTZ.
  • Step S1602 Based on the collection position, determine the control parameters used to follow the target object.
  • Step S1603 Control the PTZ according to the control parameters, so as to realize the following operation of the target object.
  • Step S1601 acquiring the acquisition position of the target object in the acquired image, where the acquisition position is determined by the image acquisition device.
  • the image acquisition device can be set on the pan-tilt and used for image acquisition operation. After acquiring the acquired image, the image acquisition device can analyze and process the acquired image to determine the acquisition position of the target object in the acquired image.
  • the collection position of the target object in the captured image may include: key point positions corresponding to the target object in the captured image, or a coverage area corresponding to the target object in the captured image, and so on.
  • the acquisition position can be obtained by the image acquisition device by directly sampling the pixels of the image, or it can be obtained by processing the sampling results of the pixels of the image.
  • for example, the center position of the following frame can be determined from a vertex and the size of the following frame, and the center position is used as the acquisition position of the target object in the collected image.
  • the collection position may be determined or represented according to the relevant information of the position of the following frame in the collected image.
  • the following frame may be obtained when the user performs a touch operation on the display screen of the image acquisition device; for example, when the user selects an object on the display screen of the image acquisition device, a follow frame selecting at least part of the object can be generated.
  • the acquisition position of the target object in the acquired image can be actively or passively transmitted to the PTZ through the USB interface, so that the PTZ can obtain the acquisition position of the target object in the acquired image.
  • Step S1602 Based on the collection position, determine the control parameters used to follow the target object.
  • the acquisition position can be analyzed and processed to determine a control parameter for following the target object, and the control parameter can include at least one of the following: attitude information, angular velocity information, acceleration information, control bandwidth, etc.
  • in some embodiments, determining the control parameters may include: calculating a current position prediction value corresponding to the acquisition position; and determining, based on the current position prediction value, the control parameters for the following operation on the target object.
  • the acquired image can be analyzed and processed to obtain the acquisition position of the target object in the acquired image, and the acquisition position is then transmitted to the PTZ. Since it takes a certain amount of time for the image acquisition device to determine the acquisition position and to transmit it to the PTZ, there is a certain delay when the PTZ acquires the position directly from the image acquisition device. To reduce the influence of this delay on the intelligent follow operation, the predicted value of the current position corresponding to the collection position can be calculated based on the above delay time; it can be understood that the current position prediction value and the collection position are different positions. After the predicted value of the current position is obtained, it can be analyzed and processed to determine the control parameters used to follow the target object, effectively ensuring the accuracy and reliability of control parameter determination.
  • Step S1603 Control the PTZ according to the control parameters, so as to realize the following operation of the target object.
  • the PTZ can be controlled based on the control parameters, so that the following operation of the target object can be realized.
  • in the related art, the acquisition position is calculated on the pan/tilt side, and an image transmission module and an image signal processing device need to be additionally installed on the pan/tilt side to acquire the captured image of the image acquisition device, analyze and process the captured image, and perform the follow operation.
  • in this embodiment, the image signal processing function of the image acquisition device is multiplexed, so that the pan/tilt side can realize the following operation on the target object without an additional image signal processing device and without an image transmission module.
  • moreover, the bandwidth required to transmit the captured image is much larger than the bandwidth required to transmit the acquisition position, so reducing the transmitted data can reduce the delay of the following operation to a certain extent, effectively improving the efficiency and accuracy of following the target object.
  • of course, an image transmission module can also be provided on the PTZ side to display the acquired images there, and an image signal processing device can also be provided to adapt to different image acquisition devices, for example, image acquisition devices from which the acquisition position cannot be obtained.
  • when the image acquisition device is provided with an image signal processing device, its processing capability can take over the role of the image signal processing device on the pan/tilt side; likewise, the recognition capability of the machine learning model of the image acquisition device can take over the role of the machine learning model of the PTZ. On this basis, with the data transmitted from the image acquisition device to the PTZ reduced and the recognition performed on the image acquisition device rather than the PTZ, when the PTZ performs the follow operation, the data transmission time 1 from the image acquisition device to the controller of the PTZ is shortened accordingly; and because there is no need to wait for the PTZ side to perform recognition, the data transmission time 2 from the controller of the gimbal to the motor of the gimbal is also shortened. The data transmission time at both nodes is thus reduced simultaneously, reducing the delay of the gimbal's follow operation function.
  • when the target object is followed, the pan/tilt may be in different motion states; for example, the pan/tilt may be in uniform motion, uniformly accelerating motion, uniformly decelerating motion, and the like.
  • the gimbal with different motion states can have different control strategies.
  • controlling the pan-tilt according to the control parameters may include: acquiring the motion state of the pan-tilt corresponding to the target object; and controlling the pan-tilt based on the motion state and control parameters of the pan-tilt.
  • the motion state of the pan/tilt can be determined according to the motion state of the target object: if the target object moves at a uniform speed, the pan/tilt may be in uniform motion; if the target object is uniformly accelerating, the pan/tilt may be uniformly accelerating; if the target object is uniformly decelerating, the pan/tilt may be uniformly decelerating.
  • the motion state of the gimbal is related to the following duration, for example, during initial following, it may be a uniform acceleration motion; it may also be related to the following state, for example, when the tracking target is lost, it may be a uniform acceleration motion.
  • the motion state of the pan/tilt corresponding to the target object may be acquired.
  • this embodiment does not limit the specific acquisition method of the motion state of the pan/tilt corresponding to the target object, and those skilled in the art can set it according to specific application requirements and design requirements.
  • for example, multiple frames of captured images can be acquired, and the multi-frame captured images analyzed and processed to determine the moving speed corresponding to the pan/tilt; the motion state of the pan/tilt corresponding to the target object is then determined based on the moving speed.
  • the motion state of the pan/tilt can include one of the following: uniformly accelerating motion, uniformly decelerating motion, uniform motion, etc.
  • alternatively, an inertial measurement unit may be provided on the gimbal, and the motion state of the gimbal corresponding to the target object obtained through the inertial measurement unit. After the motion state of the gimbal is obtained, the gimbal can be controlled based on the motion state and the control parameters to realize the following operation on the target object, thereby effectively improving the quality and efficiency of the following operation (see the sketch below).
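  • As an illustration of classifying the motion state from multi-frame speeds, the sketch below labels the gimbal's motion as uniform, uniformly accelerating, or uniformly decelerating; the tolerance is an invented tuning value:

```python
# Minimal sketch: classify the gimbal's motion state from per-frame speed
# estimates derived from multiple captured frames.

def motion_state(speeds, dt, tol=0.05):
    """speeds: per-frame speed estimates; dt: frame interval in seconds."""
    accels = [(b - a) / dt for a, b in zip(speeds, speeds[1:])]
    mean_accel = sum(accels) / len(accels)
    if abs(mean_accel) < tol:
        return "uniform motion"
    return "uniform acceleration" if mean_accel > 0 else "uniform deceleration"

print(motion_state([1.0, 1.2, 1.4, 1.6], dt=1 / 30))  # -> uniform acceleration
```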
  • in summary, the control method of the pan-tilt obtains the acquisition position of the target object in the acquired image, determines the control parameters for following the target object based on the acquisition position, and controls the pan-tilt according to the control parameters, so that the following operation on the target object can be realized. Since the acquisition position is determined by the image acquisition device and the PTZ can acquire it directly from the image acquisition device, the corresponding delay when the PTZ acquires the acquisition position is effectively reduced; this solves the problem of poor following effect caused by a relatively large delay, further ensures the quality and effect of the following operation on the target object, and effectively improves the stability and reliability of the method.
  • FIG. 19 is a schematic flowchart of obtaining a collection position of a target object in a captured image according to an embodiment of the present invention; on the basis of the above embodiment, with continued reference to FIG. 19, this embodiment provides an implementation manner of acquiring the acquisition position of the target object in the acquired image. Specifically, acquiring the acquisition position of the target object in the acquired image in this embodiment may include:
  • Step S1901 Acquire a target focus position corresponding to the target object through an image acquisition device.
  • Step S1902 Determine the target focus position as the capture position of the target object in the captured image.
  • in the related art, the focusing operation of the image acquisition device and the following operation of the pan/tilt or drone are two completely independent operations.
  • the tracking object of the gimbal or the drone cannot be adjusted in time based on the change of the focusing object, so the quality and effect of the following operation cannot be guaranteed.
  • for example, an image acquisition device can be mounted on a UAV through the gimbal, and the control parameters of the UAV and/or the gimbal can then be adjusted to realize the following operation.
  • the present embodiment provides a technical solution in which the focusing operation of the image acquisition device and the follow operation of the gimbal or drone are associated. Specifically, in the field of camera technology, when the acquisition position of the target object in the acquired image is acquired by the image acquisition device and the focus point of the image acquisition device for the target object differs from the acquisition position, controlling the pan-tilt based on the acquisition position to follow the target object easily causes the target object captured by the image acquisition device to be out of focus.
  • in this embodiment, the target focus position corresponding to the target object can be acquired through the image acquisition device. It is understandable that the above-mentioned target focus position may be a focus position selected by the user or an automatically recognized focus position.
  • the target focus position corresponding to the target object can be directly determined as the capture position of the target object in the captured image; that is, the focus position corresponding to the target object is made consistent with the acquisition position of the target object in the captured image, thereby effectively avoiding the target object being out of focus.
  • in some embodiments, determining the target focus position as the capture position of the target object in the captured image may include: acquiring a preset area range corresponding to the target focus position, and directly determining the preset area range as the acquisition position of the target object in the acquired image.
  • the preset area range corresponding to the target focus position may be at least a partial coverage area corresponding to the target object in the captured image that includes the target focus position; in this way, the focus position corresponding to the target object and the acquisition position of the target object in the captured image are basically the same, which also avoids the target object being out of focus.
  • the target focus position corresponding to the target object is acquired by the image capture device, and then the target focus position is determined as the capture position of the target object in the captured image, thereby effectively realizing the focus position corresponding to the target object It is basically the same as the acquisition position of the target object in the acquired image, thereby effectively preventing the target object from appearing out of focus, and further improving the quality and effect of the following operation on the target object.
  • in the related art, the focusing and following operations on the target object are two independent operations: on the image acquisition device side, analysis can be performed based on the acquired image to obtain the target focus position of the target object and perform focusing; on the pan/tilt side, analysis can be performed based on the collected images to obtain the real-time position deviation of the target object and perform following.
  • the controller of the image acquisition device and the controller of the pan/tilt side perform image analysis and processing respectively, which not only increases the consumption of computing resources, but also may lead to inconsistencies in the image analysis and processing, resulting in the problem of jumping points.
  • in this embodiment, the target focus position is used to realize the following operation, which multiplexes the image analysis and processing capability of the image acquisition device and solves the jumping-point problem.
  • FIG. 20 is a schematic flowchart of obtaining a target focus position corresponding to a target object through an image acquisition device according to an embodiment of the present invention; on the basis of the above embodiment, with continued reference to FIG. 20, this embodiment provides an implementation manner of acquiring the target focus position. Specifically, acquiring the target focus position corresponding to the target object through the image acquisition device in this embodiment may include:
  • Step S2001 Acquire a historical focus position and a current focus position corresponding to the target object through an image acquisition device.
  • Step S2002 Determine a target focus position corresponding to the target object based on the historical focus position and the current focus position.
  • when the image acquisition device performs the image acquisition operation on the target object, the target object may be in a moving state, for example uniform motion, uniform acceleration, or uniform deceleration, and the different moving states of the target object easily cause the corresponding focus position to change during the image acquisition operation. Therefore, the historical focus position and the current focus position corresponding to the target object can be acquired through the image acquisition device.
  • the historical focus position refers to the focus position corresponding to a historical image frame obtained by the image capture device, and the current focus position refers to the focus position corresponding to the current image frame obtained by the image capture device.
  • determining the target focus position corresponding to the target object based on the historical focus position and the current focus position may include: determining a historical object part corresponding to the historical focus position and a current object part corresponding to the current focus position; According to the historical object part and the current object part, the target focus position corresponding to the target object is determined.
  • during the acquisition of multiple frames of images, multiple focus positions (including historical focus positions and the current focus position) corresponding to the multiple frames can be determined; these focus positions can be the same or different.
  • a historical image corresponding to the historical focus position and a current image corresponding to the current focus position can be determined; the historical image is then analyzed and processed based on the historical focus position to determine the historical object part corresponding to the historical focus position. For example, the historical image can be analyzed and processed to determine the target object contour and target object type in the historical image, and the correspondence between the historical focus position and the historical object part can then be determined based on the contour and type.
  • the current image can also be analyzed and processed based on the current focus position to determine the current object part corresponding to the current focus position.
  • the historical object part and the current object part can be analyzed and processed to determine the target focus position corresponding to the target object.
  • an image recognition algorithm or a pre-trained machine learning model can be used to analyze and identify the acquired image to identify at least one object included in the acquired image and the region where the object is located.
  • if certain focus positions fall within the area where the same object is located, it can be determined that those focus positions correspond to the same object; if certain focus positions fall within areas where different objects are located, it can be determined that they correspond to different objects. Alternatively, the distance information between any two focus positions can be determined: when the distance information is less than or equal to a preset threshold, it can be determined that the two focus positions correspond to the same object; when the distance information is greater than the preset threshold, it can be determined that the two focus positions correspond to different parts of the same object.
  • after the historical focus position and the current focus position are obtained, it can be determined whether they correspond to the same target object; when the historical focus position and the current focus position correspond to the same target object, it can further be determined whether they correspond to the same part of that object.
  • after the above information is determined, it can be transmitted to the PTZ, so that the PTZ can perform the follow control operation based on it, thereby ensuring the quality and effect of the intelligent follow operation.
  • a mapping relationship may be maintained between the focus position and the focused object (and the focus position of the focused object), each with respective attribute information, and the attribute information may carry a corresponding identifier. The mapping relationship and the attribute information may be sent to the PTZ through the image acquisition device, so that the PTZ can make corresponding judgments based on this information and formulate corresponding execution strategies.
  • determining the target focus position corresponding to the target object according to the historical object part and the current object part may include: when the historical object part and the current object part are different parts of the same target object, obtaining the historical object part and the current object part. The relative position information between the current object parts; the current focus position is adjusted based on the relative position information, and the target focus position corresponding to the target object is obtained.
  • the historical object part and the current object part can be analyzed and processed.
  • when the historical object part and the current object part are different parts of the same target object, it means that the focus position changed between the historical image and the current image; for example, the historical object part corresponding to the historical image frame is the eyes of person A, while the current object part corresponding to the current image frame is the shoulder of person A.
  • at this time, the relative position information between the historical object part and the current object part can be obtained, for example, the relative position information between person A's eyes and person A's shoulder; after the relative position information is acquired, the current focus position can be adjusted based on it to obtain the target focus position corresponding to the target object.
  • adjusting the current focus position based on the relative position information, and obtaining the target focus position corresponding to the target object may include: when the relative position information is greater than or equal to a preset threshold, adjusting the current focus position based on the relative position information, Obtain the target focus position corresponding to the target object; when the relative position information is smaller than the preset threshold, determine the current focus position as the target focus position corresponding to the target object.
  • the relative position information can be analyzed and compared with the preset threshold. When the relative position information is greater than or equal to the preset threshold, it means that when the image acquisition device focuses on a target object, the focus positions for the same target object differ at different times; the current focus position can then be adjusted based on the relative position information to obtain the target focus position corresponding to the target object. When the relative position information is less than the preset threshold, it means that the focus position on the target object is basically unchanged at different times, and the current focus position can be determined as the target focus position corresponding to the target object.
  • the corresponding historical object part can be determined based on the historical focus position, and the corresponding current object part can be determined based on the current focus position.
  • the historical object part and the current object part can be analyzed and processed to determine the target focus position corresponding to the target object.
  • for example, the relative position information d1 between part 1 and part 2 can be obtained and compared with the preset threshold; when the relative position information d1 is less than the preset threshold, it means that the focus position changes only slightly when the image acquisition device focuses on the person, and the current focus position can be determined as the target focus position corresponding to the target object.
  • similarly, the relative position information d2 between part 3 and part 4 can be obtained and compared with the preset threshold. When the relative position information d2 is greater than the preset threshold, it means that the focus position changes greatly when the image acquisition device focuses on the person; the current focus position can then be adjusted based on the relative position information to obtain the target focus position corresponding to the target object. That is, when the target object being followed has not changed but the focus position has, the current focus position can be automatically adjusted based on the relative positional relationship between the parts of the target object, which effectively avoids image jumps.
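  • The jump-avoidance rule above can be illustrated with the following sketch; the distance threshold is invented, and the midpoint adjustment is only one hypothetical way of smoothing the focus position rather than the patent's specific adjustment:

```python
# Minimal sketch: keep the current focus position when it barely moved
# within the same target (d < threshold), otherwise smooth the change to
# avoid an image jump. Positions are in normalized image coordinates.

import math

def target_focus_position(hist_pos, curr_pos, threshold=0.08):
    d = math.dist(hist_pos, curr_pos)     # relative position information
    if d < threshold:
        # Small change (like d1 above): keep the current focus position.
        return curr_pos
    # Large change within the same target (like d2 above): adjust toward
    # the historical part instead of jumping.
    return tuple((h + c) / 2.0 for h, c in zip(hist_pos, curr_pos))

print(target_focus_position((0.50, 0.40), (0.52, 0.41)))  # small move: keep
print(target_focus_position((0.50, 0.40), (0.70, 0.60)))  # large move: smooth
```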
  • the historical object part and the current object part may be analyzed and processed to determine the target focus position corresponding to the target object.
  • determining the target focal position corresponding to the target object according to the historical object part and the current object part may include: when the historical object part and the current object part are different parts of the same target object, performing the composition position based on the current focal position update, and obtain the first updated composition position; and perform a following operation on the target object based on the first updated composition position.
  • the composition position can be updated based on the current focus position to obtain the first updated composition position.
  • for example, the preset composition target position is the center of the frame; at this time, to avoid image shake caused by the change of the target object, the preset composition position can be updated based on the current focus position, that is, the current focus position is determined as the first updated composition position. After the first updated composition position is acquired, the target object can be followed based on it, thereby ensuring the quality and efficiency of the follow operation on the target object.
  • in this embodiment, the historical focus position and the current focus position corresponding to the target object are acquired through the image acquisition device, and the target focus position corresponding to the target object is then determined based on them; this effectively ensures the accuracy and reliability of determining the target focus position, facilitates following the target object based on the target focus position, and further improves the practicability of the method.
  • FIG. 23 is a schematic flowchart of another pan/tilt control method provided by an embodiment of the present invention; on the basis of the foregoing embodiment, with continued reference to FIG. 23 , the method in this embodiment may further include:
  • Step S2301 Detecting whether the target object for the following operation has changed.
  • Step S2302 When the target object is changed from the first object to the second object, acquire the acquisition position of the second object in the acquired image.
  • Step S2303 Update the composition position based on the capture position of the second object in the captured image, obtain a second updated composition position corresponding to the second object, and perform a follow operation on the second object based on the second updated composition position.
  • during the following operation, it can be detected in real time whether the target object of the following operation has changed. Specifically, the historical focus position and the current focus position can be obtained, the historical target object corresponding to the historical focus position and the current target object corresponding to the current focus position can be identified, and it can then be determined whether the historical target object and the current target object are the same object, so as to detect whether the target object has changed.
  • when the target object is changed from the first object to the second object, the capture position of the second object in the captured image can be obtained, and the composition position can then be updated based on the capture position of the second object in the captured image, so that a second updated composition position corresponding to the second object is obtained.
  • specifically, the capture position of the second object in the captured image may be determined as the second updated composition position corresponding to the second object, and a following operation may then be performed on the second object based on the second updated composition position. In this way, image shaking caused by the change of the target object can be effectively avoided, and the quality and efficiency of the control of the PTZ can be further improved.
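  • as a minimal sketch of this target-switch handling (the function and variable names below are illustrative assumptions, not interfaces defined by the patent), the composition position is simply re-seeded with the new object's capture position when the followed target changes:

```python
def update_composition(prev_target_id, new_target_id, composition_pos, capture_pos):
    """If the followed target changed from the first object to the second
    object, take the second object's capture position in the image as the
    new (second updated) composition position; otherwise keep the current
    composition position. This avoids a sudden jump toward the old
    composition target."""
    if new_target_id != prev_target_id:
        return capture_pos    # second updated composition position
    return composition_pos    # target unchanged: keep composition position
```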
  • FIG. 25 is a schematic flowchart of calculating the predicted value of the current position corresponding to the collection position according to an embodiment of the present invention; on the basis of the above embodiment, with continued reference to FIG. 25, this embodiment provides an implementation manner of calculating the current position prediction value corresponding to the collection position. Specifically, the calculation in this embodiment may include:
  • Step S2501 Determine the delay time corresponding to the collection position, where the delay time is used to indicate the time required for the pan-tilt head to obtain the collection position via the image collection device.
  • in some examples, determining the delay time corresponding to the collection position may include: acquiring the exposure time corresponding to the acquired image; when the pan/tilt acquires the collection position, determining the current reception time corresponding to the collection position; and determining the time interval between the reception time and the exposure time as the delay time corresponding to the collection position.
  • specifically, the exposure time t_n corresponding to the acquired image can be recorded and stored in a preset area, so that the PTZ can acquire the exposure time t_n corresponding to the acquired image through the image acquisition device. When the image acquisition device transmits the acquisition position of the target object in the acquired image to the PTZ and the PTZ acquires the acquisition position, the current reception time t_{n+1} corresponding to the acquisition position can be determined; the delay time is then the interval t_{n+1} − t_n.
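  • a minimal sketch of this delay measurement follows; the timestamp names are illustrative assumptions (t_n is stamped by the camera at exposure, t_{n+1} by the gimbal on reception), not an interface defined by the patent:

```python
def delay_time(exposure_time_s: float, reception_time_s: float) -> float:
    """Delay corresponding to one collection position: the interval between
    the gimbal-side reception time t_{n+1} and the camera-side exposure
    time t_n, i.e. tau = t_{n+1} - t_n (seconds)."""
    return reception_time_s - exposure_time_s
```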
  • Step S2502 Based on the delay time and the collection position, determine the current position prediction value corresponding to the collection position.
  • in some examples, determining the current position prediction value corresponding to the collection position based on the delay time and the collection position may include: when the pan-tilt obtains the previous collection position of the target object in the previously acquired image, determining the previous reception time corresponding to the previous collection position; determining the previous position prediction value corresponding to the previous collection position; and calculating the current position prediction value corresponding to the collection position according to the collection position, the exposure time, the delay time, the previous reception time and the previous position prediction value.
  • when the image acquisition device acquires multiple frames of images, it can determine the multiple acquisition positions corresponding to the target object in the multiple frames of images, and when the multiple acquisition positions are transmitted to the PTZ, the PTZ can acquire the multiple acquisition positions, which may include the previous collection position and the current collection position. When the PTZ acquires the previous collection position, the previous reception time corresponding to the previous collection position can be determined, and the previous position prediction value corresponding to the previous collection position can also be determined.
  • the specific implementation manner of determining the previous position prediction value is similar to the specific implementation manner of determining the current position prediction value in the above-mentioned embodiment; for details, reference may be made to the above statement, which will not be repeated here.
  • in some examples, calculating the current position prediction value corresponding to the acquisition position based on the acquisition position, the exposure time, the delay time, the previous reception time and the previous position prediction value may include: determining, based on the acquisition position, the exposure time, the delay time, the previous reception time and the previous position prediction value, the position adjustment value corresponding to the collection position; and determining the sum of the position adjustment value and the collection position as the current position prediction value corresponding to the collection position.
  • specifically, the acquisition position, the exposure time, the delay time, the previous reception time and the previous position prediction value can be analyzed and processed to determine the position adjustment value Δx corresponding to the collection position; the sum of the position adjustment value and the collection position can then be determined as the current position prediction value corresponding to the collection position, that is, x̂_n = x_n + Δx, which effectively improves the accuracy and reliability of determining the current position prediction value corresponding to the collection position.
  • in this embodiment, by determining the delay time corresponding to the collection position and then determining the current position prediction value based on the delay time and the collection position, the current position prediction value takes the delay time into account, thus effectively ensuring the accuracy and reliability of determining the current position prediction value. In addition, when different image acquisition devices and/or different transmission interfaces are used to transmit the acquisition position, the delay time corresponding to each image acquisition device and/or transmission interface can be obtained, thereby effectively solving the prior-art problem that the delay time corresponding to data transmission differs between different image acquisition devices and/or different transmission interfaces; this realizes normalization of the algorithm and further improves the quality and efficiency of the following operation on the target object.
  • FIG. 26 is a schematic flowchart of determining the position adjustment value corresponding to the collection position based on the collection position, the exposure time, the delay time, the previous reception time and the previous position prediction value provided by an embodiment of the present invention; on the basis of the above embodiment, this embodiment provides an implementation manner of determining the position adjustment value. Specifically, the method may include:
  • Step S2601 Determine the moving speed corresponding to the target object based on the acquisition position, the previous position prediction value, the exposure time, and the previous reception time.
  • in some examples, determining the moving speed corresponding to the target object may include: acquiring the position difference between the collection position and the previous position prediction value, and the time difference between the exposure time and the previous reception time; and determining the ratio between the position difference and the time difference as the moving speed corresponding to the target object.
  • specifically, the position difference (x_n − x̂_{n−1}) between the acquisition position x_n and the previous position prediction value x̂_{n−1} can be obtained, and the time difference (t_n − t_{n−1}) between the exposure time t_n and the previous reception time t_{n−1} can be obtained; the ratio between the position difference and the time difference is then determined as the moving speed corresponding to the target object, that is, v = (x_n − x̂_{n−1}) / (t_n − t_{n−1}).
  • Step S2602 Determine the product value between the moving speed and the delay time as the position adjustment value corresponding to the collection position.
  • specifically, the product value between the moving speed and the delay time can be obtained, and this product value is determined as the position adjustment value corresponding to the collection position, that is, Δx = v × (t_{n+1} − t_n).
  • in this embodiment, the moving speed corresponding to the target object is determined based on the collection position, the previous position prediction value, the exposure time and the previous reception time, and the product value between the moving speed and the delay time is then determined as the position adjustment value corresponding to the collection position, which effectively ensures the accuracy and reliability of determining the position adjustment value and further improves the accuracy of calculating the current position prediction value corresponding to the collection position based on the position adjustment value.
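  • putting the pieces of FIG. 25 and FIG. 26 together, a compact sketch of the prediction is shown below; the formulas follow the text above, while the variable names are illustrative assumptions:

```python
def predict_current_position(x_n: float, x_hat_prev: float,
                             t_expo: float, t_prev_recv: float,
                             t_recv: float) -> float:
    """Latency-compensated position prediction.
    x_n:         collection position reported for the current frame
    x_hat_prev:  previous position prediction value
    t_expo:      exposure time t_n of the current frame
    t_prev_recv: previous reception time t_{n-1}
    t_recv:      current reception time t_{n+1}
    """
    v = (x_n - x_hat_prev) / (t_expo - t_prev_recv)  # moving speed of the target
    tau = t_recv - t_expo                            # delay time of this position
    delta_x = v * tau                                # position adjustment value
    return x_n + delta_x                             # current position prediction value
```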
  • FIG. 27 is a first schematic flowchart of determining the control parameters used for the following operation on the target object based on the predicted value of the current position according to an embodiment of the present invention. On the basis of the above embodiment, this embodiment provides an implementation manner of determining the control parameters used for the following operation on the target object. Specifically, the determination may include:
  • Step S2701 Determine the position deviation between the current position prediction value and the composition target position.
  • Step S2702 Based on the position deviation, determine the control parameters used to follow the target object.
  • the composition position is the position where the target object is expected to be continuously located in the image during the following operation on the target object. For example, the composition position can refer to the center position of the image, that is, the target object is continuously located at the center of the image, which can ensure the quality and effect of the following operation on the target object.
  • in some examples, determining the control parameters used for the following operation on the target object based on the position deviation may include: acquiring the picture field angle corresponding to the captured image; and determining, according to the picture field angle and the position deviation, the control parameters used for the following operation on the target object.
  • in some examples, obtaining the picture field angle corresponding to the captured image through the image capture device may include: obtaining, through the image capture device, focal length information corresponding to the captured image; and determining the picture field angle corresponding to the captured image according to the focal length information. After the picture field angle corresponding to the captured image is acquired, the picture field angle and the position deviation can be analyzed and processed to determine the control parameters used for the following operation on the target object.
  • the size of the control parameter is negatively related to the size of the picture field angle: when the picture field angle increases, the target object occupies a smaller part of the image, and the control parameter (for example, the rotation speed of the pan/tilt) can be decreased as the picture field angle increases; conversely, the control parameter can be increased as the picture field angle decreases.
  • in other examples, determining the control parameters used for the following operation on the target object based on the position deviation may include: acquiring the gimbal attitude corresponding to the acquisition position through an inertial measurement unit IMU disposed on the gimbal; and converting the position deviation into the geodetic coordinate system based on the gimbal attitude and the picture field angle, so as to obtain the control parameters used for the following operation on the target object, which also realizes accurate and reliable determination of the control parameters.
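  • a rough sketch of this deviation-to-control mapping is given below. The pinhole field-of-view relation and the proportional gain k_p are assumptions for illustration (the patent only states that the field angle is derived from focal length information and that the deviation is converted using the field angle and the gimbal attitude); the attitude-based conversion into the geodetic coordinate system is omitted:

```python
import math

def picture_fov_rad(focal_length_mm: float, sensor_width_mm: float) -> float:
    # Assumed pinhole relation between focal length and horizontal field angle.
    return 2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm))

def follow_speed_command(deviation_px: float, image_width_px: float,
                         fov_rad: float, k_p: float) -> float:
    # Convert the pixel deviation between the current position prediction
    # and the composition position into an angular deviation, then apply a
    # proportional gain to obtain a rotation-speed command for one axis.
    angle_error_rad = (deviation_px / image_width_px) * fov_rad
    return k_p * angle_error_rad
```

  • consistent with the negative correlation described above, k_p may additionally be scheduled to decrease as the picture field angle increases, so that the rotation speed stays moderate at wide field angles.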
  • in this embodiment, by determining the position deviation between the predicted value of the current position and the composition position, and then determining the control parameters for the following operation on the target object based on the position deviation, the accuracy and reliability of determining the control parameters are effectively ensured, which further improves the practicability of the method.
  • FIG. 28 is a second schematic flowchart of determining the control parameters used for the following operation on the target object based on the predicted value of the current position provided by an embodiment of the present invention. On the basis of the above embodiment, this embodiment provides another implementation manner of determining the control parameters used for the following operation on the target object. Specifically, the determination may include:
  • Step S2801 Acquire a follow mode corresponding to the pan/tilt head, and the follow mode includes any one of the following: a single-axis follow mode, a two-axis follow mode, and a full follow mode.
  • Step S2802 Based on the predicted value of the current position and the following mode, determine the control parameters used for the following operation of the target object.
  • in some examples, the follow mode corresponding to the gimbal may include any one of the following: a single-axis follow mode, a two-axis follow mode, and a full follow mode. It can be understood that those skilled in the art can adjust the follow mode of the PTZ based on different application scenarios and application requirements, which will not be repeated here.
  • the gimbal in different follow modes can correspond to different control parameters.
  • in the single-axis follow mode, the control parameters can correspond to a single axis of the gimbal; for example, the yaw axis can be controlled to move based on the target attitude.
  • in the two-axis follow mode, the control parameters can correspond to two axes of the gimbal; for example, the yaw axis and the pitch axis can be controlled to move based on the target attitude.
  • in the full follow mode, the control parameters can correspond to the three axes of the gimbal; for example, the yaw axis, the pitch axis and the roll axis can be controlled to move based on the target attitude.
  • in some examples, determining the control parameters for the following operation on the target object based on the predicted value of the current position and the follow mode may include: determining, based on the current position prediction value, candidate control parameters for the following operation on the target object; and determining, among the candidate control parameters, the target control parameters corresponding to the follow mode.
  • specifically, the candidate control parameters for the following operation on the target object can be determined based on the corresponding relationship between the predicted value of the current position and the control parameters. It can be understood that there may be multiple candidate control parameters; for example, when the gimbal is a three-axis gimbal, the candidate control parameters may include control parameters corresponding to the yaw axis, the pitch axis and the roll axis. After the candidate control parameters are obtained, the target control parameters corresponding to the follow mode may be determined among the candidate control parameters, where the target control parameters may be at least a part of the candidate control parameters.
  • in some examples, determining the target control parameters corresponding to the follow mode may include: when the follow mode is the single-axis follow mode, determining, among the candidate control parameters, the single-axis control parameter corresponding to the single-axis follow mode, and setting the other candidate control parameters to zero; when the follow mode is the two-axis follow mode, determining, among the candidate control parameters, the two-axis control parameters corresponding to the two-axis follow mode, and setting the other candidate control parameters to zero; and when the follow mode is the full follow mode, determining the candidate control parameters as the control parameters of the three axes corresponding to the full follow mode.
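  • the axis-masking rule above can be sketched as follows; the mode names and the axis mapping are illustrative assumptions:

```python
from typing import Dict, Set

# Axes kept active in each follow mode (assumed mapping for illustration).
FOLLOW_MODE_AXES: Dict[str, Set[str]] = {
    "single_axis": {"yaw"},
    "dual_axis":   {"yaw", "pitch"},
    "full_follow": {"yaw", "pitch", "roll"},
}

def target_control_params(candidates: Dict[str, float], mode: str) -> Dict[str, float]:
    """Keep the candidate control parameters of the axes enabled by the
    selected follow mode and set all other candidates to zero."""
    enabled = FOLLOW_MODE_AXES[mode]
    return {axis: (value if axis in enabled else 0.0)
            for axis, value in candidates.items()}

# Example: in dual-axis mode the roll candidate is zeroed.
# target_control_params({"yaw": 0.4, "pitch": 0.2, "roll": 0.1}, "dual_axis")
# -> {"yaw": 0.4, "pitch": 0.2, "roll": 0.0}
```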
  • FIG. 29 is a schematic flowchart of controlling the pan/tilt based on the motion state and control parameters of the pan/tilt according to an embodiment of the present invention; on the basis of the above embodiment, with continued reference to FIG. 29, this embodiment provides an implementation manner of controlling the pan-tilt. Specifically, controlling the pan-tilt based on the motion state and the control parameters in this embodiment may include:
  • Step S2901 Acquire duration information corresponding to the following operation on the target object.
  • the control device of the PTZ can be provided with a timer, and the timer can be used to time the duration information corresponding to the following operation on the target object; therefore, the duration information corresponding to the following operation on the target object can be obtained through the timer.
  • Step S2902 When the duration information is less than the first time threshold, update the control parameters based on the motion state of the PTZ, obtain the updated control parameters, and control the PTZ based on the updated control parameters.
  • that is, when the duration information is less than the first time threshold, the updated control parameters are used to control the PTZ.
  • in some examples, updating the control parameters based on the motion state of the PTZ to obtain the updated control parameters may include: determining, based on the motion state of the PTZ, an update coefficient corresponding to the control parameters, where the update coefficient is less than 1; and determining the product value of the update coefficient and the control parameters as the updated control parameters.
  • in some examples, determining the update coefficient corresponding to the control parameter based on the motion state of the gimbal may include: when the motion state of the gimbal is a first specific motion state (meaning that the gimbal starts to follow the target object), for example uniformly accelerated motion, determining the update coefficient corresponding to the control parameter based on the ratio between the duration information and the first time threshold, where the update coefficient at this time is less than 1.
  • after the update coefficient is obtained, the product value of the update coefficient and the control parameter can be determined as the updated control parameter; that is, when the duration information t < the first time threshold T, the updated control parameter can be determined as E_n' = (t / T) × E_n, where E_n is the control parameter and E_n' is the updated control parameter. The start time of the duration information t is the time when the following operation on the target object starts.
  • that is, when the gimbal starts to follow a target object and obtains the control parameters used to follow the target object, in order to prevent the gimbal from suddenly following the target object, when the duration information is less than the first time threshold, the updated control parameters corresponding to the control parameters can be obtained; the updated control parameters are transition control parameters from 0 to the control parameters. In other words, when the duration information is less than the first time threshold, the gimbal is controlled based on the updated control parameters, thereby realizing a slow start operation: the gimbal can be controlled to adjust slowly up to the control parameters, thereby ensuring the quality and effect of the following operation on the target object.
  • in other examples, when the motion state of the gimbal is a second specific motion state (meaning that the gimbal starts to end following the target object), the update coefficient corresponding to the control parameter can likewise be determined based on the ratio between the duration information and the first time threshold, and the update coefficient at this time is also less than 1.
  • after the update coefficient is obtained, the product value of the update coefficient and the control parameter can be determined as the updated control parameter; that is, when the duration information t < the first time threshold T, the updated control parameter can be determined as E_n' = (1 − t / T) × E_n, where E_n is the control parameter and E_n' is the updated control parameter.
  • the start time of the duration information t is the time when the gimbal starts to stop following the target object.
  • that is, when the gimbal starts to stop the following operation on a target object and obtains the control parameters for stopping the following operation, in order to prevent the gimbal from suddenly stopping following the target object, when the duration information is less than the first time threshold, an updated control parameter corresponding to the control parameter can be obtained; the updated control parameter is a transition control parameter from the control parameter to 0. In other words, when the duration information is less than the first time threshold, the gimbal is controlled based on the updated control parameters, thereby realizing a slow stop operation: the gimbal can be controlled to adjust slowly down to 0, thereby ensuring the quality and effect of the stop-following operation on the target object.
  • Step S2903 When the duration information is greater than or equal to the first time threshold, use the control parameter to control the pan-tilt head.
  • when the duration information is greater than or equal to the first time threshold, the control parameters are directly used to control the PTZ; that is, when the duration information t ≥ the first time threshold T, the updated control parameters are the same as the control parameters, and the gimbal can then be controlled using the control parameters. Here the start time of the duration information t is the time when the following operation on the target object starts.
  • in other examples, when the duration information is greater than or equal to the first time threshold, the control parameter may be configured as 0; in this case, the start time of the duration information t is the time when it is determined that the target object is lost.
  • in this embodiment, when the duration information is less than the first time threshold, the control parameters are updated based on the motion state of the PTZ to obtain the updated control parameters, and the PTZ is controlled based on the updated control parameters; when the duration information is greater than or equal to the first time threshold, the control parameters are used to control the PTZ. This effectively implements the slow start strategy for controlling the PTZ, which further ensures the quality and efficiency of the following operation on the target object.
  • the effect of using the slow stop strategy to control the pan/tilt is similar, and details are not described here.
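  • the slow start and slow stop ramps can be sketched together as below; the linear formulas E_n' = (t/T)·E_n and E_n' = (1 − t/T)·E_n are reconstructed from the text above, and the function name is an illustrative assumption:

```python
def ramped_control_parameter(e_n: float, t: float, T: float,
                             stopping: bool = False) -> float:
    """Slow start / slow stop of the follow control parameter.
    Starting: ramp from 0 up to e_n over the first time threshold T.
    Stopping: ramp from e_n down to 0 over T.
    Once t >= T, use the plain parameter (or 0 when stopping)."""
    if t >= T:
        return 0.0 if stopping else e_n
    coeff = t / T                 # update coefficient, always < 1 here
    return (1.0 - coeff) * e_n if stopping else coeff * e_n
```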
  • FIG. 30 is a first schematic flowchart of controlling the pan/tilt according to the control parameters according to an embodiment of the present invention; on the basis of any of the above embodiments, with continued reference to FIG. 30, this embodiment provides an implementation manner of controlling the pan-tilt according to the control parameters. Specifically, the control in this embodiment may include:
  • Step S3001 Acquire a follow state corresponding to the target object.
  • when the target object is followed, the target object may correspond to different following states. In some examples, the following states corresponding to the target object may include at least one of the following: a keep-following state and a lost state. It can be understood that when the target object corresponds to different following states, different control parameters can be used to control the pan-tilt, so as to ensure the safety and reliability of the control of the pan-tilt.
  • this embodiment does not limit the specific implementation manner of acquiring the following state corresponding to the target object; those skilled in the art can obtain the following state corresponding to the target object according to specific application requirements and design requirements. Specifically, when there is a target object in the image collected by the image acquisition device, it can be determined that the following state corresponding to the target object is the keep-following state; when there is no target object in the image collected by the image acquisition device, it can be determined that the following state corresponding to the target object is the lost state.
  • in some examples, acquiring the following state corresponding to the target object may include: detecting whether the target object of the following operation changes; when the target object is changed from the first object to the second object, it may be determined that the first object is in the lost state.
  • Step S3002 Control the pan-tilt head based on the following state and control parameters.
  • in some examples, controlling the PTZ based on the following state and the control parameters may include: when the target object is in the lost state, acquiring the lost duration information corresponding to the process of following the target object; updating the control parameters according to the lost duration information to obtain the updated control parameters; and controlling the PTZ based on the updated control parameters.
  • the lost duration information corresponding to the following operation of the target object can be obtained through a timer, and then the control parameters can be updated according to the lost duration information to obtain the updated control parameters.
  • in some examples, updating the control parameters according to the lost duration information to obtain the updated control parameters may include: when the lost duration information is greater than or equal to the second time threshold, updating the control parameters to zero; and when the lost duration information is less than the second time threshold, obtaining the ratio between the lost duration information and the second time threshold, determining the difference between 1 and the ratio as the update coefficient corresponding to the control parameters, and determining the product value of the update coefficient and the control parameters as the updated control parameters.
  • after the lost duration information is acquired, it can be analyzed and compared with the second time threshold. When the lost duration information is greater than or equal to the second time threshold, it means that the target object has been in the lost state for a long time, and the control parameters can be updated to zero; that is, when the lost duration information t ≥ the second time threshold T, the control parameters are updated to zero.
  • when the lost duration information is less than the second time threshold, it means that the target object has been in the lost state for only a short time; the ratio between the lost duration information and the second time threshold can then be obtained, the difference between 1 and the ratio can be determined as the update coefficient corresponding to the control parameters, and the product value of the update coefficient and the control parameters can be determined as the updated control parameters; that is, when the lost duration information t < the second time threshold T, the updated control parameter is E_n' = (1 − t / T) × E_n.
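  • a minimal sketch of this lost-state fade-out follows (the names are illustrative; the formula matches the coefficient 1 − t/T described above):

```python
def lost_state_parameter(e_n: float, lost_t: float, T2: float) -> float:
    """Fade the control parameter while the target is lost: scale it by
    (1 - lost_t / T2) while lost_t < T2, and hold it at zero once the
    lost duration reaches the second time threshold T2."""
    if lost_t >= T2:
        return 0.0
    return (1.0 - lost_t / T2) * e_n
```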
  • in this embodiment, the following state corresponding to the target object is obtained, and the pan-tilt is then controlled based on the following state and the control parameters, thereby effectively ensuring the accuracy and reliability of the control of the pan-tilt.
  • FIG. 31 is a second schematic flowchart of controlling the pan/tilt according to the control parameters according to an embodiment of the present invention; on the basis of any of the above embodiments, with continued reference to FIG. 31, this embodiment provides another implementation manner of controlling the pan-tilt. Specifically, the control according to the control parameters in this embodiment may include:
  • Step S3101 Obtain the object type of the target object.
  • Step S3102 Control the PTZ according to the object type and control parameters.
  • when the pan/tilt is used to follow the target object, the target object may correspond to different object types, and the above-mentioned object types may include one of the following: a stationary object, a high-height moving object, a low-height moving object, and so on. In order to ensure the quality of the following operation for different target objects, the PTZ can be controlled according to the object type and the control parameters when the following operation is performed on different target objects.
  • controlling the pan-tilt according to the object type and the control parameters may include: adjusting the control parameters according to the object type to obtain the adjusted parameters; and controlling the pan-tilt based on the adjusted parameters.
  • in some examples, adjusting the control parameters according to the object type to obtain the adjusted parameters may include: when the target object is a stationary object, reducing the control bandwidth corresponding to the gimbal in the yaw direction and the control bandwidth corresponding to the gimbal in the pitch direction; when the target object is a moving object and the height of the moving object is greater than or equal to the height threshold, increasing the control bandwidth of the gimbal in the yaw direction and reducing the control bandwidth of the gimbal in the pitch direction; and when the target object is a moving object and the height of the moving object is less than the height threshold, increasing the control bandwidth corresponding to the gimbal in the yaw direction and the control bandwidth corresponding to the gimbal in the pitch direction.
  • specifically, when the target object is a stationary object, the control bandwidth corresponding to the yaw direction (yaw axis direction) of the gimbal and the control bandwidth corresponding to the pitch direction (pitch axis direction) of the gimbal can be reduced, which reduces the pan following performance and the pitch following performance. When the target object is a moving object and the height of the moving object is greater than or equal to the height threshold, the control bandwidth of the gimbal in the yaw direction can be increased and the control bandwidth in the pitch direction can be reduced, which improves the pan following performance of the gimbal. When the target object is a moving object and the height of the moving object is less than the height threshold, the control bandwidth corresponding to the yaw direction (yaw axis direction) of the gimbal and the control bandwidth corresponding to the pitch direction (pitch axis direction) of the gimbal can be increased, which improves the pan following performance and the pitch following performance.
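  • the bandwidth scheduling above can be sketched as follows; the patent only gives the direction of each adjustment, so the scaling factor and names below are illustrative assumptions:

```python
def adjust_control_bandwidth(yaw_bw: float, pitch_bw: float,
                             is_moving: bool, height: float,
                             height_threshold: float,
                             scale: float = 2.0) -> tuple:
    """Return (yaw_bandwidth, pitch_bandwidth) adjusted by object type."""
    if not is_moving:
        # stationary object: relax both axes
        return yaw_bw / scale, pitch_bw / scale
    if height >= height_threshold:
        # tall moving object: follow aggressively in yaw, gently in pitch
        return yaw_bw * scale, pitch_bw / scale
    # low moving object: follow aggressively in both yaw and pitch
    return yaw_bw * scale, pitch_bw * scale
```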
  • FIG. 32 is a schematic flowchart of another pan/tilt control method provided by an embodiment of the present invention; on the basis of any of the above-mentioned embodiments, with continued reference to FIG. 32 , the method in this embodiment may further include:
  • Step S3201 Acquire an execution operation input by a user with respect to the image capturing apparatus through a display interface.
  • Step S3202 Control the image capture device according to the execution operation, so that the image capture device determines the capture position.
  • in this embodiment, a display interface that the user can use for interactive operation is preset. Specifically, the display interface can be the display interface on a control device of the PTZ (such as a remote control of the PTZ, a mobile phone, a tablet, a wearable device, or a display device integrated on the handle of the PTZ), or the display interface may be the display interface on the image acquisition device.
  • the execution operation input by the user with respect to the image acquisition device can be obtained through the display interface (for example, the target object is determined by operations such as clicking, box selection, inputting features, or inputting coordinates), and the image acquisition device can then be controlled according to the execution operation, so that the image acquisition device can determine the capture position of the target object in the captured image based on the execution operation.
  • in some examples, the control device of the PTZ may be provided with an application APP for controlling the image acquisition device; by operating the control device of the PTZ, the above-mentioned APP is started, and a display interface for controlling the image acquisition device can be displayed. The execution operation input by the user with respect to the image acquisition device can be obtained through the display interface, and the image acquisition device can then be controlled according to the execution operation so that the image acquisition device determines the acquisition position; in this way, the user can control the image acquisition device through the control device of the PTZ.
  • in other examples, when the display interface is the display interface on the image capture device, the execution operation input by the user with respect to the image capture device can be obtained through the display interface, and the image capture device can then be controlled according to the execution operation so that the image capture device determines the acquisition position; in this way, the user can control the image acquisition device directly through the image acquisition device.
  • in this embodiment, the execution operation input by the user with respect to the image capture device is obtained through the display interface, and the image capture device is then controlled according to the execution operation so that the image capture device determines the capture position, thereby effectively realizing control of the image capture device and further improving the quality and effect of following the target object.
  • Fig. 33 is a schematic flowchart of another pan/tilt control method provided by an embodiment of the present invention; on the basis of any of the above-mentioned embodiments, with continued reference to Fig. 33, the method in this embodiment may further include:
  • Step S3301 Obtain distance information corresponding to the target object through a ranging sensor provided on the image acquisition device.
  • Step S3302 Send the distance information to the image acquisition device, so that the image acquisition device determines the acquisition position of the target object in the acquired image in combination with the distance information.
  • when the image acquisition device acquires the capture position of the target object in the captured image, in order to improve the accuracy of determining the capture position, the image acquisition device may be provided with a ranging sensor, and the ranging sensor can be communicatively connected with the image acquisition device through the PTZ.
  • the distance information corresponding to the target object can be obtained through the ranging sensor arranged on the image acquisition device, and the distance information can then be sent to the image acquisition device. After the image acquisition device obtains the distance information, the capture position of the target object in the captured image can be determined in combination with the distance information, which effectively improves the accuracy and reliability of determining the capture position of the target object in the captured image; that is, the image acquisition device fuses or calibrates the capture position obtained based on image recognition with the position obtained from the distance information.
  • FIG. 34 is a schematic flowchart of another pan/tilt control method provided by an embodiment of the present invention; on the basis of any of the above-mentioned embodiments, with continued reference to FIG. 34 , the method in this embodiment may further include:
  • Step S3401 Determine a working mode corresponding to the image acquisition device, where the working mode includes one of the following: a follow-first-then-focus mode and a focus-first-then-follow mode;
  • Step S3402 Use the working mode to control the image capture device.
  • when the following operation is performed based on the image acquisition device, the image acquisition device may correspond to different working modes, and the working modes may include: the follow-first-then-focus mode and the focus-first-then-follow mode. The follow-first-then-focus mode means that when the image acquisition device needs to perform both a follow operation and a focus operation, the image acquisition device performs the follow operation first and then performs the focus operation; the focus-first-then-follow mode means that the image acquisition device performs the focus operation first and then performs the follow operation.
  • that is, when controlling the pan-tilt and the image acquisition device to perform the follow operation, after the captured image is obtained through the image acquisition device, it must be decided whether to perform the follow operation based on the captured image first, or to perform the focus operation on the target object in the captured image first. When the working mode of the image capture device is the follow-first-then-focus mode, the follow operation can be performed preferentially based on the captured image, and the focus operation can then be performed on the target object that has undergone the composition following operation; when the working mode of the image capture device is the focus-first-then-follow mode, the focus operation can be performed on the target object in the captured image first, and the follow operation can then be performed on the target object subjected to the focus operation.
  • in some examples, an operation interface/operation control for controlling the image capture device is preset, and the working mode of the image capture device can be configured/selected through the operation interface/operation control. After the working mode of the image capture device is configured, the working mode corresponding to the image capture device can be determined through the working mode identifier, so that the selection, adjustment or configuration of the working mode of the image capture device can be realized simply and quickly.
  • after the working mode is determined, the working mode can be used to control the image capture device, thereby effectively enabling the image capture device to meet the requirements of different application scenarios and further improving the flexibility and reliability of controlling the image capture device.
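  • the two working modes amount to a different ordering of the same two operations; a minimal dispatch sketch follows, where follow_step() and focus_step() are hypothetical stand-ins for the follow and focus operations:

```python
def follow_step(frame):
    """Hypothetical follow/composition operation; returns the followed target."""
    return frame

def focus_step(frame):
    """Hypothetical focus operation; returns the focused target."""
    return frame

def process_frame(frame, working_mode: str):
    # Order the follow and focus operations according to the working mode.
    if working_mode == "follow_then_focus":
        focus_step(follow_step(frame))
    elif working_mode == "focus_then_follow":
        follow_step(focus_step(frame))
    else:
        raise ValueError(f"unknown working mode: {working_mode}")
```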
  • FIG. 35 is a schematic flowchart of a control method of a pan-tilt system provided by an embodiment of the present invention; with reference to FIG. 35, this embodiment provides a control method of a pan-tilt system, wherein the pan-tilt system includes: a pan-tilt and an image acquisition device communicatively connected to the pan-tilt, and the image acquisition device may be a camera with a manual lens or an automatic lens.
  • in some examples, the image capture device may be integrated on the pan/tilt; in this case, the pan/tilt and the image capture device disposed on it may be sold or maintained as a whole. In other examples, the image capture device may be separately installed on the pan/tilt; in this case, the image capture device and the pan/tilt may be sold or maintained separately.
  • the image acquisition device refers to a device with image acquisition capability and image processing capability, such as cameras, video cameras, other devices with image acquisition capabilities, and the like.
  • in some examples, the PTZ is provided with a universal serial bus USB interface, and the USB interface is used for a wired communication connection with the image acquisition device; that is, the PTZ is connected to the image acquisition device through the USB interface. When data is transmitted between the USB interface and the image acquisition device, the delay time corresponding to the transmission is relatively short.
  • it can be understood that the communication connection mode between the PTZ and the image acquisition device is not limited to the above-described implementation; those skilled in the art can also set it according to specific application requirements and application scenarios, for example wireless communication, as long as it can be ensured that the delay time corresponding to data transmission between the PTZ and the image acquisition device is relatively short, which is not repeated here.
  • in addition, the execution body of the control method of the pan-tilt system may be the control device of the pan-tilt system; it can be understood that the control device may be implemented as software or a combination of software and hardware. The control device can be set on the PTZ or on the image acquisition device; when the control device of the PTZ system is set on the image acquisition device, the PTZ and the image acquisition device can be an integrated product. When the control device executes the control method of the pan-tilt system, it can solve the problem of a poor follow effect caused by the long delay time generated when an image processing device on the pan-tilt side acquires the capture position, thereby ensuring the quality and effect of the following operation on the target object. Specifically, the method may include:
  • Step S3501 Control the image acquisition device to acquire images, and acquire the acquisition position of the target object in the acquired image, where the acquisition position is determined by the image acquisition device.
  • Step S3502 Control the image acquisition device to transmit the acquisition position to the PTZ.
  • Step S3503 Control the PTZ to move according to the control parameters, so as to realize the following operation of the target object, wherein the control parameters are determined based on the collection position.
  • Step S3501 Control the image acquisition device to acquire images, and acquire the acquisition position of the target object in the image, where the acquisition position is determined by the image acquisition device.
  • the image capture device can be controlled to perform an image capture operation according to the follow requirement. After the image capture device acquires the image, it can analyze and process the image to determine the capture position of the target object in the image. Specifically, the capture position of the target object in the image may include: the position of the key point corresponding to the target object in the image, or the coverage area corresponding to the target object in the image, and so on.
  • Step S3502 Control the image acquisition device to transmit the acquisition position to the PTZ.
  • after the image acquisition device determines the capture position of the target object in the captured image, the capture position can be actively or passively transmitted to the PTZ through the USB interface, so that the PTZ can obtain the capture position of the target object in the image.
  • Step S3503 Control the PTZ to move according to the control parameters, so as to realize the following operation of the target object, wherein the control parameters are determined based on the collection position.
  • after the PTZ obtains the capture position, the capture position can be analyzed and processed to determine the control parameters used to control the PTZ, and the PTZ can then be controlled to move according to the control parameters, so as to realize the following operation on the target object.
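  • steps S3501 to S3503 together form a simple per-frame loop; the sketch below is an illustration of that flow, and the camera/gimbal objects and their methods are hypothetical interfaces, not APIs defined by the patent:

```python
import time

def follow_loop(camera, gimbal, should_stop):
    """Per-frame follow loop: the camera detects the target's capture
    position on-board (S3501), the position is transmitted to the gimbal,
    e.g. over USB (S3502), and the gimbal derives control parameters from
    it and moves to follow the target (S3503)."""
    while not should_stop():
        position = camera.detect_target()          # S3501: on-camera detection
        if position is None:
            continue                               # no target in this frame
        gimbal.receive_position(position)          # S3502: transmit to gimbal
        params = gimbal.compute_control(position)  # parameters from capture position
        gimbal.move(params)                        # S3503: perform the follow motion
        time.sleep(0.01)                           # pace the loop (illustrative)
```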
  • it should be noted that the method in this embodiment may also include the methods of the above-mentioned embodiments shown in FIG. 2 to FIG. 34; for the parts not described in detail in this embodiment, and for the execution process and technical effects of the technical solution, reference may be made to the relevant descriptions of the embodiments shown in FIG. 2 to FIG. 34, which will not be repeated here.
  • in this embodiment, the image acquisition device is controlled to acquire an image, the capture position of the target object in the image is obtained, the capture position is then transmitted to the pan-tilt, and the pan-tilt is controlled to move according to the control parameters, wherein the control parameters are determined based on the capture position, so that the following operation on the target object can be realized. Since the capture position is determined by the image acquisition device, the PTZ can directly acquire the capture position through the image acquisition device, which effectively reduces the delay time corresponding to acquiring the capture position; this solves the problem of a poor follow effect caused by a long delay, further ensures the quality and effect of the following operation on the target object, and effectively improves the stability and reliability of the method.
  • FIG. 36 is a schematic flowchart of another pan/tilt control method provided by an embodiment of the present invention; with reference to FIG. 36 , this embodiment provides another pan/tilt control method, which is suitable for a pan/tilt.
  • the PTZ is communicatively connected with the image acquisition device.
  • the execution subject of the PTZ control method may be a PTZ control device, and it can be understood that the control device can be implemented as software or a combination of software and hardware. Specifically, the method may include:
  • Step S3601 Acquire a captured image, where the captured image includes the target object.
  • Step S3602 Determine the position of the target object in the captured image, so as to perform a following operation on the target object based on the position of the target object.
  • Step S3603 Send the position of the target object to the image capture device, so that the image capture device determines a focus position corresponding to the target object based on the position of the target object, and performs a focus operation on the target object based on the focus position.
  • Step S3601 Acquire a captured image, where the captured image includes the target object.
  • specifically, an image acquisition device is communicatively connected to the PTZ, and the above-mentioned image acquisition device can perform an image capture operation for the target object, so that the captured image can be obtained; after the captured image is obtained by the image acquisition device, it can be actively or passively transmitted to the gimbal, so that the gimbal can stably obtain the captured image.
  • Step S3602 Determine the position of the target object in the captured image, so as to perform a following operation on the target object based on the position of the target object.
  • after the captured image is acquired, it can be analyzed and processed to determine the position of the target object, and the acquired position of the target object is used to implement the following operation on the target object.
  • in some examples, the captured image can be displayed through the display interface, the user can then input an execution operation on the captured image through the display interface, and the position of the target object can be determined according to the execution operation; that is, the user can perform a box selection operation on the target object included in the captured image, so that the position of the target object can be determined.
  • a preset image processing algorithm may be used to automatically analyze and process the captured image to determine the position of the target object.
  • Step S3603 Send the position of the target object to the image capture device, so that the image capture device determines a focus position corresponding to the target object based on the position of the target object, and performs a focus operation on the target object based on the focus position.
  • after the position of the target object is determined, it can be sent to the image acquisition device. After the image acquisition device acquires the position of the target object, the focus position corresponding to the target object can be determined based on the position of the target object, so that when the target object is followed, the following position used to follow the target object is the same as the focus position corresponding to the target object; this effectively avoids the situation that the target object is out of focus due to inconsistency between the focus position and the following position, and further improves the quality and effect of the following operation on the target object.
  • it should be noted that the method in this embodiment may also include the methods of the above-mentioned embodiments shown in FIG. 2 to FIG. 34; for the parts not described in detail in this embodiment, and for the execution process and technical effects of the technical solution, reference may be made to the relevant descriptions of the embodiments shown in FIG. 2 to FIG. 34, which will not be repeated here.
  • in this embodiment, the captured image is acquired, the position of the target object is determined in the captured image so as to follow the target object based on the position of the target object, and the position of the target object is then sent to the image acquisition device, so that the image acquisition device determines the focus position corresponding to the target object based on the position of the target object and performs a focus operation on the target object based on the focus position. This effectively ensures that, when the following operation is performed on the target object, the following position used for the following operation is the same as the focus position corresponding to the target object, which avoids the situation that the target object is defocused due to inconsistency between the focus position and the following position, thereby effectively improving the quality and effect of the following operation on the target object and further improving the stability and reliability of the method.
  • FIG. 37 is a schematic flowchart of another control method of a pan-tilt system provided by an embodiment of the present invention. With reference to FIG. 37, this embodiment provides a control method of a pan-tilt system, wherein the pan-tilt system includes: a PTZ and an image capture device communicatively connected to the PTZ.
  • the image capture device can be integrated on the PTZ.
  • the PTZ and the image capture device installed on the PTZ can be sold or maintained as a whole.
  • the image capturing device may be separately installed on the pan/tilt, and in this case, sales or maintenance operations may be performed separately between the image capturing device and the pan/tilt.
  • the image acquisition device refers to a device with image acquisition capability and image processing capability, such as cameras, video cameras, other devices with image acquisition capabilities, and the like.
  • in some examples, the PTZ is provided with a universal serial bus USB interface, and the USB interface is used for a wired communication connection with the image acquisition device; that is, the PTZ is connected to the image acquisition device through the USB interface. When the position data of the object to be followed is transmitted between the USB interface and the image acquisition device, since no additional image processing device is required on the PTZ side, the delay time corresponding to the transmission of the position data between the PTZ and the image acquisition device is relatively short.
  • it can be understood that the communication connection mode between the PTZ and the image acquisition device is not limited to the above-described implementation; those skilled in the art can also set it according to specific application requirements and application scenarios, for example wireless communication, as long as it can be ensured that the delay time corresponding to data transmission between the PTZ and the image acquisition device is relatively short, which is not repeated here.
  • in addition, the execution body of the control method of the pan-tilt system may be the control device of the pan-tilt system; it can be understood that the control device may be implemented as software or a combination of software and hardware, and the control device can be set on the PTZ or on the image acquisition device. When the control device executes the control method of the pan-tilt system, it can solve the problem of a poor follow effect caused by the long delay in transmitting data through the interface, so as to ensure the quality and effect of the following operation on the target object. Specifically, the method may include:
  • Step S3701 Control the image acquisition device to acquire an image, and the image includes the target object.
  • Step S3702 Determine the position of the target object in the image.
  • Step S3703 control the pan-tilt head to follow the target object based on the position of the target object, and control the image acquisition device to focus on the target object according to the position of the target object.
  • Step S3701 Control the image acquisition device to acquire an image, and the image includes the target object.
  • specifically, the image acquisition device can be controlled to perform an image capture operation according to the follow requirement; after the image acquisition device acquires the image, the image can be actively or passively transmitted to the PTZ, so that the PTZ can obtain the image.
  • Step S3702 Determine the position of the target object in the image.
  • the image can be analyzed and processed to determine the acquisition position of the target object in the image.
  • the collection position of the target object in the image may include: the position of the key point corresponding to the target object in the image, or the coverage area corresponding to the target object in the image, and so on.
  • the determination of the position of the target object in the captured image can also be implemented by an image capturing device.
  • Step S3703 control the pan-tilt head to follow the target object based on the position of the target object, and control the image acquisition device to focus on the target object according to the position of the target object.
  • after the position of the target object is determined, the PTZ can be controlled to follow the target object based on the position of the target object, and the focus position corresponding to the target object can also be determined based on the position of the target object. In this way, when the target object is followed, the following position used to follow the target object is the same as the focus position corresponding to the target object, which effectively avoids the situation that the target object is out of focus due to inconsistency between the focus position and the following position, and further improves the quality and effect of the following operation on the target object.
  • it should be noted that the method in this embodiment may also include the methods of the above-mentioned embodiments shown in FIG. 2 to FIG. 34; for the parts not described in detail in this embodiment, and for the execution process and technical effects of the technical solution, reference may be made to the relevant descriptions of the embodiments shown in FIG. 2 to FIG. 34, which will not be repeated here.
  • In the technical solution provided by this embodiment, the image acquisition device is controlled to acquire an image, the position of the target object is determined in the image, the pan-tilt is then controlled to follow the target object based on the position of the target object, and the image acquisition device is controlled to focus on the target object according to that same position. This effectively ensures that, when the target object is followed, the following position used to follow the target object is the same as the focus position corresponding to the target object, avoiding the situation in which the target object goes out of focus because the focus position is inconsistent with the following position, thereby effectively improving the quality and effect of the following operation on the target object and further improving the stability and reliability of the method.
  • FIG. 38 is a schematic flowchart of another control method of a pan-tilt system provided by an embodiment of the present invention; with reference to FIG. 38, the present embodiment provides another control method of a pan-tilt system, wherein the pan-tilt system includes a PTZ and an image acquisition device communicatively connected with the PTZ.
  • The execution subject of the control method of the PTZ system can be the control device of the PTZ system. It can be understood that the control device of the PTZ system may be implemented as software, or as a combination of software and hardware; in addition, the control device of the PTZ system can be set on the PTZ or on the image acquisition device. When the control device of the PTZ system is set on the image acquisition device, the PTZ and the image acquisition device can be integrated.
  • the method in this embodiment may further include:
  • Step S3801 Acquire the acquisition position of the first object in the acquired image acquired by the image acquisition device, wherein the acquisition position of the first object is used for the pan-tilt to follow the first object, and for the image acquisition device to perform a focus operation on the first object.
  • Step S3802 When the first object is changed to the second object, acquire the acquisition position of the second object in the image captured by the image acquisition device, so that the pan-tilt changes from the following operation on the first object to a following operation on the second object based on the acquisition position of the second object, and the image acquisition device changes from a focus operation on the first object to a focus operation on the second object based on the position of the second object.
  • Step S3801 Acquire the acquisition position of the first object in the acquired image acquired by the image acquisition device, wherein the acquisition position of the first object is used for the pan-tilt to follow the first object, and for the image acquisition device to perform a focus operation on the first object.
  • an image acquisition operation may be performed for the first object through the image acquisition device, so that an acquired image including the first object can be obtained.
  • Specifically, the acquired image can be analyzed and processed to determine the acquisition position of the first object in the acquired image. The determined acquisition position of the first object in the acquired image is used for the PTZ to perform a following operation on the first object, and in addition it is used for the image acquisition device to perform a focus operation on the first object.
  • It should be noted that the execution subject for determining the acquisition position of the first object in the acquired image may be the image acquisition device or the pan-tilt.
  • Step S3802 When the first object is changed to the second object, acquire the acquisition position of the second object in the image captured by the image acquisition device, so that the pan-tilt changes from the following operation on the first object to a following operation on the second object based on the acquisition position of the second object, and the image acquisition device changes from a focus operation on the first object to a focus operation on the second object based on the position of the second object.
  • When the following operation is performed on the first object, the followed object may be changed, that is, the first object may be changed into the second object.
  • At this time, the acquisition position of the second object in the acquired image can be acquired, and the pan-tilt can then be controlled based on this acquisition position, thereby effectively changing the pan-tilt from the following operation on the first object to a following operation on the second object based on the acquisition position of the second object.
  • the acquired acquisition position of the second object in the acquired image can also be used for the image acquisition device to perform a focusing operation.
  • Specifically, the image acquisition device can change from a focusing operation on the first object to focusing on the second object based on the position of the second object, so that when the second object is followed, the following position used to follow the second object is the same as the focus position corresponding to the second object. This effectively avoids the second object going out of focus due to an inconsistency between the focus position and the following position, and further improves the quality and effect of the following operation on the second object.
  • The implementation of acquiring the acquisition position of the second object in the acquired image is similar to the above-mentioned implementation of acquiring the acquisition position of the first object in the acquired image, and is not repeated here.
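  • A minimal sketch of the switching behaviour of step S3802, again with hypothetical interfaces: once the acquisition position of the second object is available, the same value redirects both the following operation and the focusing operation, so the two sides stay consistent:

    def on_followed_object_changed(camera, gimbal, second_object_position):
        # The acquisition position of the second object replaces that of the
        # first object for both sides at once: the gimbal switches its follow
        # operation and the camera switches its focus operation.
        gimbal.follow(second_object_position)
        camera.focus_at(second_object_position)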
  • It should be noted that the method in this embodiment may also include the methods of the embodiments shown in FIG. 2 to FIG. 34. For the parts not described in detail in this embodiment, and for the execution process and technical effects of this technical solution, refer to the descriptions of the embodiments shown in FIG. 2 to FIG. 34, which are not repeated here.
  • It should be noted that the above method can also update the corresponding objects on the image acquisition device side and on the pan-tilt side respectively. For example, when the user changes the object of the focus operation from the first object to the second object through a touch operation, it can be seen on the display screen on the image acquisition device side that the object of the focus operation also changes from the first object to the second object. Of course, the specific operation means for changing the object is not limited to the touch operation described above.
  • In the technical solution provided by this embodiment, when the followed object changes from the first object to the second object, the acquisition position of the second object in the image acquired by the image acquisition device can be obtained, so that the pan-tilt changes from the following operation on the first object to a following operation on the second object based on the acquisition position of the second object, and the image acquisition device changes from a focus operation on the first object to focusing on the second object based on the position of the second object. This effectively ensures that, when the followed object changes from the first object to the second object, the following operation can still be performed on the second object. By ensuring that the following position used for the second object is the same as the focus position corresponding to the second object, the out-of-focus situation caused by an inconsistency between the focus position and the following position is effectively avoided, thereby improving the quality and effect of the following operation on the second object and further improving the stability and reliability of the method.
  • the present invention provides an intelligent following method based on a camera (which can be a third-party camera installed on the gimbal, or a camera integrated on the gimbal).
  • The execution body of the method can include the camera and the gimbal.
  • the method in this embodiment includes the following steps:
  • Step 1 Camera plane deviation prediction.
  • The camera exposure timestamp of the current image frame is obtained directly from the camera and can be denoted t_n. The camera sends the detection information of the current image frame (which may include the coordinate information of the target object in the image frame) to the PTZ, and the timestamp at which the PTZ receives this detection information is t_(n+1). Correspondingly, the timestamp at which the PTZ received the detection information of the previous image frame is t_(n-1).
  • Since there is a deviation between the time at which the camera obtains the position of the target object in the current image frame and the time at which the PTZ receives the above detection information, in order to ensure the quality and effect of the following operation on the target object, the influence of the link delay on the intelligent following operation has to be considered. Specifically, the following steps can be performed:
  • Step 1.1 Obtain the link delay corresponding to the communication link formed by the camera and the gimbal; for example, the link delay can be determined as Δt = t_(n+1) − t_n, i.e. the interval between the current reception time and the exposure time.
  • Step 1.2 Based on the current image frame, obtain the acquisition position of the target object in the current image frame.
  • The camera may analyze and process the current image frame to determine the acquisition position of the target object in the current image frame as (x_n, y_n).
  • Step 1.3 Based on the acquisition position of the target object in the current image frame, determine the current position prediction value corresponding to the acquisition position. Specifically, this can be achieved based on the following relationship (reconstructed from the variable definitions below and from the position-prediction computation described later for the control device):

    x̂_n = x_n + Δt · (x_n − x̂_(n−1)) / (t_n − t_(n−1))
    ŷ_n = y_n + Δt · (y_n − ŷ_(n−1)) / (t_n − t_(n−1))

  • where Δt is the link delay corresponding to the communication link formed by the camera and the gimbal, t_n is the camera exposure timestamp corresponding to the current image frame, t_(n-1) is the timestamp at which the PTZ received the detection information of the previous image frame, and (x̂_(n−1), ŷ_(n−1)) is the previous position prediction value.
  • Step 1.4 Based on the current position prediction value corresponding to the acquisition position, determine the camera plane deviation.
  • The camera plane deviation is the normalized coordinate value deviation, denoted e_x and e_y. After the composition target is obtained, which can be denoted (tgt_x, tgt_y), the camera plane deviation is determined based on the composition target and the current position prediction value; it can be obtained based on the following relationship (sign convention reconstructed from these definitions):

    e_x = x̂_n − tgt_x
    e_y = ŷ_n − tgt_y
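  • The delay compensation and deviation computation of steps 1.1 to 1.4 can be sketched in Python as follows (illustrative names; the linear extrapolation follows the reconstructed formulas above):

    def predict_and_deviation(pos, prev_pred, t_exposure, t_prev_recv,
                              link_delay, composition_target):
        # pos:                (x_n, y_n), target position in the current frame
        # prev_pred:          previous position prediction value
        # t_exposure:         exposure timestamp t_n of the current frame
        # t_prev_recv:        timestamp t_(n-1) at which the gimbal received
        #                     the previous frame's detection information
        # link_delay:         link delay of the camera-gimbal communication link
        # composition_target: desired position (tgt_x, tgt_y) in the frame
        dt = t_exposure - t_prev_recv
        # Step 1.3: estimate the target's speed on the image plane and
        # extrapolate over the link delay to obtain the current prediction.
        vx = (pos[0] - prev_pred[0]) / dt
        vy = (pos[1] - prev_pred[1]) / dt
        pred = (pos[0] + vx * link_delay, pos[1] + vy * link_delay)
        # Step 1.4: camera-plane deviation relative to the composition target.
        e_x = pred[0] - composition_target[0]
        e_y = pred[1] - composition_target[1]
        return pred, (e_x, e_y)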
  • Step 2 Perform a coordinate transformation operation on the camera plane deviation, and determine the deviation angle used to follow the target object.
  • Step 2.1 Obtain the actual image field of view fov information of the camera and the current attitude information of the gimbal.
  • In this step, the focal length information of the camera can be obtained first, and the actual picture field-of-view (fov) information of the camera can then be determined based on the focal length information. It should be noted that the focal length information can be obtained directly from the camera, or it can be configured by the user based on specific application scenarios and application requirements.
  • Step 2.2 Convert the camera plane deviation into the NED (North-East-Down) geodetic coordinate system according to the actual picture field-of-view information and the current attitude information, so as to obtain the deviation angle.
  • In the following, the camera coordinate system is denoted as the b system, and the NED coordinate system is denoted as the n system.
  • The deviation angle in the camera coordinate system can be obtained by the following relationship (reconstructed from the variable definitions below):

    E_x = e_x · FOV_x
    E_y = e_y · FOV_y
    E_z = 0

  • where e_x and e_y are the normalized coordinate value deviations on the camera plane, FOV_x and FOV_y are the field-of-view angles of the camera in the horizontal (x-axis) and vertical (y-axis) directions respectively, and E_x, E_y, E_z are the deviation angles corresponding to each axis in the camera coordinate system; in matrix representation these form the vector E_b = [E_x, E_y, E_z]^T.
  • The attitude of the gimbal, and the corresponding rotation matrix R_b^n, can be measured through the IMU, and the angle deviation in the NED coordinate system can be obtained according to the following formula:

    E_n = R_b^n · E_b

  • where E_n is the deviation angle corresponding to the geodetic coordinate system NED, R_b^n is the rotation matrix corresponding to the gimbal attitude (from the b system to the n system), and E_b is the corresponding deviation angle in the camera coordinate system.
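  • A sketch of the coordinate transformation of step 2, assuming the linear small-angle mapping reconstructed above and a rotation matrix R_bn obtained from the gimbal IMU:

    import numpy as np

    def deviation_to_ned(e_x, e_y, fov_x, fov_y, R_bn):
        # Deviation angles in the camera (b) frame under the linear mapping.
        E_b = np.array([e_x * fov_x, e_y * fov_y, 0.0])
        # Rotate into the NED (n) frame using the IMU-derived attitude matrix.
        return R_bn @ E_b  # E_n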
  • the gimbal may correspond to different following modes, the following modes may include a single-axis following mode, a dual-axis following mode and a three-axis following mode, and different following modes may correspond to different deviation angles.
  • When the gimbal is in the single-axis following mode, the obtained deviation angle can correspond to a single axis of the gimbal; for example, the deviation angle corresponds to the yaw axis, and the deviation angles corresponding to the other two axes are set to zero.
  • When the gimbal is in the dual-axis following mode, the obtained deviation angle can correspond to two axes of the gimbal; for example, the deviation angle corresponds to the yaw axis and the pitch axis, and the deviation angle corresponding to the other axis is set to zero.
  • When the gimbal is in the three-axis following mode, the obtained deviation angle can correspond to the three axes of the gimbal; for example, the deviation angle corresponds to the yaw axis, the pitch axis and the roll axis, as sketched below.
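  • The per-mode axis masking can be sketched as follows (axis ordering and mode names are illustrative):

    def mask_by_follow_mode(E_n, mode):
        # E_n is ordered (yaw, pitch, roll); zero the axes the mode ignores.
        yaw, pitch, roll = E_n
        if mode == "single-axis":
            return (yaw, 0.0, 0.0)        # yaw only
        if mode == "dual-axis":
            return (yaw, pitch, 0.0)      # yaw and pitch
        return (yaw, pitch, roll)         # three-axis (full) follow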
  • Step 3 Control the gimbal based on the deviation angle so as to follow the target object.
  • Specifically, a pan-tilt controller may be provided on the pan-tilt, and the pan-tilt controller may include three proportional-integral-derivative (Proportion Integral Differential, PID for short) controllers; with reference to the corresponding figure, these may include, among others, a position-loop PID controller and a velocity-loop PID controller.
  • After the deviation angle E_n in the geodetic coordinate system is obtained, it can be input into the PID controllers so as to obtain the control parameters for controlling the rotation of the pan-tilt motors.
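  • A simplified two-loop sketch of such a cascaded PID structure (the gains, the `PID` class and the two-loop reduction are assumptions for illustration; the patent describes three PID controllers):

    class PID:
        # Minimal PID controller; gains and structure are illustrative.
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, error, dt):
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    def cascade_step(position_pid, velocity_pid, E_n_axis, measured_rate, dt):
        # Outer position loop: the deviation angle is already an error signal.
        rate_setpoint = position_pid.update(E_n_axis, dt)
        # Inner velocity loop: drive the motor toward the rate setpoint.
        return velocity_pid.update(rate_setpoint - measured_rate, dt)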
  • Step 4 Gimbal intelligent following strategy.
  • The intelligent following strategy of the gimbal can include the following three aspects: a slow start-stop strategy for newly followed or lost targets, adjusting the gimbal controller for different objects, and determining the focus offset according to the historical focus position. These three major aspects of the PTZ intelligent following strategy are explained below:
  • Step 4.1 Slow start-stop strategy for newly followed targets and lost targets.
  • the slow start and stop strategy includes a uniform acceleration strategy or a uniform deceleration strategy.
  • The acceleration/deceleration time threshold is set to T. With E_n the deviation angle of each axis in the NED coordinate system and Ê_n the actual deviation angle actually output to the gimbal controller, the following relationship (reconstructed from the transition behaviour described below) exists between Ê_n and E_n for a slow start:

    Ê_n = (t / T) · E_n    when t < T
    Ê_n = E_n              when t ≥ T
  • t is the duration information of starting to follow the target object
  • T is the preset time threshold
  • the user can set the specific time length of the preset time threshold according to the specific application scenario and application requirements, in general, T can be 0.5s or 1s.
  • When the following operation on the target object starts, the corresponding deviation angle E_n can be obtained, and the actual deviation angle Ê_n output to the controller is a transition parameter between 0 and the deviation angle E_n, so that a slow start of following the target object can be realized.
  • When the duration information of the following operation on the target object is greater than or equal to the preset time threshold, the actual deviation angle Ê_n can be determined as the deviation angle E_n, that is, the target object can be followed stably.
  • Similarly, for a slow stop after the target is lost, T is the preset time threshold; the user can set its specific length according to specific application scenarios and application requirements, and in general T can be 1 s, 1.5 s or 2 s.
  • After the target is lost, the actual deviation angle Ê_n corresponding to the deviation angle E_n is again a transition parameter, this time between the deviation angle E_n and 0, for example Ê_n = (1 − t/T) · E_n when t < T, so that the following operation on the target object can be ended slowly.
  • When the duration information after losing the target is greater than or equal to the preset time threshold, the actual deviation angle Ê_n can be determined as 0, so that the following operation on the target object is ended stably.
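  • Both transitions can be sketched with a single linear ramp (the linear form is a reconstruction; the description above states only a transition between 0 and E_n):

    def ramp_deviation(E_n, elapsed, threshold, stopping=False):
        # elapsed:   time t since following started (or since the target was
        #            lost, when stopping is True)
        # threshold: preset time threshold T
        ratio = min(elapsed / threshold, 1.0)
        scale = (1.0 - ratio) if stopping else ratio
        return tuple(scale * e for e in E_n)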
  • Step 4.2 Adjust the gimbal controller according to the following speed of different objects.
  • In this step, the gimbal can adjust its controller according to the type of object being followed; for example, for a stationary object the control bandwidth in the yaw and pitch directions can be reduced, while for a moving object the control bandwidth can be increased (mainly in the yaw direction for tall objects such as people, and in both the yaw and pitch directions for lower objects), so that the required following performance is achieved by adjusting the corresponding control bandwidth, as sketched below.
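  • A sketch of this bandwidth selection, with illustrative numeric bandwidths (the patent states only the directions of increase and decrease):

    def control_bandwidth(is_moving, height, height_threshold,
                          low=2.0, high=8.0):
        if not is_moving:
            return {"yaw": low, "pitch": low}      # stationary: soft response
        if height >= height_threshold:
            return {"yaw": high, "pitch": low}     # tall object: mostly yaw
        return {"yaw": high, "pitch": high}        # low object: both axes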
  • Step 4.3 Determine the focus offset according to the historical focus position.
  • The historical focus position corresponding to a historical image frame and the current focus position corresponding to the current image frame may be different. In this case, the focus offset between the current focus position and the historical focus position can be obtained, and the target focus position corresponding to the target object is determined based on this focus offset.
  • When the focus offset is less than or equal to a preset threshold, the current focus position is relatively close to the historical focus position, and the current focus position can be determined as the target focus position corresponding to the target object.
  • When the focus offset is greater than the preset threshold, the current focus position is relatively far from the historical focus position, and the current focus position can be adjusted based on the focus offset, so that the target focus position corresponding to the target object can be obtained.
  • In addition, it can be detected whether the target object has changed; after the target object changes, the composition target position can be updated based on the changed target object to obtain an updated target position, and the gimbal is controlled based on the updated target position so as to realize the following operation on the changed target object.
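  • The threshold decision of step 4.3 can be sketched as follows (the Euclidean offset and the `blend` smoothing factor are assumptions; the patent states only that the current focus position is adjusted based on the offset when it exceeds the threshold):

    def target_focus_position(current, historical, threshold, blend=0.5):
        offset = ((current[0] - historical[0]) ** 2 +
                  (current[1] - historical[1]) ** 2) ** 0.5
        if offset <= threshold:
            return current  # close to the history: use the current position
        # Far jump: adjust the current position based on the offset so the
        # focus point does not change abruptly.
        return (historical[0] + blend * (current[0] - historical[0]),
                historical[1] + blend * (current[1] - historical[1]))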
  • In summary, the camera-based intelligent following method provided by this application embodiment effectively solves the following problems: (1) it solves the problem that transmitting the real-time image to an image processing device to determine the acquisition position incurs a relatively long delay, resulting in a poor following effect; (2) it solves the problem that having the gimbal itself realize target following increases the development cost of additional AI machine-learning algorithms and the cost of hardware design; (3) it solves the problem that a change of the followed target type causes the coordinate point to jump; (4) it solves the problem of inconsistency between the focus point and the following point, preventing the followed target from going out of focus. The quality and effect of the following operation on the target object are thus further ensured, and the stability and reliability of the method are effectively improved.
  • FIG. 41 is a schematic structural diagram of a control device for a pan/tilt according to an embodiment of the present invention.
  • With reference to FIG. 41, this embodiment provides a control device for a pan-tilt, wherein the pan-tilt is communicatively connected with an image acquisition device; the control device of the pan-tilt can execute the control method of the pan-tilt corresponding to FIG. 2.
  • the apparatus in this embodiment may include:
  • memory 412 for storing computer programs
  • the processor 411 is used for running the computer program stored in the memory 412 to realize:
  • acquire the acquisition position of the target object in the acquired image, wherein the acquisition position is determined by the image acquisition device, the image acquisition device is a camera with a manual lens or an automatic lens, and the image acquisition device is communicatively connected with the PTZ;
  • determine, based on the acquisition position, control parameters for performing a following operation on the target object;
  • control the PTZ according to the control parameters to realize the following operation on the target object.
  • the structure of the control apparatus of the PTZ may further include a communication interface 413 for the electronic device to communicate with other devices or a communication network.
  • the processor 411 when the processor 411 acquires the acquisition position of the target object in the acquired image, the processor 411 is configured to: acquire the target focus position corresponding to the target object through the image acquisition device; determine the target focus position as the target object The acquisition location in the acquired image.
  • the processor 411 when the processor 411 obtains the target focus position corresponding to the target object through the image capture device, the processor 411 is configured to: obtain the historical focus position and the current focus position corresponding to the target object through the image capture device; Based on the historical focus position and the current focus position, a target focus position corresponding to the target object is determined.
  • When the processor 411 determines the target focus position corresponding to the target object based on the historical focus position and the current focus position, the processor 411 is configured to: determine the historical object part corresponding to the historical focus position and the current object part corresponding to the current focus position; and determine the target focus position corresponding to the target object according to the historical object part and the current object part.
  • When the processor 411 determines the target focus position corresponding to the target object according to the historical object part and the current object part, the processor 411 is configured to: when the historical object part and the current object part are different parts of the same target object, obtain the relative position information between the historical object part and the current object part; and adjust the current focus position based on the relative position information to obtain the target focus position corresponding to the target object.
  • the processor 411 when the processor 411 adjusts the current focus position based on the relative position information to obtain the target focus position corresponding to the target object, the processor 411 is configured to: when the relative position information is greater than or equal to a preset threshold, The current focus position is adjusted based on the relative position information to obtain the target focus position corresponding to the target object; when the relative position information is less than the preset threshold, the current focus position is determined as the target focus position corresponding to the target object.
  • In addition, when the processor 411 determines the target focus position corresponding to the target object according to the historical object part and the current object part, the processor 411 is configured to: when the historical object part and the current object part are different parts of the same target object, update the composition target position based on the current focus position to obtain a first updated composition target position; and follow the target object based on the first updated composition target position.
  • the processor 411 is configured to: detect whether the target object performing the following operation changes; when the target object changes from the first object to the second object, obtain the acquisition position of the second object in the acquired image; based on the first object The acquisition positions of the two objects in the acquired images update the composition target position to obtain a second updated composition target position corresponding to the second object, so as to follow the second object based on the second updated composition target position.
  • When the processor 411 determines, based on the acquisition position, a control parameter for performing a following operation on the target object, the processor 411 is configured to: calculate a current position prediction value corresponding to the acquisition position; and determine, based on the current position prediction value, the control parameters used to follow the target object.
  • When the processor 411 calculates the current position prediction value corresponding to the acquisition position, the processor 411 is configured to: determine a delay time corresponding to the acquisition position, the delay time indicating the time required for the pan-tilt to obtain the acquisition position via the image acquisition device; and determine, based on the delay time and the acquisition position, the current position prediction value corresponding to the acquisition position.
  • When the processor 411 determines the delay time corresponding to the acquisition position, the processor 411 is configured to: acquire the exposure time corresponding to the acquired image; acquire the current reception time corresponding to the acquisition position; and determine the time interval between the current reception time and the exposure time as the delay time corresponding to the acquisition position.
  • When the processor 411 determines the current position prediction value corresponding to the acquisition position based on the delay time and the acquisition position, the processor 411 is configured to: determine the previous reception time at which the pan-tilt obtained the previous acquisition position of the target object in the previous acquired image; determine the previous position prediction value corresponding to the previous acquisition position; and calculate the current position prediction value corresponding to the acquisition position according to the acquisition position, the exposure time, the delay time, the previous reception time and the previous position prediction value.
  • When the processor 411 calculates the current position prediction value corresponding to the acquisition position based on the acquisition position, the exposure time, the delay time, the previous reception time and the previous position prediction value, the processor 411 is configured to: determine, based on these values, the position adjustment value corresponding to the acquisition position; and determine the sum of the position adjustment value and the acquisition position as the current position prediction value corresponding to the acquisition position.
  • When the processor 411 determines the position adjustment value corresponding to the acquisition position based on the acquisition position, the exposure time, the delay time, the previous reception time and the previous position prediction value, the processor 411 is configured to: determine the moving speed corresponding to the target object based on the acquisition position, the previous position prediction value, the exposure time and the previous reception time; and determine the product of the moving speed and the delay time as the position adjustment value corresponding to the acquisition position.
  • When the processor 411 determines the moving speed corresponding to the target object based on the acquisition position, the previous position prediction value, the exposure time and the previous reception time, the processor 411 is configured to: obtain the position difference between the acquisition position and the previous position prediction value, and the time difference between the exposure time and the previous reception time; and determine the ratio between the position difference and the time difference as the moving speed corresponding to the target object.
  • When the processor 411 determines, based on the current position prediction value, a control parameter for performing a following operation on the target object, the processor 411 is configured to: determine the position deviation between the current position prediction value and the composition target position; and determine, based on the position deviation, the control parameters used to follow the target object.
  • When the processor 411 determines, based on the position deviation, the control parameters for performing the following operation on the target object, the processor 411 is configured to: acquire the picture field-of-view angle corresponding to the acquired image; and determine, based on the picture field-of-view angle and the position deviation, the control parameters used to follow the target object.
  • the size of the control parameter is inversely related to the size of the frame's field of view.
  • In addition, when the processor 411 determines, based on the current position prediction value, a control parameter for performing a following operation on the target object, the processor 411 is configured to: acquire the following mode corresponding to the pan-tilt, the following mode including one of the following: single-axis following mode, dual-axis following mode, full following mode; and determine, based on the current position prediction value and the following mode, the control parameters used to follow the target object.
  • When the processor 411 determines the control parameter for performing the following operation on the target object based on the current position prediction value and the following mode, the processor 411 is configured to: determine, based on the current position prediction value, alternative control parameters for following the target object; and determine, among the alternative control parameters, the target control parameters corresponding to the following mode.
  • When the processor 411 determines, among the alternative control parameters, the target control parameter corresponding to the following mode, the processor 411 is configured to: when the following mode is the single-axis following mode, determine, among the alternative control parameters, the single-axis control parameter corresponding to the single-axis following mode, and set the other alternative control parameters to zero; when the following mode is the dual-axis following mode, determine, among the alternative control parameters, the two-axis control parameters corresponding to the dual-axis following mode, and set the other alternative control parameters to zero; and when the following mode is the full following mode, determine the alternative control parameters as the three-axis control parameters corresponding to the full following mode.
  • In addition, when the processor 411 controls the PTZ according to the control parameters, the processor 411 is configured to: acquire the motion state of the PTZ corresponding to the target object; and control the PTZ based on the motion state of the PTZ and the control parameters.
  • When the processor 411 controls the pan-tilt based on the motion state of the pan-tilt and the control parameters, the processor 411 is configured to: acquire the duration information corresponding to the following operation on the target object; when the duration information is less than a first time threshold, update the control parameters based on the motion state of the pan-tilt to obtain updated control parameters, and control the pan-tilt based on the updated control parameters; and when the duration information is greater than or equal to the first time threshold, control the pan-tilt directly with the control parameters.
  • When the processor 411 updates the control parameters based on the motion state of the PTZ and obtains the updated control parameters, the processor 411 is configured to: determine, based on the motion state of the PTZ, an update coefficient corresponding to the control parameters, wherein the update coefficient is less than 1; and determine the product of the update coefficient and the control parameter as the updated control parameter.
  • When the processor 411 determines the update coefficient corresponding to the control parameter based on the motion state of the PTZ, the processor 411 is configured to: when the motion state of the PTZ is a specific motion state, determine the update coefficient corresponding to the control parameter based on the ratio of the duration information to the first time threshold.
  • the processor 411 when the processor 411 controls the pan/tilt according to the control parameters, the processor 411 is configured to: acquire a follow state corresponding to the target object; and control the pan/tilt based on the follow state and the control parameter.
  • the processor 411 when the processor 411 obtains the following state corresponding to the target object, the processor 411 is configured to: detect whether the target object performing the following operation changes; when the target object changes from the first object to the second object , the first object is determined to be in the lost state.
  • When the processor 411 controls the gimbal based on the follow state and the control parameters, the processor 411 is configured to: when the target object is in the lost state, acquire the lost-duration information corresponding to the following operation process of the target object; update the control parameters according to the lost-duration information to obtain updated control parameters; and control the PTZ based on the updated control parameters.
  • the processor 411 when the processor 411 updates the control parameters according to the loss duration information and obtains the updated control parameters, the processor 411 is configured to: when the loss duration information is greater than or equal to the second time threshold, update the control parameters to zero; when the lost duration information is less than the second time threshold, obtain the ratio between the lost duration information and the second time threshold, and determine the difference between 1 and the ratio as the update coefficient corresponding to the control parameter, and update the The product value of the coefficient and the control parameter is determined as the updated control parameter.
  • the processor 411 when the processor 411 controls the pan-tilt according to the control parameters, the processor 411 is configured to: obtain the object type of the target object; and control the pan-tilt according to the object type and the control parameters.
  • When the processor 411 controls the pan-tilt according to the object type and the control parameters, the processor 411 is configured to: adjust the control parameters according to the object type to obtain adjusted parameters; and control the pan-tilt based on the adjusted parameters.
  • When the processor 411 adjusts the control parameters according to the object type and obtains the adjusted parameters, the processor 411 is configured to: when the target object is a stationary object, reduce the control bandwidth corresponding to the gimbal in the yaw direction and the control bandwidth corresponding to the gimbal in the pitch direction; when the target object is a moving object and the height of the moving object is greater than or equal to a height threshold, increase the control bandwidth corresponding to the gimbal in the yaw direction and reduce the control bandwidth corresponding to the gimbal in the pitch direction; and when the target object is a moving object and the height of the moving object is less than the height threshold, increase the control bandwidth corresponding to the gimbal in the yaw direction and the control bandwidth corresponding to the gimbal in the pitch direction.
  • the processor 411 is configured to: obtain the execution operation input by the user with respect to the image capturing apparatus through the display interface; and control the image capturing apparatus according to the execution operation, so that the image capturing apparatus determines the capturing position.
  • the processor 411 is configured to: acquire distance information corresponding to the target object through a distance measuring sensor disposed on the image acquisition device; send the distance information to the image acquisition device, so that the image acquisition device determines in combination with the distance information The acquisition position of the target object in the acquired image.
  • In addition, the processor 411 is configured to: determine a working mode corresponding to the image acquisition device, the working mode including one of the following: a follow-then-focus mode and a focus-then-follow mode; and control the image acquisition device using the working mode.
  • The PTZ is provided with a universal serial bus (USB) interface, and the USB interface is used for a wired communication connection with the image acquisition device.
  • The apparatus shown in FIG. 41 may execute the methods of the embodiments shown in FIGS. 16 to 34 and FIGS. 39 to 40. For the parts not described in detail in this embodiment, and for the execution process and technical effects of this technical solution, refer to the descriptions of the embodiments shown in FIGS. 16 to 34 and FIGS. 39 to 40, which are not repeated here.
  • FIG. 42 is a schematic structural diagram of a control device of a pan-tilt system provided by an embodiment of the present invention.
  • With reference to FIG. 42, the present embodiment provides a control device of a pan-tilt system, wherein the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected with the pan-tilt; the control device of the pan-tilt system can execute the control method of the pan-tilt system corresponding to FIG. 35.
  • the apparatus in this embodiment may include:
  • memory 422 for storing computer programs
  • the processor 421 is used for running the computer program stored in the memory 422 to realize:
  • control the pan-tilt to move according to the control parameters, so as to realize the following operation on the target object, wherein the control parameters are determined based on the acquisition position.
  • the structure of the control device of the pan-tilt system may further include a communication interface 423 for the electronic device to communicate with other devices or a communication network.
  • The apparatus shown in FIG. 42 may execute the methods of the embodiments shown in FIG. 35 and FIG. 39 to FIG. 40. For the parts not described in detail, and for the execution process and technical effects of this technical solution, refer to the descriptions of the embodiments shown in FIG. 35 and FIG. 39 to FIG. 40, which are not repeated here.
  • FIG. 43 is a schematic structural diagram of another pan/tilt control device provided by an embodiment of the present invention.
  • With reference to FIG. 43, the present embodiment provides a pan-tilt control device, the pan-tilt being communicatively connected with an image acquisition device; the control device of the pan-tilt can execute the control method of the pan-tilt corresponding to FIG. 36.
  • the apparatus in this embodiment may include:
  • memory 432 for storing computer programs
  • the processor 431 is used for running the computer program stored in the memory 432 to realize:
  • the position of the target object is sent to the image acquisition device, so that the image acquisition device determines a focus position corresponding to the target object based on the position of the target object, and performs a focus operation on the target object based on the focus position.
  • the structure of the control device of the PTZ may further include a communication interface 433 for the electronic device to communicate with other devices or a communication network.
  • The apparatus shown in FIG. 43 may execute the methods of the embodiments shown in FIG. 36 and FIG. 39 to FIG. 40. For the parts not described in detail, and for the execution process and technical effects of this technical solution, refer to the descriptions of the embodiments shown in FIG. 36 and FIG. 39 to FIG. 40, which are not repeated here.
  • FIG. 44 is a schematic structural diagram of a control device of another pan-tilt system provided by an embodiment of the present invention.
  • With reference to FIG. 44, the present embodiment provides another control device of a pan-tilt system, wherein the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected with the pan-tilt; the control device of the pan-tilt system can execute the control method of the pan-tilt system corresponding to FIG. 37.
  • the apparatus in this embodiment may include:
  • memory 442 for storing computer programs
  • the processor 441 is used for running the computer program stored in the memory 442 to realize:
  • control the image acquisition device to acquire an image, the image including the target object;
  • determine the position of the target object in the image;
  • based on the position of the target object, control the pan-tilt to follow the target object, and control the image acquisition device to focus on the target object according to the position of the target object.
  • the structure of the control device of the pan-tilt system may further include a communication interface 443 for the electronic device to communicate with other devices or a communication network.
  • The apparatus shown in FIG. 44 may execute the methods of the embodiments shown in FIG. 37 and FIG. 39 to FIG. 40. For the parts not described in detail, and for the execution process and technical effects of this technical solution, refer to the descriptions of the embodiments shown in FIG. 37 and FIG. 39 to FIG. 40, which are not repeated here.
  • FIG. 45 is a schematic structural diagram of a control device of another pan-tilt system provided by an embodiment of the present invention.
  • With reference to FIG. 45, the present embodiment provides another control device of a pan-tilt system, wherein the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected with the pan-tilt; the control device of the pan-tilt system can execute the control method of the pan-tilt system corresponding to FIG. 38.
  • the apparatus in this embodiment may include:
  • memory 452 for storing computer programs
  • the processor 451 is used for running the computer program stored in the memory 452 to realize:
  • acquire the acquisition position of the first object in the acquired image acquired by the image acquisition device, wherein the acquisition position of the first object is used for the pan-tilt to follow the first object and for the image acquisition device to perform a focus operation on the first object;
  • when the first object is changed to the second object, acquire the acquisition position of the second object in the acquired image acquired by the image acquisition device, so that the pan-tilt changes from following the first object to performing a following operation on the second object based on the acquisition position of the second object, and the image acquisition device changes from a focusing operation on the first object to a focusing operation on the second object based on the position of the second object.
  • the structure of the control device of the pan-tilt system may further include a communication interface 453 for the electronic device to communicate with other devices or a communication network.
  • The apparatus shown in FIG. 45 may execute the methods of the embodiments shown in FIGS. 38 to 40. For the parts not described in detail, refer to the descriptions of those embodiments, which are not repeated here.
  • The control device in any of the above embodiments may be independent of the pan-tilt or the image acquisition device, or may be integrated in the pan-tilt or the image acquisition device.
  • FIG. 46 is a schematic structural diagram of a control system for a pan/tilt according to an embodiment of the present invention.
  • the present embodiment provides a control system for a pan/tilt.
  • the control system may include:
  • the control device 62 of the pan/tilt shown in FIG. 41 is disposed on the pan/tilt 61 and used to communicate with the image acquisition device, and to control the pan/tilt 61 through the image acquisition device.
  • The control system in this embodiment may further include:
  • the ranging sensor 63 is arranged on the image acquisition device, and is used for acquiring distance information corresponding to the target object;
  • the control device 62 of the pan/tilt is connected in communication with the ranging sensor 63 for sending the distance information to the image acquisition device, so that the image acquisition device can determine the acquisition position of the target object in the acquired image in combination with the distance information.
  • FIG. 47 is a schematic structural diagram of a control system for a pan/tilt according to an embodiment of the present invention.
  • the present embodiment provides a control system for a pan/tilt.
  • Specifically, the control system can include:
  • the control device 73 of the pan-tilt system corresponding to FIG. 42 is disposed on the pan-tilt 71 and is used to communicate with the image acquisition device 72 and to control the image acquisition device 72 and the pan-tilt 71 respectively.
  • FIG. 48 is a schematic structural diagram of another pan-tilt control system provided by an embodiment of the present invention. With reference to FIG. 48, the present embodiment provides another pan-tilt control system. Specifically, the control system can include:
  • the above-mentioned control device 82 of the pan/tilt in FIG. 43 is disposed on the pan/tilt 81 , and is used to communicate with the image capture device and to control the image capture device through the pan/tilt 81 .
  • FIG. 49 is a schematic structural diagram of another pan-tilt control system provided by an embodiment of the present invention. With reference to FIG. 49, the present embodiment provides another pan-tilt control system. Specifically, the control system can include:
  • the control device 92 of the pan-tilt system shown in FIG. 44 is disposed on the pan-tilt 91 and used to communicate with the image acquisition device, and to control the image acquisition device and the pan-tilt 91 respectively.
  • FIG. 50 is a schematic structural diagram of another pan-tilt control system provided by an embodiment of the present invention. With reference to FIG. 50, the present embodiment provides another pan-tilt control system. Specifically, the control system can include:
  • the control device 103 of the pan-tilt system corresponding to FIG. 45 is disposed on the pan-tilt 101 and used to communicate with the image acquisition device 102 and to control the image acquisition device 102 and the pan-tilt 101 respectively.
  • The specific implementation principle, implementation process and implementation effect of the control system shown in FIG. 50 are similar to those of the control device of the pan-tilt system shown in FIG. 45; for the parts not described in detail in this embodiment, refer to the related description of the embodiment shown in FIG. 45.
  • The control device in the control systems of the above embodiments may be integrated in the pan-tilt; the control system may further include an image acquisition device, and the image acquisition device may be integrated on the pan-tilt or detachably connected to the pan-tilt.
  • FIG. 51 is a schematic structural diagram 1 of a movable platform provided by an embodiment of the present invention. Referring to FIG. 51, this embodiment provides a movable platform. Specifically, the movable platform may include:
  • the control device 113 of the pan/tilt shown in FIG. 41 is disposed on the pan/tilt 112 and used to communicate with the image capture device 114 and to control the pan/tilt 112 through the image capture device 114 .
  • the support mechanism 111 varies with the type of the movable platform.
  • When the movable platform is a handheld gimbal, the support mechanism 111 can be a handle; when the movable platform is an airborne gimbal, the support mechanism 111 can be the fuselage on which the gimbal is mounted.
  • movable platforms include, but are not limited to, the types described above.
  • Fig. 52 is a second schematic structural diagram of a movable platform provided by an embodiment of the present invention; with reference to Fig. 52, this embodiment provides a movable platform.
  • the movable platform may include:
  • the control device 123 of the pan-tilt system shown in FIG. 42 is disposed on the pan-tilt 122 and is used to communicate with the image acquisition device 124 and to control the image acquisition device 124 and the pan-tilt 122 respectively.
  • the support mechanism 121 varies with the type of the movable platform.
  • When the movable platform is a handheld gimbal, the support mechanism 121 can be a handle; when the movable platform is an airborne gimbal, the support mechanism 121 can be the fuselage on which the gimbal is mounted.
  • movable platforms include, but are not limited to, the types described above.
  • FIG. 53 is a third schematic structural diagram of a movable platform provided by an embodiment of the present invention. With reference to FIG. 53 , this embodiment provides a movable platform.
  • the movable platform may include:
  • the control device 133 of the pan/tilt in the above-mentioned FIG. 43 is disposed on the pan/tilt 132 and is used to communicate with the image acquisition device 134 and to control the image acquisition device 134 through the pan/tilt 132 .
  • the support mechanism 131 varies with the type of the movable platform.
  • When the movable platform is a handheld gimbal, the support mechanism 131 can be a handle; when the movable platform is an airborne gimbal, the support mechanism 131 can be the fuselage on which the gimbal is mounted.
  • movable platforms include, but are not limited to, the types described above.
  • FIG. 54 is a fourth schematic structural diagram of a movable platform provided by an embodiment of the present invention. Referring to FIG. 54, this embodiment provides a movable platform. Specifically, the movable platform may include:
  • the control device 143 of the pan-tilt system shown in FIG. 44 is disposed on the pan-tilt 142 and is used to communicate with the image acquisition device 144 and to control the image acquisition device 144 and the pan-tilt 142 respectively.
  • the support mechanism 141 varies with the type of the movable platform.
  • When the movable platform is a handheld gimbal, the support mechanism 141 can be a handle; when the movable platform is an airborne gimbal, the support mechanism 141 can be the fuselage on which the gimbal is mounted.
  • movable platforms include, but are not limited to, the types described above.
  • The specific implementation principle, implementation process and implementation effect of the movable platform shown in FIG. 54 are similar to those of the control device of the PTZ system shown in FIG. 44; for the parts not described in detail in this embodiment, refer to the related description of the embodiment shown in FIG. 44.
  • FIG. 55 is a schematic structural diagram 5 of a movable platform provided by an embodiment of the present invention.
  • this embodiment provides a movable platform.
  • the movable platform may include:
  • the control device 153 of the pan-tilt system shown in FIG. 45 is disposed on the pan-tilt 152 and used to communicate with the image acquisition device 154 and to control the image acquisition device 154 and the pan-tilt 152 respectively.
  • the support mechanism 151 varies with the type of the movable platform.
  • When the movable platform is a handheld gimbal, the support mechanism 151 can be a handle; when the movable platform is an airborne gimbal, the support mechanism 151 can be the fuselage on which the gimbal is mounted.
  • movable platforms include, but are not limited to, the types described above.
  • The specific implementation principle, implementation process and implementation effect of the movable platform shown in FIG. 55 are similar to those of the control device of the PTZ system shown in FIG. 45; for the parts not described in detail in this embodiment, refer to the related description of the embodiment shown in FIG. 45.
  • The control device in the movable platforms of the above embodiments may be integrated into the pan-tilt; the movable platform may further include an image acquisition device, and the image acquisition device may be integrated in the pan-tilt or detachably connected to the pan-tilt.
  • In addition, an embodiment of the present invention provides a computer-readable storage medium in which program instructions are stored, the program instructions being used to implement the control method of the pan-tilt in FIG. 2 to FIG. 34 and FIG. 39 to FIG. 40 above.
  • An embodiment of the present invention provides a computer-readable storage medium in which program instructions are stored, the program instructions being used to implement the control method of the PTZ system in FIG. 35 and FIG. 39 to FIG. 40 above.
  • An embodiment of the present invention provides a computer-readable storage medium in which program instructions are stored, the program instructions being used to implement the control method of the pan-tilt in FIG. 36 and FIG. 39 to FIG. 40 above.
  • An embodiment of the present invention provides a computer-readable storage medium in which program instructions are stored, the program instructions being used to implement the control method of the PTZ system in FIG. 37 and FIG. 39 to FIG. 40 above.
  • An embodiment of the present invention provides a computer-readable storage medium in which program instructions are stored, the program instructions being used to implement the control method of the PTZ system shown in FIG. 38 to FIG. 40 above.
  • In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners.
  • The apparatus embodiments described above are only illustrative; for example, the division of the modules or units is only a logical function division, and there may be other division manners in actual implementation.
  • In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between apparatuses or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • the technical solution of the present invention is essentially or the part that contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium , including several instructions for causing a computer processor (processor) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • the aforementioned storage medium includes: U disk, mobile hard disk, Read-Only Memory (ROM, Read-Only Memory), Random Access Memory (RAM, Random Access Memory), magnetic disk or optical disk and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Studio Devices (AREA)

Abstract

A control method based on an image acquisition device (22), and a control method and apparatus for a pan-tilt (21). The method includes: acquiring shooting parameters determined by the image acquisition device (22), the image acquisition device (22) being a camera (100) with a manual lens or an automatic lens, the shooting parameters being usable to adjust the images captured by the image acquisition device (22) (S101); determining control parameters based on the shooting parameters (S102); and correspondingly controlling at least one of the pan-tilt (21) and an auxiliary device (23) based on the control parameters, the pan-tilt (21) supporting the image acquisition device (22) and/or the auxiliary device (23), and the auxiliary device (23) assisting the image acquisition device (22) in shooting (S103). The shooting parameters determined by the image acquisition device (22) can thus be obtained directly, without the help of any additional device, which reduces data-processing cost; after the shooting parameters are obtained, the pan-tilt (21) and/or the auxiliary device (23) can be controlled based on them, so that the pan-tilt (21) is effectively controlled without manual operation by the user.

Description

Control method based on an image acquisition device, and control method and apparatus for a pan-tilt
Cross-reference
This application claims priority to international patent application No. PCT/CN2020/141400, filed on December 30, 2020 and entitled "Control method and apparatus for pan-tilt, movable platform, and storage medium", the entire content of which is incorporated herein by reference.
Technical field
Embodiments of the present invention relate to the technical field of movable platforms, and in particular to a control method based on an image acquisition device, and a control method and apparatus for a pan-tilt.
Background
With the rapid development of science and technology, pan-tilts are applied ever more widely, especially in shooting. For example, a camera mounted on a pan-tilt can transmit image information to the pan-tilt through the pan-tilt's image-transmission module, so that the pan-tilt performs corresponding control such as tracking; or, to realize functions such as focusing, a camera with a manual lens usually needs additional accessories to obtain information such as depth. This not only places high demands on the pan-tilt's performance but also adds extra cost.
Summary
Embodiments of the present invention provide a control method based on an image acquisition device and a control method and apparatus for a pan-tilt, which can directly obtain the shooting parameters determined by the image acquisition device and then control at least one of the pan-tilt and an auxiliary device based on them, making pan-tilt control convenient and ensuring a good user experience.
A first aspect of the present invention provides a control method based on an image acquisition device, including:
acquiring shooting parameters determined by the image acquisition device, where the image acquisition device is a camera with a manual lens or an automatic lens, and the shooting parameters can be used to adjust the images captured by the image acquisition device;
determining control parameters based on the shooting parameters;
correspondingly controlling at least one of a pan-tilt and an auxiliary device based on the control parameters, where the pan-tilt supports the image acquisition device and/or the auxiliary device, and the auxiliary device assists the image acquisition device in shooting.
A second aspect of the present invention provides a control apparatus based on an image acquisition device, including: a memory for storing a computer program; and a processor for running the computer program stored in the memory to implement the method of the first aspect.
A third aspect of the present invention provides a computer-readable storage medium storing program instructions for implementing the control method based on an image acquisition device of the first aspect.
A fourth aspect of the present invention provides a pan-tilt, including: a pan-tilt body; and the above-described control apparatus based on an image acquisition device, arranged on the pan-tilt body.
The technical solution of this embodiment provides a communication link for acquiring the shooting parameters that differs from the related art: the shooting parameters determined by the image acquisition device are obtained directly, the control parameters are determined based on them, and at least one of the pan-tilt and the auxiliary device is controlled accordingly. The shooting parameters are thus obtained directly, without the help of any additional device, reducing data-processing cost; the computation related to the shooting parameters is placed in the image acquisition device, reducing the demand on the pan-tilt's computing capability; and after the parameters are obtained, the pan-tilt and/or the auxiliary device can be controlled based on them, so the pan-tilt is effectively controlled without manual operation by the user, ensuring a good experience, improving practicality, and favoring market adoption.
A fifth aspect of the present invention provides a pan-tilt control method, including:
acquiring the collection position of a target object in a captured image, where the collection position is determined by an image acquisition device, the image acquisition device is a camera with a manual lens or an automatic lens, and the image acquisition device is communicatively connected with the pan-tilt;
determining, based on the collection position, control parameters for performing a follow operation on the target object;
controlling the pan-tilt according to the control parameters so as to follow the target object.
A sixth aspect of the present invention provides a pan-tilt control apparatus, including: a memory for storing a computer program; and a processor for running the computer program stored in the memory to implement the method of the fifth aspect.
A seventh aspect of the present invention provides a pan-tilt control system, including: a pan-tilt; and the control apparatus of the sixth aspect, arranged on the pan-tilt, configured to be communicatively connected with the image acquisition device and to control the pan-tilt through the image acquisition device.
An eighth aspect of the present invention provides a movable platform, including: a pan-tilt; a support mechanism for connecting the pan-tilt; and the control apparatus of the sixth aspect, arranged on the pan-tilt, configured to be communicatively connected with the image acquisition device and to control the pan-tilt through the image acquisition device.
A ninth aspect of the present invention provides a computer-readable storage medium storing program instructions for the pan-tilt control method of the fifth aspect.
A tenth aspect of the present invention provides a control method for a pan-tilt system, where the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected with it, the method including:
controlling the image acquisition device to capture an image, and acquiring the collection position of a target object in the image, the collection position being determined by the image acquisition device;
transmitting the collection position to the pan-tilt;
controlling the pan-tilt to move according to control parameters so as to follow the target object, where the control parameters are determined based on the collection position.
An eleventh aspect of the present invention provides a control apparatus for a pan-tilt system, where the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected with it, the apparatus including: a memory for storing a computer program; and a processor for running the computer program stored in the memory to implement the method of the tenth aspect.
A twelfth aspect of the present invention provides a pan-tilt control system, including: a pan-tilt; and the control apparatus of the eleventh aspect, arranged on the pan-tilt, configured to be communicatively connected with the image acquisition device and to control the image acquisition device and the pan-tilt respectively.
A thirteenth aspect of the present invention provides a movable platform, including: a pan-tilt; a support mechanism for connecting the pan-tilt; and the control apparatus of the eleventh aspect, arranged on the pan-tilt, configured to be communicatively connected with the image acquisition device and to control the image acquisition device and the pan-tilt respectively.
A fourteenth aspect of the present invention provides a computer-readable storage medium storing program instructions for the pan-tilt system control method of the tenth aspect.
A fifteenth aspect of the present invention provides a pan-tilt control method for a pan-tilt communicatively connected with an image acquisition device, the method including:
acquiring a captured image including a target object;
determining the position of the target object in the captured image, so as to follow the target object based on its position;
sending the position of the target object to the image acquisition device, so that the image acquisition device determines, based on the position, the focus position corresponding to the target object and focuses on the target object based on that focus position.
A sixteenth aspect of the present invention provides a pan-tilt control apparatus for a pan-tilt communicatively connected with an image acquisition device, including: a memory for storing a computer program; and a processor for running the computer program stored in the memory to implement the method of the fifteenth aspect.
A seventeenth aspect of the present invention provides a pan-tilt control system, including: a pan-tilt; and the control apparatus of the sixteenth aspect, arranged on the pan-tilt, configured to be communicatively connected with the image acquisition device and to control the image acquisition device through the pan-tilt.
An eighteenth aspect of the present invention provides a movable platform, including: a pan-tilt; a support mechanism for connecting the pan-tilt; and the control apparatus of the sixteenth aspect, arranged on the pan-tilt, configured to be communicatively connected with the image acquisition device and to control the image acquisition device through the pan-tilt.
A nineteenth aspect of the present invention provides a computer-readable storage medium storing program instructions for the pan-tilt control method of the fifteenth aspect.
A twentieth aspect of the present invention provides a control method for a pan-tilt system, where the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected with it, the method including:
controlling the image acquisition device to capture an image including a target object;
determining the position of the target object in the image;
controlling, based on the position of the target object, the pan-tilt to follow the target object, and controlling, according to the position of the target object, the image acquisition device to focus on the target object.
A twenty-first aspect of the present invention provides a control apparatus for such a pan-tilt system, including: a memory for storing a computer program; and a processor for running the computer program stored in the memory to implement the method of the twentieth aspect.
A twenty-second aspect of the present invention provides a pan-tilt control system, including: a pan-tilt; and the control apparatus of the twenty-first aspect, arranged on the pan-tilt, configured to be communicatively connected with the image acquisition device and to control the image acquisition device and the pan-tilt respectively.
A twenty-third aspect of the present invention provides a movable platform, including: a pan-tilt; a support mechanism for connecting the pan-tilt; and the control apparatus of the twenty-first aspect, arranged on the pan-tilt, configured to be communicatively connected with the image acquisition device and to control the image acquisition device and the pan-tilt respectively.
A twenty-fourth aspect of the present invention provides a computer-readable storage medium storing program instructions for the pan-tilt system control method of the twentieth aspect.
A twenty-fifth aspect of the present invention provides a control method for a pan-tilt system, where the pan-tilt system includes a pan-tilt and an image acquisition device communicatively connected with it, the method including:
acquiring the collection position of a first object in the image captured by the image acquisition device, where the first object's collection position is used by the pan-tilt to follow the first object and by the image acquisition device to focus on the first object;
when the first object changes to a second object, acquiring the second object's collection position in the captured image, so that the pan-tilt changes from following the first object to following the second object based on the second object's collection position, and the image acquisition device changes from focusing on the first object to focusing on the second object based on the second object's position.
A twenty-sixth aspect of the present invention provides a control apparatus for such a pan-tilt system, including: a memory for storing a computer program; and a processor for running the computer program stored in the memory to implement the method of the twenty-fifth aspect.
A twenty-seventh aspect of the present invention provides a pan-tilt control system, including: a pan-tilt; and the control apparatus of the twenty-sixth aspect, arranged on the pan-tilt, configured to be communicatively connected with the image acquisition device and to control the image acquisition device and the pan-tilt respectively.
A twenty-eighth aspect of the present invention provides a movable platform, including: a pan-tilt; a support mechanism for connecting the pan-tilt; and the control apparatus of the twenty-sixth aspect, arranged on the pan-tilt, configured to be communicatively connected with the image acquisition device and to control the image acquisition device and the pan-tilt respectively.
A twenty-ninth aspect of the present invention provides a computer-readable storage medium storing program instructions for the pan-tilt system control method of the twenty-fifth aspect.
The pan-tilt control method and apparatus, movable platform, and storage medium provided by the embodiments acquire the collection position of the target object in the captured image, determine the control parameters for following the target object based on that position, and control the pan-tilt accordingly, thereby following the target object. Since the collection position is determined by the image acquisition device, i.e. a camera with a manual or automatic lens, and the pan-tilt can obtain it directly from the image acquisition device, the communication link for transmitting the collection position differs from the related art: the executing body that computes the collection position changes, the image acquisition device's existing position-recognition capability is reused, and the demand on the pan-tilt's computing power for the collection position is reduced.
Brief description of the drawings
The drawings described here provide a further understanding of the application and constitute a part of it; the illustrative embodiments and their description explain the application and do not unduly limit it. In the drawings:
FIG. 1 is a schematic flowchart of a control method based on an image acquisition device according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of acquiring the focusing information determined by the image acquisition device according to an embodiment;
FIG. 3 is a schematic flowchart of determining control parameters based on the shooting parameters according to an embodiment;
FIG. 4 is another schematic flowchart of determining control parameters based on the shooting parameters according to an embodiment;
FIG. 4a is a schematic diagram of determining the first rotation direction of the follow-focus motor according to an embodiment;
FIG. 4b is another schematic diagram of determining the first rotation direction of the follow-focus motor according to an embodiment;
FIG. 4c is yet another schematic diagram of determining the first rotation direction of the follow-focus motor according to an embodiment;
FIG. 5 is yet another schematic flowchart of determining control parameters based on the shooting parameters according to an embodiment;
FIG. 6 is another schematic flowchart of determining control parameters based on the shooting parameters according to an embodiment;
FIG. 7 is yet another schematic flowchart of determining control parameters based on the shooting parameters according to an embodiment;
FIG. 8 is still another schematic flowchart of determining control parameters based on the shooting parameters according to an embodiment;
FIG. 9 is a schematic flowchart of another control method based on an image acquisition device according to an embodiment;
FIG. 10 is a schematic flowchart of an auto-focus method according to an application embodiment of the present invention;
FIG. 11 is a schematic diagram of the principle of keeping the target's display in the picture consistent according to an application embodiment;
FIG. 12 is a schematic diagram of the principle of yet another control method based on an image acquisition device according to an application embodiment;
FIG. 13 is a schematic structural diagram of a control apparatus based on an image acquisition device according to an embodiment;
FIG. 14 is a schematic structural diagram of a pan-tilt according to an embodiment;
FIG. 15 is a schematic structural diagram of a pan-tilt system in the prior art;
FIG. 16 is a schematic flowchart of a pan-tilt control method according to an embodiment;
FIG. 17 is a schematic structural diagram of the communication connection between a pan-tilt and an image acquisition device according to an embodiment;
FIG. 18 is a schematic diagram of acquiring the collection position of the target object in the captured image according to an embodiment;
FIG. 19 is a schematic flowchart of acquiring the collection position of the target object in the captured image according to an embodiment;
FIG. 20 is a schematic flowchart of acquiring, through the image acquisition device, the target focus position corresponding to the target object according to an embodiment;
FIG. 21 is a first schematic diagram of the historical object part corresponding to the historical focus position and the current object part corresponding to the current focus position according to an embodiment;
FIG. 22 is a second schematic diagram of the historical object part corresponding to the historical focus position and the current object part corresponding to the current focus position according to an embodiment;
FIG. 23 is a schematic flowchart of another pan-tilt control method according to an embodiment;
FIG. 24 is a schematic diagram of a change of the target object according to an embodiment;
FIG. 25 is a schematic flowchart of computing the current position prediction corresponding to the collection position according to an embodiment;
FIG. 26 is a schematic flowchart of determining the position adjustment corresponding to the collection position based on the collection position, the exposure time, the delay time, the previous receive time, and the previous position prediction according to an embodiment;
FIG. 27 is a first schematic flowchart of determining, based on the current position prediction, the control parameters for following the target object according to an embodiment;
FIG. 28 is a second schematic flowchart of determining, based on the current position prediction, the control parameters for following the target object according to an embodiment;
FIG. 29 is a schematic flowchart of controlling the pan-tilt based on its motion state and the control parameters according to an embodiment;
FIG. 30 is a first schematic flowchart of controlling the pan-tilt according to the control parameters according to an embodiment;
FIG. 31 is a second schematic flowchart of controlling the pan-tilt according to the control parameters according to an embodiment;
FIG. 32 is a schematic flowchart of yet another pan-tilt control method according to an embodiment;
FIG. 33 is a schematic flowchart of another pan-tilt control method according to an embodiment;
FIG. 34 is a schematic flowchart of yet another pan-tilt control method according to an embodiment;
FIG. 35 is a schematic flowchart of a pan-tilt system control method according to an embodiment;
FIG. 36 is a schematic flowchart of another pan-tilt control method according to an embodiment;
FIG. 37 is a schematic flowchart of another pan-tilt system control method according to an embodiment;
FIG. 38 is a schematic flowchart of yet another pan-tilt system control method according to an embodiment;
FIG. 39 is a first schematic diagram of the principle of a pan-tilt control method according to an application embodiment;
FIG. 40 is a second schematic diagram of the principle of a pan-tilt control method according to an application embodiment;
FIG. 41 is a schematic structural diagram of a pan-tilt control apparatus according to an embodiment;
FIG. 42 is a schematic structural diagram of a pan-tilt system control apparatus according to an embodiment;
FIG. 43 is a schematic structural diagram of another pan-tilt control apparatus according to an embodiment;
FIG. 44 is a schematic structural diagram of another pan-tilt system control apparatus according to an embodiment;
FIG. 45 is a schematic structural diagram of yet another pan-tilt system control apparatus according to an embodiment;
FIG. 46 is a schematic structural diagram of a pan-tilt control system according to an embodiment;
FIG. 47 is a schematic structural diagram of a pan-tilt control system according to an embodiment;
FIG. 48 is a schematic structural diagram of another pan-tilt control system according to an embodiment;
FIG. 49 is a schematic structural diagram of yet another pan-tilt control system according to an embodiment;
FIG. 50 is a schematic structural diagram of another pan-tilt control system according to an embodiment;
FIG. 51 is a first schematic structural diagram of a movable platform according to an embodiment;
FIG. 52 is a second schematic structural diagram of a movable platform according to an embodiment;
FIG. 53 is a third schematic structural diagram of a movable platform according to an embodiment;
FIG. 54 is a fourth schematic structural diagram of a movable platform according to an embodiment;
FIG. 55 is a fifth schematic structural diagram of a movable platform according to an embodiment.
Detailed description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions are described below clearly and completely with reference to the drawings. The described embodiments are only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Unless otherwise defined, all technical and scientific terms used herein have the meanings commonly understood by those skilled in the technical field of the present invention. The terms used in the specification are only for describing specific embodiments and are not intended to limit the invention.
To understand the specific implementation of the technical solutions of this embodiment, the related art is described first:
With the rapid development of science and technology, demand for portable cameras keeps growing and cameras keep shrinking, while the demand for camera intelligence also rises. A camera's intelligence can be realized by its built-in 3A algorithms: auto focus (AF), auto exposure (AE), and auto white balance (AWB). Auto focus and auto exposure (aperture) both rely on the lens's built-in motor and microcontroller, which enlarges the lens; such automatic lenses are also expensive, which limits their user base. Hence the manual lens came into being.
A manual lens is usually a purely mechanical device, so it is inexpensive, and, since it saves the built-in motor and microprocessor, it occupies less space. Moreover, an external motor (replacing the built-in one) can likewise realize focusing and exposure (aperture) functions: for example, the camera's phase detection auto-focus (PDAF) information or contrast detection auto-focus (CDAF) information can be obtained over a USB connection, and the external motor can then be closed-loop controlled using the PDAF and CDAF information to realize focusing/follow-focus. Similarly, given the camera's over-/under-exposure state, exposure can be realized by adjusting the aperture; and an external control module can obtain the camera's metering data over USB and control the color temperature and intensity of an external fill light to realize external exposure control.
In addition, the Hitchcock dolly zoom is a camera move in which the subject's size stays constant while the background appears to enlarge or shrink, realized by zooming while moving the camera. Cameras that support smart focus following allow framing any object, the frame changing size as the object does. With a manual lens the user can perform a dolly zoom by hand, but the operation is difficult and the dolly-zoom effect cannot be guaranteed.
In summary, the above techniques have the following defects: (1) traditional auto-focus lenses are expensive and their user base is limited; (2) a manual lens cannot perform auto focus or automatic aperture-based exposure adjustment; (3) a manually controlled dolly zoom is hard to operate; (4) under some working conditions camera anti-shake and pan-tilt anti-shake interfere with each other; (5) limited by hardware, manual exposure has an upper bound, and brightness can only be raised by increasing ISO, which degrades picture quality.
To solve these problems, this embodiment provides a control method based on an image acquisition device and a pan-tilt control method and apparatus. The control method may include: acquiring the shooting parameters determined by the image acquisition device, which may include focusing information, zoom information, etc., and which can be used to adjust the captured image and/or are determined based on user expectation to realize a preset shooting function; analyzing the shooting parameters to determine control parameters; and controlling at least one of the pan-tilt and an auxiliary device accordingly, where the pan-tilt supports the image acquisition device and/or the auxiliary device, and the auxiliary device assists the shooting, e.g. realizing follow-focus, zoom, or fill-light shooting.
The technical solution obtains the shooting parameters determined by the image acquisition device directly, without the help of any device (detection device, auxiliary device, etc.), reducing data-processing cost; after the parameters are obtained, at least one of the pan-tilt and the auxiliary device can be controlled based on them, so the pan-tilt is effectively controlled without manual operation, ensuring a good user experience, improving practicality, and favoring market adoption.
Some implementations of the control method based on an image acquisition device and the pan-tilt control method and apparatus are described below in detail with reference to the drawings. Where the embodiments do not conflict, the embodiments and their features may be combined with one another.
FIG. 1 is a schematic flowchart of a control method based on an image acquisition device according to an embodiment. The image acquisition device may be arranged on a movable platform and detachably connected to it, i.e. the user may mount it on or remove it from the platform as needed, so that it can operate independently of the platform. The movable platform may include passively moved devices or powered devices such as a handheld pan-tilt, an airborne pan-tilt, an unmanned aerial vehicle, an unmanned vehicle, an unmanned ship, or a mobile robot; the image acquisition device may include at least one of: a camera with a manual lens, a camera with an automatic lens, a mobile phone, a video camera, or another device with shooting capability. For ease of description, a pan-tilt is taken as the movable platform with the image acquisition device arranged on it, and the image acquisition device may be communicatively connected with the movable platform. When the image acquisition device is a camera, its lens may be detachably connected to the body to suit different shooting scenarios and effects.
Based on this structure, the executing body of the method may be a control apparatus based on the image acquisition device, which may be implemented as software or a combination of software and hardware. It may be integrated into any one of the pan-tilt, the image acquisition device, or the auxiliary device, or be independent of them; this embodiment takes the pan-tilt as the executing body. The method may include:
Step S101: acquiring shooting parameters determined by the image acquisition device, where the image acquisition device is a camera with a manual lens or an automatic lens and the shooting parameters can be used to adjust the captured images.
Step S102: determining control parameters based on the shooting parameters.
Step S103: correspondingly controlling at least one of the pan-tilt and the auxiliary device based on the control parameters, where the pan-tilt supports the image acquisition device and/or the auxiliary device, and the auxiliary device assists the image acquisition device in shooting.
The implementation of each step is detailed below.
Step S101: during image acquisition, the control apparatus can obtain the shooting parameters through the image acquisition device. They may include at least one of: focusing information, zoom information, fill-light information, the proportion of a preset object in the image, or anti-shake information; in some examples the focusing information may include a collection position. The parameters can be used to adjust the captured image and/or are determined based on user expectation to realize a preset shooting function. For example, when the parameter is the proportion of a preset object in the image, the composition of the captured image can be adjusted, e.g. to realize the Hitchcock dolly-zoom move; when it is focusing information, follow-focus parameters, fill-light information, or anti-shake information, focusing, follow-focus, fill-light, and anti-shake shooting can be realized.
The way of obtaining the shooting parameters is not limited. In some examples, the sensing information of a sensor other than the image sensor in the image acquisition device is obtained and analyzed, so the parameters do not depend on the captured image. In some examples, the captured image itself is obtained and analyzed to yield the parameters. In still other examples, the captured image and the user's input operation on it are obtained, and the parameters are determined from both; the parameters are then determined based on user expectation, and different application scenarios may determine different parameters.
In some examples, acquiring the shooting parameters may include: establishing a wireless communication link with the image acquisition device based on a wireless communication device arranged on the pan-tilt or the auxiliary device, and acquiring the parameters over that link; that is, the control apparatus obtains the parameters by communicating with the pan-tilt or the auxiliary device and using their wireless connection to the image acquisition device.
The wireless communication device may include at least one of: a Bluetooth module, a near-field communication (NFC) module, or a wireless LAN (Wi-Fi) module.
For example, when it includes a Bluetooth module and the image acquisition device supports Bluetooth, a wireless link is established through the Bluetooth module, and once the device obtains the shooting parameters the control apparatus acquires them over the link; the module may be arranged on the pan-tilt or the auxiliary device. The Wi-Fi case is analogous. When it includes an NFC module, the link is established when the control apparatus and the device come into close range, and the parameters are acquired over it. When it includes Bluetooth plus NFC, the NFC module is used to obtain the information for establishing the Bluetooth connection; with Wi-Fi plus NFC, the NFC module obtains the information for the Wi-Fi connection; with Bluetooth plus Wi-Fi, either may be selected under different bandwidth requirements, or the Bluetooth module obtains the information for the Wi-Fi connection.
Step S102: for the pan-tilt, the user may choose whether and which auxiliary devices to fit. The auxiliary device may include at least one of: a focus motor, a zoom motor, a fill-light device, etc., detachably connected to the pan-tilt so that it can be mounted or removed as needed. When the pan-tilt carries both the auxiliary device and the image acquisition device, it adjusts their spatial positions and is communicatively connected to both; the image acquisition device may be mechanically coupled to the auxiliary device (directly or via a connector) and detachably connected to it, so the two can also be separated.
To control the pan-tilt accurately, the shooting parameters are analyzed to determine the control parameters; the specific way is not limited and may be set per scenario or need. In some examples a preset mapping between shooting parameters and control parameters is used; in others a pre-trained machine learning model takes the shooting parameters as input and outputs the control parameters.
Step S103: since the control parameters are used to control the pan-tilt and/or the auxiliary device, the corresponding control is performed once they are obtained. For example, when they include anti-shake information, the pan-tilt is controlled for stabilization. In some examples the auxiliary device is controlled: parameters related to the zoom motor realize zoom adjustment; parameters related to the focus motor realize focus adjustment; fill-light information controls the fill-light device to realize light supplementation. In further examples both are controlled: e.g. anti-shake information may control the pan-tilt and/or an image acquisition device that has an anti-shake control unit.
From the above, the shooting parameters can be obtained by the controller inside the image acquisition device and differ from image information: they are not obtained by the pan-tilt, the auxiliary device, or another control apparatus analyzing image information or data collected by other devices, and in some usage scenarios the corresponding control can be realized without obtaining such data at all.
The control method of this embodiment acquires the shooting parameters determined by the image acquisition device, determines the control parameters based on them, and controls at least one of the pan-tilt and the auxiliary device accordingly, thereby obtaining the shooting parameters directly without any additional device, reducing data-processing cost; after the parameters are obtained, at least one of the pan-tilt and the auxiliary device can be controlled based on them, so the pan-tilt is effectively controlled without manual operation, ensuring a good user experience, improving practicality, and favoring market adoption.
It can be understood that when the control parameters are used to control the auxiliary device, the image acquisition device may be another kind of image-capable electronic device besides a camera; no specific limitation is made here.
Building on the above embodiments, the image acquisition device may be a camera with a manual lens, in which case the shooting parameters it determines can be used to adjust the captured image. Note that when different auxiliary devices are fitted on the pan-tilt, different auxiliary devices may correspond to different shooting parameters, so the parameters determined by the image acquisition device and the auxiliary device together can realize different adjustments of the captured image.
In some examples, although a manual lens cannot auto-focus by itself, if the image sensor in the device (e.g. a complementary metal-oxide-semiconductor, CMOS, sensor) supports it, the phase-focusing information of a region specified by the user tapping the device's display can be obtained; a phase-to-focus-ring-position curve is obtained by calibration, the follow-focus-ring position is computed directly from that curve, and the auxiliary device, e.g. the follow-focus motor, is controlled to rotate to the target focus-ring position, realizing fast focusing for the manual lens. Alternatively, the follow-focus motor can realize CDAF by obtaining the contrast information of the specified region. Accordingly, when the auxiliary device includes a follow-focus motor for adjusting the focus ring of the image acquisition device, acquiring the shooting parameters may include: acquiring the focusing information determined by the image acquisition device, where the focusing information may include at least one of: phase detection auto-focus (PDAF) information or contrast detection auto-focus (CDAF) information.
The image acquisition device may have different aperture application modes, e.g. a large-aperture mode (aperture value greater than a set aperture threshold) and a small-aperture mode (aperture value less than or equal to the threshold). PDAF and CDAF accuracy differ across modes, e.g. in the small-aperture mode PDAF accuracy is limited, so different focusing information may be used for auto-focus. To ensure accurate and reliable auto-focus, referring to FIG. 2, acquiring the focusing information determined by the image acquisition device may include:
Step S201: determining the aperture application mode of the image acquisition device.
Different modes correspond to different aperture value ranges, so the mode can be determined from the device's current aperture value while it works; alternatively, different modes may correspond to different display indicators (an indicator light, an icon on the display interface, etc.), and the mode is determined from the indicator information. Other reliable ways of determining the mode are equally possible and are not detailed here.
Step S202: acquiring, according to the aperture application mode, at least one of the PDAF information or the CDAF information determined by the image acquisition device.
After the mode is obtained it is analyzed, and at least one of the PDAF and CDAF information determined by the device is acquired; in different scenarios the user may acquire different focusing information per usage or design needs. In some examples, when PDAF information can be obtained accurately, it is acquired according to the mode; in others, when CDAF information can be obtained accurately, it is acquired; in some examples both are acquired.
In further examples this includes: when the mode is a first mode, acquiring the CDAF information, or both the PDAF and CDAF information, where the aperture value of the first mode is less than or equal to the set threshold; and/or, when the mode is a second mode, acquiring the PDAF information, where the aperture value of the second mode is greater than the threshold.
Specifically, since the first mode's aperture value is at most the threshold it can be called the small-aperture mode, and the second the large-aperture mode. In the first mode PDAF accuracy is limited, so to focus accurately both PDAF and CDAF information are acquired and used together for fast auto-focus. In the second mode PDAF can aim at the target directly, i.e. accurate focusing is realized with PDAF information alone, so the PDAF information is determined through the device.
In this embodiment, determining the aperture application mode and then acquiring the corresponding focusing information realizes determining different focusing information under different modes, guaranteeing flexible and reliable acquisition and improving the precision of focusing based on it.
Continuing the above, when the auxiliary device includes the follow-focus motor, controlling at least one of the pan-tilt and the auxiliary device based on the control parameters may include: controlling the follow-focus motor based on the control parameters to realize the follow-focus operation of the image acquisition device.
When the auxiliary device includes the follow-focus motor and the focusing information can be acquired, the focusing information is analyzed to obtain the corresponding control parameters, and the motor is controlled with them. Since the focusing information may include PDAF and/or CDAF information, different focusing information may correspond to different data-processing strategies. To determine the control parameters reliably when the focusing information includes PDAF information, referring to FIG. 3, determining the control parameters based on the shooting parameters may include:
Step S301: acquiring a first mapping between PDAF information and focus-ring positions.
Step S302: determining, based on the first mapping, the focus-ring position corresponding to the PDAF information.
Step S303: determining the control parameters of the follow-focus motor based on the focus-ring position.
When the auxiliary device includes the follow-focus motor, the image acquisition device may be fitted with a corresponding focus ring marked with multiple positions used to control the motor; through a ring position the motor can be controlled to drive the gear engaged with the ring and focus. To control the motor accurately, the first mapping, which identifies the one-to-one relation between PDAF information and ring positions, may be obtained directly from the device's manufacturer or by prior calibration. Based on it, the ring position corresponding to the PDAF information is determined, and then the motor's control parameters, which may include at least one of: rotation travel, rotation speed, or rotation direction. In some examples a preset mapping between ring positions and control parameters is used; in others a pre-trained machine learning model takes the ring position as input and outputs the motor's control parameters.
In this embodiment, when the focusing information includes PDAF information, acquiring the first mapping, determining the corresponding ring position, and determining the motor's control parameters from it effectively guarantees reliable parameter determination and improves the precision of the focusing realized by controlling the motor with them.
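To make the first mapping concrete, the following is a minimal sketch of the lookup step, assuming a calibrated phase-to-ring table; the table values, function names, and the numpy-based interpolation are illustrative assumptions, not the patent's exact implementation:

```python
import numpy as np

# Hypothetical calibration table: PDAF phase offsets and the matching
# focus-ring positions, obtained once per lens by calibration.
CAL_PHASE = np.array([-80.0, -40.0, -10.0, 0.0, 15.0, 50.0, 90.0])
CAL_RING_POS = np.array([120, 340, 610, 700, 820, 1150, 1400])

def ring_position_from_phase(pdaf_phase: float) -> int:
    """First mapping: interpolate the calibrated phase -> focus-ring
    curve to get the target ring position for the follow-focus motor."""
    pos = np.interp(pdaf_phase, CAL_PHASE, CAL_RING_POS)
    return int(round(pos))

# The motor would then be commanded to the returned position, e.g.
# motor.move_to(ring_position_from_phase(current_phase)).
```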
When the focusing information includes CDAF information, referring to FIG. 4, determining the control parameters based on the shooting parameters may include:
Step S401: determining the current motor position of the follow-focus motor corresponding to the CDAF information.
After the CDAF information is obtained it is analyzed: a mapping between CDAF information and motor positions is preset, and through it the current motor position corresponding to the CDAF information is determined reliably.
Step S402: acquiring the set position range corresponding to the follow-focus motor.
The set position range identifies the interval in which the motor can move normally; different motors may have the same or different ranges. In some examples the range is a pre-configured parameter stored in a preset area and obtained by accessing it. In some examples it is obtained by calibration: when the device is a camera, acquire the near end and far end of the lens, which bound the lens's focal-length variation range and may be its mechanical limits (generally the near end corresponds to the lower mechanical limit and the far end to the upper); acquire the first motor position corresponding to the near end and the second corresponding to the far end; and determine the range bounded below by the first position and above by the second, within which the motor can move. This obtains the motor's set position range accurately and effectively.
Note that the execution order of steps S401 and S402 is not limited to the above: S402 may precede S401 or run concurrently with it.
Step S403: determining the first rotation speed and first rotation direction of the follow-focus motor based on the current motor position and the set position range.
In some examples a pre-trained machine learning model takes both as input and outputs the speed and direction. In others: acquire the first distance from the current position to the upper limit of the range and the second distance to the lower limit, and determine the speed and direction from these two distances.
Specifically, the distances are compared. When the first distance is greater than the second, the travel available toward the lower limit is smaller than toward the upper limit; since focusing may require the motor to rotate through a long travel, the first rotation direction is set toward the upper limit (so focusing can complete), and the first speed is determined from the first distance. When the first distance is smaller, the direction is set toward the lower limit and the speed is determined from the second distance. When they are equal, either direction may be chosen and the speed determined from the corresponding distance.
Example 1 (FIG. 4a): the current motor position L1 lies within the range; the first distance S1 from L1 to the upper limit and the second distance S2 to the lower limit are obtained; since S1 > S2, the first rotation direction D is set toward the upper limit and the first speed V is determined from S1, determining both accurately and reliably.
Example 2 (FIG. 4b): for position L2 with S2 > S1, D is set toward the lower limit and V is determined from S2.
Example 3 (FIG. 4c): for position L3 with S2 = S1, the direction may be D toward the lower limit with V from S2, or D' toward the upper limit with V' from S1; either determines the speed and direction accurately and reliably.
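The decision logic of step S403 and the three examples can be condensed into the following sketch; the function name, the travel-time parameter (derived, per the later application example, from the excitation signal's period), and the speed clamp are illustrative assumptions:

```python
def first_speed_and_direction(current, lower, upper, max_speed, travel_time):
    """Compare the distances from the current motor position to the two
    limits (S1, S2), head toward the farther limit, and derive the first
    rotation speed from that distance, clamped to the motor's speed
    threshold so normal operation is guaranteed."""
    dist_to_upper = upper - current      # first distance S1
    dist_to_lower = current - lower      # second distance S2
    if dist_to_upper >= dist_to_lower:
        direction, distance = +1, dist_to_upper   # toward the upper limit
    else:
        direction, distance = -1, dist_to_lower   # toward the lower limit
    speed = min(distance / travel_time, max_speed)  # positively correlated with distance
    return speed, direction
```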
Note that after the motor moves per the first speed and direction, its current position changes and a new current position is obtained; for it, the first speed and direction are re-determined in the same way, so that while the in-focus state has not yet been reached, the control parameters (e.g. first speed and direction) are updated in real time, keeping the focusing stable and reliable. Specifically, after determining them, the motor is moved per the first speed and direction to realize auto-focus. In some examples, to keep auto-focus running stably, the method further includes: updating the set position range based on the current motor position to obtain an updated range, and adjusting the first speed based on the updated range and the post-rotation motor position to obtain a second speed smaller than the first.
While the motor moves per the first speed and direction, the speed may change in real time with the motor's position. After the current motor position is obtained, the set range is updated from it; in some examples one boundary of the updated range is the current motor position and the other is the original upper or lower limit, i.e. the updated range is smaller than and part of the set range.
As the motor moves, its current position keeps changing, i.e. from the current position to the post-rotation position; the updated range and the post-rotation position are then analyzed to adjust the first speed into the second. The implementation of "obtaining the second speed from the updated range and post-rotation position" is analogous to "determining the first speed and direction from the current position and set range" above and is not repeated. Since the updated range is smaller than the set range, the second speed is smaller than the first, i.e. the motor's rotation speed keeps decreasing during focusing.
In addition, a speed threshold marking the motor's normal operating state is pre-configured: at or below it the motor runs normally; above it, normal operation cannot be guaranteed. Therefore, after the first speed is obtained it is compared with the threshold: if it exceeds the threshold, the determined speed is too high and is replaced by the threshold, which is used to control the motor; if it is at or below the threshold, it lies in the normal range and the motor is controlled with it.
In this embodiment, determining the current motor position corresponding to the CDAF information, acquiring the motor's set position range, and determining the first speed and direction from them effectively guarantees reliable determination of both; controlling the motor with them realizes focusing quickly and stably, further improving the method's stability and reliability.
In the related art, when a follow-focus motor engaged with a camera's focus ring and carried together with the camera on a pan-tilt performs focusing, the pan-tilt also carries a depth detection device such as a time-of-flight (TOF) sensor; it provides depth information of a region or object within the camera's sensing range, fed back directly or indirectly to the motor, which drives the ring based on the depth information to focus. By contrast, the present application needs no extra depth detection device to realize focusing: it reuses the camera's existing capability, reducing cost in that scenario while realizing effective focusing.
In other examples the auxiliary device may include not only a follow-focus motor but also a zoom motor. When it includes a zoom motor for adjusting the zoom ring of the image acquisition device, acquiring the shooting parameters may include: acquiring the zoom information corresponding to the captured image.
While the device runs, the captured image is obtained through it and analyzed to acquire the corresponding zoom information, which may be obtained through a zoom controller. After the zoom information is obtained, the zoom motor can be controlled based on it to realize zooming; controlling at least one of the pan-tilt and the auxiliary device then includes: controlling the zoom motor based on the control parameters to realize the zoom operation.
Specifically, when the auxiliary device includes the zoom motor and the zoom information can be acquired, the zoom information is analyzed to obtain the corresponding control parameters and the motor is controlled with them; the zoom information adjusts the lens's zoom capability, different zoom information corresponds to different focal lengths, and under different focal lengths the subject's displayed size in the image differs. In some examples, referring to FIG. 5, determining the control parameters may include:
Step S501: determining, based on the captured image, the initial proportion of a set object in the display picture, where the display picture is determined from the captured image.
The captured image may include a set object such as a person, animal, plant, building, or vehicle; after the image is obtained it is analyzed to determine the object's initial proportion in the display picture, which identifies the object's displayed size relative to the picture.
Note that the display picture is determined from the captured image; their sizes may be the same or different, and the picture may be at least part of the image. For example, if the image contains much content and the user is interested in only part of it, the region containing the part of interest may be taken as the display picture; if the content is small, the whole image is taken as the picture.
The way of determining the initial proportion is not limited and may be set per scenario or need: e.g. the set object and display picture are determined from the user's input, and the proportion from them. In other examples this includes: acquiring the object's size features and the picture's size features, and determining the initial proportion from both.
Specifically, the set object and display picture may be determined in response to a user operation or recognized by a target recognition algorithm; e.g. among objects 1, 2, and 3 in an image, object 3, whose proportion of the whole image exceeds the others', is taken as the set object. The object size features may include at least one of: object length, object width, or object area; correspondingly, the picture size features may include at least one of: picture length, picture width, or picture area.
The object's size features may be obtained by recognizing its contour via an object-recognition algorithm or machine learning model and deriving them from the contour; or from the object's recognition frame, which may be a preset rectangle, square, circle, etc. In some examples, so the user can see the object's size intuitively, the frame or contour is displayed after it is obtained.
After the object and picture size features are obtained they are analyzed to determine the initial proportion; since the features have several forms, the proportion can be determined in several ways. When the features are lengths, the initial proportion is the ratio of object length to picture length: with object length TL and picture length FL, P may be TL/FL or FL/TL; since the object is displayed within the picture, TL < FL, so P = TL/FL lies between 0 and 1 and P = FL/TL exceeds 1; other forms such as TL/(FL−TL) or (FL−TL)/TL are also possible, giving P greater or smaller than 1. When the features are widths TW and FW, P may analogously be TW/FW, FW/TW, TW/(FW−TW), or (FW−TW)/TW; when they are areas TS and FS, P may be TS/FS, FS/TS, TS/(FS−TS), or (FS−TS)/TS.
Note that determining the initial proportion is not limited to the above; other reliable ways of obtaining the set object's proportion in the display picture are equally possible and are not detailed here.
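A minimal sketch of the three proportion indicators follows; the function and parameter names are illustrative, and "length" is taken here as the vertical dimension, which is an assumption:

```python
def subject_ratio(obj_w, obj_h, frame_w, frame_h, mode="width"):
    """Initial proportion of the framed set object relative to the display
    picture, using one of the three indicators from the text (length,
    width, or area ratio); the width ratio is noted as the most stable."""
    if mode == "length":
        return obj_h / frame_h
    if mode == "width":
        return obj_w / frame_w
    if mode == "area":
        return (obj_w * obj_h) / (frame_w * frame_h)
    raise ValueError(f"unknown mode: {mode}")

# e.g. subject_ratio(320, 720, 1920, 1080, mode="width") == 320/1920
```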
Step S502: determining the control parameters based on the initial proportion and the zoom information.
After the initial proportion and zoom information are obtained they are analyzed to determine the control parameters. In some examples a pre-trained machine learning model takes both as input and outputs the parameters. In others this includes: acquiring the second mapping between the zoom motor's motion-travel range and the zoom travel, and the third mapping between the motor's motion direction and the zoom direction; and determining the motor's travel parameter and motion direction based on the initial proportion, the zoom information, and the two mappings.
For the zoom motor, the second mapping, identifying the one-to-one relation between motor travel and zoom value, and the third mapping, identifying that between motor direction and zoom direction, are calibrated in advance; they may be stored in a preset area or device and obtained by accessing it, so the motor's control parameters can be obtained accurately from them.
After the mappings are obtained, the initial proportion, zoom information, and the two mappings are analyzed to determine the motor's travel parameter and motion direction, which are the control parameters for the zoom motor, realizing accurate and reliable parameter acquisition.
In this embodiment, when the auxiliary device includes the zoom motor and the zoom information can be acquired, determining the set object's initial proportion from the captured image and the control parameters (the motor's travel parameter and direction) from the proportion and zoom information effectively realizes reliable parameter determination, making it convenient to control the motor to realize the device's zoom operation and the Hitchcock camera move.
In the related art, a zoom motor engaged with the camera's zoom ring, carried with the camera on a pan-tilt, relies on a depth detection device such as a TOF sensor on the pan-tilt, whose depth information is fed back to the motor to drive the ring. By contrast, the present application needs no extra depth detection device: it reuses the camera's own capability, reducing cost while realizing effective zooming.
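One plausible shape of the second and third mappings, assuming calibration reduces them to a travel-per-ratio scale and a direction sign; all names and the linearization are assumptions of this sketch:

```python
def zoom_motor_command(ratio_now, ratio_target, travel_per_ratio, direction_sign):
    """Map a desired change of the subject's picture proportion to a
    zoom-motor command: travel via the second mapping (motor travel <->
    zoom travel), direction via the third mapping (motor direction <->
    zoom in / zoom out)."""
    ratio_error = ratio_target - ratio_now
    travel = abs(ratio_error) * travel_per_ratio                      # second mapping
    direction = direction_sign if ratio_error > 0 else -direction_sign  # third mapping
    return travel, direction
```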
In still other examples the auxiliary device may include not only the follow-focus and zoom motors but also a fill-light device (e.g. a fill light). When it includes one for supplementing light for the image acquisition device, acquiring the shooting parameters may include: acquiring the light detection information determined by the image acquisition device.
While the device runs, the light detection information, which may include at least one of exposure intensity or light color, can be obtained through it. In some examples the device analyzes the captured image, e.g. white-balance detection, to obtain the related light detection; in others the captured image is obtained through the device and analyzed to yield the information.
After the light detection information is obtained, the fill-light device can be controlled based on it to supplement light; controlling at least one of the pan-tilt and the auxiliary device then includes: controlling the fill-light device based on the control parameters to realize the fill-light operation.
Note that since the light detection information may include exposure intensity and/or light color, different information may correspond to different data-processing strategies when obtaining the corresponding control parameters. To guarantee reliable parameter acquisition when the information includes exposure intensity, referring to FIG. 6, determining the control parameters may include:
Step S601: determining the target exposure intensity corresponding to the device's captured image;
Step S602: determining the compensation exposure parameter of the fill-light device based on the exposure intensity and the target exposure intensity.
While the device works, to realize exposure balance and guarantee the picture quality of the captured images, the target exposure intensity corresponding to them is determined; it may be pre-specified (the user configuring different targets for different environment information) or determined automatically from the display quality of the captured image.
After the target is obtained, the exposure intensity and target are analyzed to determine the compensation exposure parameter, i.e. the fill-light device's control parameter; the device is then controlled with it to realize the exposure-balance operation. In some examples the compensation parameter is determined from the difference between the target and the measured exposure intensity, or from their ratio.
In this embodiment, when the light detection information includes exposure intensity, determining the target exposure intensity, the compensation parameter from both, and controlling the fill-light device accordingly realizes the device's exposure balance, guarantees the quality of the captured images, and effectively improves practicality.
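A minimal sketch of step S602, taking the difference form of the compensation parameter (the text allows either a difference or a ratio); names and the EV unit are illustrative:

```python
def fill_light_compensation(measured_ev, target_ev):
    """Compensation exposure parameter for the fill-light device: here,
    the difference between target and measured exposure intensity."""
    return target_ev - measured_ev

# Positive -> brighten (raise fill-light intensity / open the aperture);
# negative -> dim.  e.g. fill_light_compensation(7.5, 9.0) == 1.5
```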
When the light detection information includes light color (e.g. red, orange, yellow, green, blue, indigo, violet, etc.), this embodiment provides another way to obtain the control parameters; referring to FIG. 7, determining the control parameters may include:
Step S701: determining the target scene color corresponding to the device's captured image.
Step S702: determining the compensation color parameter of the fill-light device based on the light color and the target scene color.
While the device works, to realize white balance of the shooting scene and guarantee the picture quality of the captured images, the target scene color is determined; it may be pre-specified (the user configuring different targets for different environment information) or determined automatically from the display quality of the captured image.
After the target scene color is obtained, the light color and the target are analyzed to determine the compensation color parameter, i.e. the fill-light device's control parameter; the device is then controlled with it to realize the white-balance operation of the shooting scene. In some examples the parameter is determined from the difference between the target scene color and the light color, or from their ratio.
In this embodiment, when the light detection information includes light color, determining the target scene color, the compensation color parameter from both, and controlling the fill-light device accordingly realizes white balance of the shooting scene, guarantees the quality of the captured images, and effectively improves practicality.
Thus the present application needs no extra light detection device to realize the fill-light operation: it reuses the camera's existing capability, reducing cost while realizing effective light supplementation.
In some examples the image acquisition device may include an optical element and an image sensor. The optical element reduces the diffraction of light passing through it; it may be an element that can eliminate light diffraction, e.g. a diffractive optical element (DOE) or a light baffle, and when it is a baffle it may be one with a light-passing hole at its center. The image sensor, arranged in the device's imaging plane, receives the diffraction-reduced image light to generate the captured image; it may be a charge-coupled device (CCD) image sensor or a complementary metal-oxide-semiconductor (CMOS) image sensor.
Besides the optical element and image sensor, the device may include an anti-shake control unit; in some examples it includes a lens optical image stabilization (OIS) unit and/or an in-body image stabilization (IBIS) unit, which can, based on the shooting parameters, apply shake compensation to at least one of the optical element and the image sensor so as to adjust the captured image.
Continuing, when the device includes an anti-shake control unit, anti-shake control of the pan-tilt can be based on it; acquiring the shooting parameters may then include: acquiring the anti-shake information determined by the anti-shake control unit provided in the device, the information being determined according to the device's position change information.
While the device works, since it is arranged on the pan-tilt, the user may adjust its position by adjusting the pan-tilt's attitude per usage or design needs; when the device's position changes, its anti-shake control unit determines different anti-shake information from the position change information.
Since the anti-shake information is determined from the position change information, that information must be obtained accurately for the anti-shake information to be reliable. To this end, a first position sensor detecting position changes may be arranged on the image acquisition device, and/or a second position sensor on the pan-tilt; the position change information is also used to adjust the device's spatial position, i.e. the second sensor's sensing information can be used both to determine the anti-shake information and for the device's stabilization.
After the anti-shake information is obtained, the pan-tilt can be controlled based on it; controlling at least one of the pan-tilt and the auxiliary device then includes: controlling the pan-tilt based on the control parameters to realize the pan-tilt's stabilization. Specifically, when the anti-shake information determined by the device's anti-shake control unit can be acquired, it is analyzed to obtain the corresponding control parameters and the pan-tilt is controlled with them, guaranteeing the display quality of the captured images and improving practicality.
When the anti-shake control unit includes both an OIS unit and an IBIS unit, to improve the stability of pan-tilt control, the method may include: detecting the communication connection state between the device and the pan-tilt; when they are communicatively connected, i.e. the device is applied on the pan-tilt, the IBIS unit provided on the pan-tilt is started to realize anti-shake through it; when disconnected, i.e. the device is detached from the pan-tilt, the device's corresponding lens OIS unit is started to realize anti-shake through it.
Since the pan-tilt's stabilization and the device's internal anti-shake may interfere with each other and degrade anti-shake quality, this embodiment provides a way to determine the control parameters; referring to FIG. 8, determining the control parameters may include:
Step S801: determining, based on the anti-shake information, the device's anti-shake sensitivity information, which characterizes the speed of the anti-shake response to the pan-tilt's excitation signal.
In some examples a mapping between anti-shake information and sensitivity information is obtained and used with the anti-shake information to determine the sensitivity; in others a machine learning model for analyzing the anti-shake information takes it as input and outputs the sensitivity. A larger sensitivity means the anti-shake response to the pan-tilt's excitation (input) signal is faster; a smaller one means it is slower. For example, with a 15 Hz pan-tilt excitation signal, larger sensitivity means a faster anti-shake response to it, smaller means slower.
Step S802: acquiring the pan-tilt's current excitation signal.
The IBIS unit on the pan-tilt performs anti-shake based on an excitation signal; in different scenarios the signal may have different frequencies, and the unit's anti-shake operation and effect are related to that frequency. To obtain the control parameters corresponding to the anti-shake information accurately, the current excitation signal, which keeps the pan-tilt and its IBIS unit running normally, is acquired; it may be a control signal configured by the user per scenario or need, or a default, e.g. a 5 Hz, 10 Hz, 15 Hz, or 20 Hz excitation signal.
Step S803: determining, based on the current excitation signal and the sensitivity information, the control parameters corresponding to the pan-tilt, so as to control the pan-tilt and/or the image acquisition device.
In some examples this includes: determining, from the sensitivity information, the current sensitivity corresponding to the current excitation signal; and generating the pan-tilt's control parameters when the current sensitivity is greater than or equal to the sensitivity threshold.
After the sensitivity information and current excitation signal are obtained they are analyzed to determine the current sensitivity. It is then compared with the threshold: when it is below the threshold, the device responds slowly to the pan-tilt's excitation, i.e. performing suppression through the device's OIS unit yields a good anti-shake suppression effect, so no pan-tilt control parameters need be generated and the OIS unit performs the suppression.
When the current sensitivity is greater than or equal to the threshold, the device responds quickly to the pan-tilt's excitation, i.e. the OIS unit cannot achieve a good suppression effect, so the suppression is performed by the IBIS unit on the pan-tilt, and the pan-tilt's control parameters are generated. In some examples this includes: suppressing the current excitation signal to obtain a suppressed signal, which is the pan-tilt's control parameter. That is, when the pan-tilt can suppress the shake effectively, to keep the device's anti-shake unit from affecting the pan-tilt's anti-shake effect, the pan-tilt's current excitation signal is suppressed, guaranteeing the pan-tilt's anti-shake quality.
In further examples, determining the pan-tilt's control parameters from the current excitation signal and sensitivity information may include: determining the current sensitivity; and, when it is greater than or equal to the threshold, generating the control parameters corresponding to the image acquisition device, so as to control the anti-shake control unit with them. As above, when the sensitivity is at or above the threshold the OIS unit cannot achieve a good suppression effect, so suppression need not go through it; to ensure this, control parameters for the device are generated. In some examples this includes: generating a stop-running parameter corresponding to the anti-shake control unit, which is the device's control parameter; when the pan-tilt suppresses the shake sufficiently, stopping the unit keeps the device from affecting the anti-shake, guaranteeing the pan-tilt's anti-shake quality.
In this embodiment, determining the device's anti-shake sensitivity from the anti-shake information, acquiring the pan-tilt's current excitation signal, and determining the control parameters from both effectively guarantees reliable parameter determination, avoids shake degrading the quality of the captured images, and further improves the method's practicality.
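A minimal sketch of the arbitration in step S803, assuming the sensitivity function has already been measured; the function interface and the returned flag names are illustrative assumptions:

```python
def pick_stabilizer(sensitivity_fn, excitation_hz, threshold):
    """Decide which side suppresses shake at the pan-tilt's current
    excitation frequency.  Sensitivity below the threshold: the camera
    OIS responds slowly enough to suppress well on its own, so no
    pan-tilt control parameters are needed.  At or above the threshold:
    OIS cannot suppress well, so the pan-tilt IBIS/GIS takes over and a
    stop-running parameter is issued to the OIS unit, avoiding mutual
    interference."""
    current_sensitivity = sensitivity_fn(excitation_hz)
    if current_sensitivity >= threshold:
        return {"pan_tilt_stabilization": True, "camera_ois": False}
    return {"pan_tilt_stabilization": False, "camera_ois": True}
```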
While the pan-tilt runs, it may shake due to external factors (the user's handling attitude, such as pace speed, step frequency, and hand tremor, or vibration of the body carrying the pan-tilt); the shake may include shake in the translational direction (parallel to a preset plane) and/or the vertical direction (perpendicular to it). Since the pan-tilt carries the image acquisition device, its shake can affect the quality of the captured images and hence the pan-tilt's camera-move effect. To guarantee that effect, a signal filter unit may be provided in the image acquisition device; referring to FIG. 9, this embodiment provides a process for configuring it, and the method may further include:
Step S901: determining the shake information of the pan-tilt, which carries the image acquisition device.
To configure the filter unit's parameters, the pan-tilt's shake information is determined. In some examples an inertial measurement unit (IMU) on the pan-tilt identifies the user's step frequency to obtain step-frequency information, and a pre-configured mapping between step-frequency information and shake information yields the latter. In other examples an environment sensor and the IMU on the pan-tilt provide the environment information and the user's step frequency, which are analyzed together to obtain the shake information.
Step S902: sending the shake information to the image acquisition device, so that the device configures the parameters of the signal filter unit based on it.
The device's signal filter unit processes the user's step-frequency information to reduce or eliminate the pan-tilt shake it causes; to better guarantee the pan-tilt's running quality, the shake information is sent to the device once obtained, and the device configures the filter's parameters based on it, so the configured filter can better process the step-frequency information and reduce or eliminate the resulting shake.
In some examples this configuration includes: the device analyzes the shake information to determine the user's step frequency, determines the filter unit's parameter information from it, and configures the unit accordingly; the configured unit can then reduce or eliminate the pan-tilt shake caused by the user's steps.
In this embodiment, determining the pan-tilt's shake information and sending it to the image acquisition device for filter configuration effectively realizes configuring the device's signal filter unit based on the determined shake information, helping the unit better process the user's step-frequency information and reduce or eliminate the resulting pan-tilt shake.
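One plausible filter parameterization is a notch at the detected walking cadence; this is a sketch under that assumption, since the camera-side filter is not specified, and it uses SciPy's standard `iirnotch` design:

```python
from scipy.signal import iirnotch, lfilter

def configure_step_filter(step_hz, fs, q=5.0):
    """Configure the signal filter unit from the step frequency the
    pan-tilt IMU identified: a notch at the walking cadence removes the
    translational disturbance it causes."""
    b, a = iirnotch(w0=step_hz, Q=q, fs=fs)
    return b, a

# b, a = configure_step_filter(step_hz=2.0, fs=200.0)
# filtered = lfilter(b, a, accel_samples)   # suppress the cadence component
```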
In some examples the shooting parameters may be determined based on the captured image and user expectation and used to realize a preset shooting function of the image acquisition device, such as focusing or smart following. When the preset function is smart following, the device may be communicatively connected with the pan-tilt, and acquiring the shooting parameters may include: acquiring the collection position of the target object in the captured image, obtained by analyzing the image captured through the device. After the collection position is obtained it is analyzed to obtain the control parameters; in some examples, determining the control parameters includes: determining, based on the collection position, the control parameters for following the target object.
A mapping between collection positions and the control parameters for following is pre-configured; based on it and the collection position, the parameters are determined.
After the control parameters are obtained, the pan-tilt and/or the auxiliary device can be controlled with them; in some examples this includes: controlling the pan-tilt according to the parameters so as to follow the target object. Note that the implementation and effect of these steps are similar to those corresponding to FIGS. 16-18 below, to whose description reference may be made; the technical solution effectively realizes stable following of the target object and further improves the method's practicality.
The following application examples highlight the technical solutions of this embodiment.
Application example 1: the camera may include a manual lens, a sensor, and an external focus motor; the sensor may support phase focusing or not. If it does, the external focus motor can obtain the phase detection auto-focus (PDAF) information through the sensor's communication interface (e.g. a USB interface): during capture with the manual lens, the user specifies a region by tapping the display area, and the PDAF information corresponding to that region is obtained. A pre-calibrated phase-to-focus-ring-position curve is then acquired, the ring position computed directly from it and the PDAF information, and the external motor rotated to the target ring position based on it, realizing fast focusing of the lens.
Since PDAF information is internal to the camera, in some examples it is not easy to obtain; in some examples it is obtained in cooperation with the camera manufacturer. If the sensor does not support phase focusing, the external motor can obtain the contrast detection auto-focus (CDAF) information from the contrast of the focus region, realizing contrast focusing.
Specifically, referring to FIG. 10, the auto-focus method of this application embodiment may include the following steps:
Step 1: the external focus motor obtains the manual lens's mechanical limits by calibration. The limits include a near-end position and a far-end position, the left value L and right value R respectively; the near end may correspond to the minimum focal length and the far end to the maximum.
Step 2: obtain the external motor's current position C, and compute the contrast F of the focus region at C (between L and R).
Step 3: compute the external motor's rotation speed and focusing direction based on C, L, and R.
Since the initial state must decide the manual lens's focusing direction (toward the far or near end), and to prevent the next frame's position exceeding the motor's L-R range, the external motor's speed is computed: to obtain the CDAF information accurately, the excitation signal is acquired, its frequency yields a time interval, the distances from C to L and R are determined, and from the distance and time the motor's rotation speed (positively correlated with the focusing speed) follows. Note that to keep focusing stable, a maximum motor speed may be pre-configured, and all commanded speeds are at most that maximum.
Step 4: move the external motor per the speed and direction to obtain its updated position C'.
Step 5: compute the motor's updated speed S and updated focusing direction based on C'.
Step 6: check whether the updated speed S has reached the convergence threshold.
Step 7: if it has, focusing is achieved; if not, capture the next image frame at speed S.
Step 8: if the next frame does not satisfy the in-focus condition, compute the contrast Fn of the focus region at the new motor position Cn.
Step 9: judge from the values of Fn and F whether the focus point at the current position is sharper.
Step 10: update the left and right values per the comparison, return to computing the focusing direction and speed, and repeat until the speed satisfies the end condition, realizing the contrast-focus (CDAF) operation.
Specifically, when F < Fn, compare C and Cn: if C < Cn, update the left value L to C; if C > Cn, update the right value R to Cn. When F > Fn, compare C and Cn: if C < Cn, update R to Cn; if C > Cn, update L to C. This realizes updating the left and right values. After the left-right bracket narrows, the motor's speed can be lowered to realize finer focusing, repeating until the speed satisfies the end condition, realizing CDAF.
Note that while the camera runs it may have a large-aperture mode and a small-aperture mode. In the large-aperture mode (aperture value greater than or equal to a preset threshold) the obtained PDAF information's accuracy is limited, so fast auto-focus is realized by combining PDAF with CDAF information; in the small-aperture mode (aperture value below the threshold) the PDAF accuracy is higher, so fast auto-focus can be realized with PDAF alone.
This application example effectively solves the prior-art problem that a manual lens cannot auto-focus, keeps the auto-focus operation stable, and further improves the method's practicality. A condensed code sketch of the contrast-focus loop is given below.
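The sketch below condenses steps 1-10 into a hill-climbing loop under stated assumptions: `read_contrast(pos)` and `move_to(pos)` are hypothetical camera/motor hooks (`move_to` moves the motor and returns the settled position), and the halving schedule stands in for the text's "lower the speed as the bracket narrows":

```python
def contrast_af(read_contrast, move_to, left, right, min_speed, max_speed):
    """Contrast AF over the motor range [left, right]: shrink the bracket
    around the contrast peak and slow down as it narrows, until the speed
    reaches the convergence value."""
    c = (left + right) / 2.0
    f = read_contrast(move_to(c))
    speed = max_speed
    while speed > min_speed:
        step = min(speed, (right - left) / 4.0)
        cn = c + step if (right - c) >= (c - left) else c - step
        fn = read_contrast(move_to(cn))
        if fn > f:                       # new position is sharper
            left, right = (c, right) if cn > c else (left, c)
            c, f = cn, fn
        else:                            # old position is sharper
            left, right = (left, cn) if cn > c else (cn, right)
        speed *= 0.5                     # finer focusing as the bracket shrinks
    return c                             # in-focus motor position
```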
Application example 2: this application embodiment provides a method for an automatic Hitchcock dolly-zoom effect based on camera information. Since the dolly zoom requires the camera to have a zoom lens, to realize the Hitchcock camera move, once the smart-follow target is confirmed, the framed target is followed while the shooting device moves back and forth; at fixed focal length the target's proportion in the picture changes (smaller when far, larger when near). Keeping the target's proportion in the picture consistent by controlling zoom realizes the dolly-zoom effect. Zoom may be controlled in two ways: (1) through the pan-tilt's zoom motor; (2) through the camera's interface (if the camera has a suitable one). The method may include the following steps:
Step S21: calibrate the zoom motor's motion-travel range against the camera's zoom range. Since a zoom lens or electronic zoom has a zoom range, the motor's travel over that range is calibrated in advance, obtaining the correspondence between lens travel and motor travel and preventing zooming beyond the range from damaging the lens.
Step S22: calibrate the zoom motor's motion direction. Since it depends on how the mechanism is mounted relative to the zoom lens, its relation to the zoom direction (zoom in and zoom out) is calibrated in advance; through the calibrated relation the user knows whether the displayed picture enlarges or shrinks while the motor moves.
Step S23: frame the smart-follow target and record the initially framed target's proportion in the picture. After framing, the initial proportion may be determined separately or jointly by three indicators: (1) the framed length to the picture length; (2) the framed width to the picture width; (3) the framed area to the picture area. The width ratio is relatively stable and reliable.
Step S24: control the motor from the initially framed proportion so that the target's display in the picture stays consistent. As the shooter moves relative to the subject, the proportion output by the motor or electronic zoom is computed in real time and used to keep the target's display consistent. Specifically, referring to FIG. 11, the display picture is obtained through the camera, the initial target proportion derived from it and input to a proportional-integral-derivative (PID) control unit, which generates the control signal corresponding to the motor from it; controlling the motor with that signal keeps the target's display in the picture consistent while the motor moves.
This application example effectively solves the prior-art problem that a manually controlled dolly zoom is hard to operate, guarantees the quality of the dolly-zoom move, and further improves the method's practicality.
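A minimal sketch of the PID unit of FIG. 11; the gains, sample period, and sign convention are illustrative assumptions:

```python
class RatioHoldPID:
    """PID unit keeping the framed target's proportion in the picture
    equal to the initial proportion by driving the zoom motor while the
    shooter moves."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.05, dt=1 / 30):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, ratio_now, ratio_initial):
        err = ratio_initial - ratio_now
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        # Positive output -> zoom in (subject shrank), negative -> zoom out.
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```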
Application example 3: a gimbal image stabilization (GIS) unit may be provided on the pan-tilt. For the pan-tilt and the camera it carries, to keep the shooter's hand tremor from blurring the picture, the camera usually contains an optical image stabilization (OIS) unit. However, physical size limits the range over which OIS can move: with large shake amplitudes, OIS alone cannot suppress picture shake, whereas the GIS unit's larger range of motion copes well with large attitude shake, in which case the GIS unit on the pan-tilt can suppress the picture shake alone. Yet since the camera's OIS and the pan-tilt's GIS are usually designed separately by the camera and pan-tilt manufacturers, they may interfere during anti-shake control, i.e. OIS combined with GIS performs worse than OIS or GIS alone.
Given this, this application embodiment provides a control method combining camera OIS with pan-tilt anti-shake that, on the basis that the pan-tilt can obtain the camera's internal OIS information, performs the corresponding avoidance of the anti-shake interference. The method may include:
Step 31: obtain the camera's OIS information and, through it, measure the OIS sensitivity function. When the camera has OIS, it is controlled to enable OIS, and the pan-tilt sweeps under excitation signals of set frequencies, which may be any of: 1 Hz, 2 Hz, 5 Hz, 10 Hz, or 20 Hz, etc.; the camera's captured images under the different excitations are obtained, and from them the camera's OIS sensitivity function under the different excitation signals is known.
Step 32: obtain the pan-tilt's current excitation signal.
Step 33: judge whether the current excitation has sufficient suppression at the set frequency corresponding to the OIS sensitivity function.
Step 34: if the current excitation signal lacks sufficient suppression capability at that frequency, the camera's OIS alone cannot sufficiently suppress the shake of the pan-tilt or the image acquisition device; stabilization is then performed through the GIS on the pan-tilt, or through the camera's OIS combined with the pan-tilt's GIS.
Step 35: if it has sufficient suppression capability, the camera's OIS alone sufficiently reduces the effect of shake on the captured images; the GIS on the pan-tilt may then be turned off, or the pan-tilt's excitation signal suppressed to obtain a suppressed signal with which the pan-tilt is controlled, anti-shake being done by the camera's OIS alone.
Note that the camera's OIS can also filter out the user's shake. In many handheld-pan-tilt scenarios the user's walking cadence may affect the picture's stable display, and when the pan-tilt is a three-axis one it can only stabilize attitude, not filter disturbance in the translational direction. The pan-tilt's IMU then identifies the body's step frequency and feeds the identified information back to the camera, which sets the signal filter's parameters based on it; after configuration, the camera's signal filter removes the translational-direction signal, reducing or eliminating the pan-tilt shake caused by the user's steps.
The technical solution of this application embodiment can solve the anti-shake interference problem by fusing the camera's gyroscope sensor information, effectively guaranteeing the stabilization of the pan-tilt and camera, improving the display quality of the images captured through the camera, and keeping the control method practical.
Application example 4: while the pan-tilt and the camera it supports run in dim light, the camera usually balances the overall picture's exposure by automatically adjusting ISO; but past a certain ISO the picture's noise grows and quality degrades. To solve this, this application embodiment provides a method for exposure balance by automatically adjusting the aperture and external lighting. The method may include:
Step 41: obtain the actual metering value through the camera.
Step 42: determine the target exposure value corresponding to the camera.
Step 43: input the actual metering value and target exposure value to the PID unit to generate the control parameters corresponding to the fill light and the aperture motor. Specifically, as shown in FIG. 12, the PID control unit generates the control signals for the fill light and the aperture motor from the actual metering value and the target exposure value; controlling them with these signals realizes exposure balance while the camera shoots, guaranteeing the quality of image capture.
Step 44: control the fill light and the aperture motor with the parameters to realize the fill-light operation.
For a manual lens, in the camera's aperture-priority exposure mode the lens's aperture size cannot be controlled directly through the mount; the technical solution of this application controls the aperture via the external motor to adjust the exposure value, realizing a simple closed-loop PID controller. Similarly, once the aperture is fully open, external lighting can be added via the external fill light until the exposure balances.
The technical solution of this application embodiment solves, through external lighting control, the problem of non-fixed ISO in manual mode, further improving the control method's practicality and favoring market adoption.
In addition, this embodiment provides a focus control method, whose executing body may be a focus control apparatus implementable as software or a combination of software and hardware. It may be applied to a pan-tilt supporting an image acquisition device and an auxiliary device, the auxiliary device including a follow-focus motor for adjusting the device's focus ring. The method may include:
Step S1001: acquiring the focusing information determined by the image acquisition device;
Step S1002: determining, based on the focusing information, the control information corresponding to the focus motor;
Step S1003: controlling the focus motor based on the control information to realize the device's follow-focus operation.
The implementation and effect of these steps are similar to those of the methods shown in FIGS. 1-4 and 10-12 above; for parts not detailed here, refer to the related description of those embodiments.
This embodiment further provides a zoom control method, whose executing body may be a zoom control apparatus implementable as software or software plus hardware. It may be applied to a pan-tilt whose auxiliary device includes a zoom motor for adjusting the device's zoom ring. The method may include:
Step S1101: acquiring the image captured by the image acquisition device and the zoom information corresponding to it.
Step S1102: determining the zoom motor's control parameters based on the captured image and the zoom information.
Step S1103: controlling the zoom motor based on the control parameters to realize the device's zoom operation.
The implementation and effect are similar to those of the methods shown in FIGS. 1, 5, and 10-12 above; refer to their description for details.
This embodiment further provides a fill-light control method, whose executing body may be a fill-light control apparatus implementable as software or software plus hardware. It may be applied to a pan-tilt whose auxiliary device includes a fill-light device for supplementing light for the image acquisition device. The method may include:
Step S1201: acquiring the light detection information determined by the image acquisition device.
Step S1202: determining, based on the light detection information, the control parameters corresponding to the fill-light device.
Step S1203: controlling the fill-light device based on the control parameters to realize the device's fill-light operation.
The implementation and effect are similar to those of the methods shown in FIGS. 1, 6-7, and 10-12 above; refer to their description for details.
This embodiment further provides an anti-shake control method, whose executing body may be an anti-shake control apparatus implementable as software or software plus hardware. It may be applied to a pan-tilt supporting an image acquisition device; an in-body image stabilization (IBIS) unit may be provided on the pan-tilt, and the device includes an anti-shake control unit, which may include a lens optical image stabilization (OIS) unit and which can, based on the shooting parameters, apply shake compensation to at least one of the optical element and the image sensor in the device so as to adjust the captured image. The method may include:
Step S1301: acquiring the anti-shake information determined by the anti-shake control unit provided in the device, the information being determined according to the device's position change information.
Step S1302: determining the pan-tilt's current excitation signal.
Step S1303: determining the control parameters based on the current excitation signal and the anti-shake information.
Step S1304: controlling the pan-tilt based on the control parameters to realize the pan-tilt's stabilization.
In some examples the method further includes: detecting the communication connection state between the device and the pan-tilt; when they are communicatively connected, i.e. the device is applied on the pan-tilt, the IBIS unit provided on the pan-tilt is started to realize anti-shake through it; when disconnected, i.e. the device is detached from the pan-tilt, the device's lens OIS unit is started to realize anti-shake through it.
The implementation and effect of these steps are similar to those of the methods shown in FIGS. 1 and 8-12 above; for parts not detailed here, refer to the related description of those embodiments.
FIG. 13 is a schematic structural diagram of a control apparatus based on an image acquisition device according to an embodiment; referring to FIG. 13, this embodiment provides such a control apparatus, used to execute the control method shown in FIG. 1 above. Specifically, it may include:
a memory 12 for storing a computer program;
a processor 11 for running the computer program stored in the memory 12 to implement:
acquiring the shooting parameters determined by the image acquisition device, usable to adjust the images it captures;
determining control parameters based on the shooting parameters;
correspondingly controlling at least one of the pan-tilt and the auxiliary device based on the control parameters, where the pan-tilt supports the image acquisition device and/or the auxiliary device and the auxiliary device assists the shooting.
The apparatus's structure may further include a communication interface 13 for the electronic device to communicate with other devices or a communication network.
The apparatus of FIG. 13 can execute the methods of the embodiments shown in FIGS. 1-12; for parts not detailed in this embodiment, refer to the related description of those embodiments. For the execution process and technical effect of the solution, see the description in those embodiments, not repeated here.
In addition, an embodiment of the present invention provides a computer-readable storage medium storing program instructions for implementing the control method based on an image acquisition device of FIGS. 1-12 above.
FIG. 14 is a schematic structural diagram of a pan-tilt according to an embodiment; referring to FIG. 14, this embodiment provides a pan-tilt, which may include:
a pan-tilt body 21;
the control apparatus 24 based on an image acquisition device of FIG. 13 above, arranged on the pan-tilt body 21.
The implementation principle and technical effect of the pan-tilt are similar to those of the control apparatus; see the description of the embodiment of FIG. 13, not repeated here.
In some examples, referring to FIG. 14, the pan-tilt may further include:
an image acquisition device 22, arranged on the pan-tilt body 21;
an auxiliary device 23, arranged on the pan-tilt body 21, for assisting the image acquisition device 22 in shooting.
The implementation principle and technical effect are similar to those of the control apparatus; see the description of the embodiment of FIG. 13, not repeated here.
In another respect, with the rapid development of science and technology, ever more users shoot video with digital cameras and SLR cameras, and the pan-tilt has accordingly become widely used as a tool assisting video shooting. The pan-tilt is not limited to anti-shake stabilization during video shooting: more other operating methods can be developed from it, helping guarantee the user's video-shooting experience.
During video shooting, keeping the subject (target object) at a fixed position in the video composition (the composition position) is a key problem that depends on the photographer's shooting skill. With the development of artificial intelligence (AI) technology, recognizing the subject's position in the picture has become possible. In addition, besides stabilizing, the pan-tilt can control the camera to rotate; combining the two in a closed loop realizes smart following of the subject. While smart-following a target object, two points matter for following quality: first, how to obtain the subject's position information in the picture; second, how to control the pan-tilt's motion to keep the subject at the composition position, e.g. the picture center.
In the prior art, the main way to realize smart following with a third-party camera is to introduce an image processing apparatus (including the image signal processing apparatus below) into the pan-tilt; it obtains the camera's real-time image through a High Definition Multimedia Interface (HDMI) or another interface and inputs it to an AI machine-learning unit (implemented in software) to obtain the real-time position of the object to be shot by the third-party camera.
Specifically, as shown in FIG. 15, taking the camera 100 as the image acquisition device, the camera 100, as a third-party payload, can connect through an HDMI interface to an image signal processing (ISP) apparatus, which may include: an ISP module 1011, a buffer 1012, a real-time video output 1013, a format converter 1014, a machine learning model 1015, and a strategy processor 1016. The ISP module 1011 analyzes and processes received images and transmits the processed image data to the buffer 1012; the buffered data can be output in real time through the real-time video output 1013 and can also be format-converted by the format converter 1014 so that the converted data can be input to the machine learning model 1015 for machine learning, recognizing the to-be-followed subject set by the user. Once it is recognized, the strategy processor 1016 determines the pan-tilt's control parameters per the strategy, and the pan-tilt controller 102 controls the pan-tilt based on them so that the pan-tilt can smart-follow the subject.
However, this solution has the following problem: the video signal transmitted over the HDMI interface has a large delay, directly making the following effect very poor; moreover, the delays of different cameras' HDMI interfaces differ, making normalization in the algorithm difficult.
Furthermore, with the rise of short video, camera manufacturers have successively launched their own automatic following algorithms. The manufacturers' following algorithms are for following the focus point, but the basic principle is obtaining the subject's position in the picture; that is, the manufacturer can compute on the real-time image and obtain the subject's real-time position, without the delay caused by transmitting video over HDMI so that the pan-tilt-side image signal processing apparatus processes the images to obtain the subject's real-time position; and the coordinate information of the followed object computed inside the camera is highly timely.
When following with a third-party camera, the more precise the recognized target coordinate position the better; however, this also easily makes the coordinates of the subject the third-party camera follows jump: e.g. switching from head-and-shoulders following to face following, or from face to eye following, the person is clearly the same, but the coordinates used for following change, causing the jump-point problem.
In summary, existing smart-following methods have the following defects:
(1) When smart-following with a pan-tilt communicatively connected to the camera, although the image processing module can compute the followed object's real-time position, it adds extra development cost for the AI machine-learning algorithm and hardware design cost.
(2) When images are transmitted over HDMI to the pan-tilt-side image processing module, its series of processing steps makes the delay with which the pan-tilt obtains the followed object's real-time position large; the larger the delay, the worse the following; when the delay reaches a bottleneck, target following even becomes impossible.
(3) The HDMI delays of different cameras differ considerably, making normalization in the algorithm hard.
(4) When smart-following with a third-party camera, the coordinates of the following frame bounding the followed object are prone to jumps, which existing following algorithms cannot solve.
To solve at least one of these problems, this embodiment provides a pan-tilt control method and apparatus, a movable platform, and a storage medium. The control method acquires the target object's collection position in the captured image, determines the control parameters for following based on it, and controls the pan-tilt according to them, realizing following of the target object. Since the collection position is determined by the image acquisition device and the pan-tilt can obtain it directly from the device, the delay with which the pan-tilt obtains the collection position is effectively reduced, solving the poor following caused by large delay, further guaranteeing the quality of the pan-tilt's following, and effectively improving the method's stability and reliability.
Some implementations of the pan-tilt control method and apparatus, movable platform, and storage medium are described below in detail with reference to the drawings; where the embodiments do not conflict, the embodiments and their features may be combined.
FIG. 16 is a schematic flowchart of a pan-tilt control method according to an embodiment; FIG. 17 is a schematic structural diagram of the communication connection between a pan-tilt and an image acquisition device. Referring to FIGS. 16-17, this embodiment provides a pan-tilt control method in which the pan-tilt is communicatively connected with an image acquisition device, i.e. an apparatus with image acquisition and image processing capability such as a still camera, a video camera, or another such apparatus. In some examples a universal serial bus (USB) interface may be provided on the pan-tilt for wired communication with the device, i.e. the pan-tilt communicates with it through the USB interface. In application, when the followed object's position data is transmitted over USB, no extra image signal processing apparatus is needed at the pan-tilt side, so the delay corresponding to the position data transmitted between pan-tilt and device is short. For example: when the position data is transmitted over an HDMI interface, its delay is t1; when it is transmitted over USB, its delay is t2, where t2 < t1 or t2 << t1.
It can be understood that the communication connection between pan-tilt and device is not limited to the above implementation; those skilled in the art may set it per application needs and scenarios, e.g. wireless communication, or any connection that guarantees a short delay when the pan-tilt and device transmit data, not detailed here.
It should be noted that in ways other than HDMI of realizing the communication connection between pan-tilt and device, besides the followed object's position data, other data such as control instructions and image data can also be transmitted, designed as needed for adaptation; no specific limitation is made here.
In addition, the executing body of the method may be a pan-tilt control apparatus, which may be implemented as software or a combination of software and hardware; when it executes the method, the problem of poor following caused by the long delay of transmitting data over the interface can be solved, guaranteeing the quality of following the target object. Specifically, the method may include:
Step S1601: acquiring the collection position of the target object in the captured image, the position being determined by the image acquisition device, which is a camera with a manual lens or an automatic lens and is communicatively connected with the pan-tilt.
Step S1602: determining, based on the collection position, the control parameters for following the target object.
Step S1603: controlling the pan-tilt according to the control parameters so as to follow the target object.
The implementation of each step is elaborated below.
Step S1601: the image acquisition device may be arranged on the pan-tilt for image capture; after obtaining the captured image, the device analyzes it to determine the target object's collection position, which may include: the target's key-point position in the captured image, or the region it covers in the image, etc.
It can be understood that the collection position may be obtained by the device directly sampling the image's pixels, or by processing the sampling results: e.g. when the target object has a following frame, after obtaining the frame's vertex position in the image and the frame's size, the frame's center position can be determined from them, and that center is taken as the target's sampling position in the image.
It should be noted that when the target has a following frame, the collection position may be determined or characterized by information about the frame's position in the captured image. The frame may be obtained when the user performs a touch operation on the device's display, e.g. the user selects an object on the display, generating a frame enclosing at least part of that object.
After the target's collection position in the captured image is obtained, it can be transmitted actively or passively over the USB interface to the pan-tilt, so the pan-tilt acquires it.
Step S1602: after the collection position is obtained it is analyzed to determine the control parameters for following, which may include at least one of: attitude information, angular velocity information, acceleration information, control bandwidth, etc.
Since the collection position is determined by the device and transmitting it to the pan-tilt also takes time, the pan-tilt's direct acquisition of it from the device involves some delay; therefore, when determining the control parameters, the collection position is analyzed in combination with the corresponding delay so the parameters can be determined accurately. In some examples, determining the parameters based on the collection position may include: computing the current position prediction corresponding to the collection position; and determining the following control parameters based on that prediction.
Specifically, as shown in FIG. 18, after the device captures an image it analyzes it to obtain the target's collection position and can transmit it to the pan-tilt. Since both the device's obtaining of the position and its transmission to the pan-tilt take time, the pan-tilt's direct acquisition involves delay. To reduce the delay's effect on smart following, the current position prediction corresponding to the collection position is computed based on that delay. It can be understood that the prediction and the collection position are different positions.
After the prediction is obtained it is analyzed to determine the following control parameters; since the prediction is determined with the transmission delay of the collection position taken into account, the parameters are determined accurately and reliably.
Step S1603: after the control parameters are obtained, the pan-tilt is controlled based on them, realizing following of the target object.
As the above analysis shows, in the related art the collection position is computed at the pan-tilt side, which needs an extra image-transmission module and image signal processing apparatus to obtain the device's captured image and analyze it for following. In the present application, however, the device's image signal processing capability is reused, so the pan-tilt side can follow the target without an extra signal processing apparatus and even without an image-transmission module; meanwhile, during data transmission, transmitting captured images needs far more bandwidth than transmitting collection positions, so the reduced transmitted data can reduce the following latency to some extent, effectively improving the efficiency and precision of following the target.
It can be understood that in the scenario of acquiring the collection position directly from the device, the pan-tilt side may still be provided with an image-transmission module so the captured image can be displayed at the pan-tilt side, and may also be provided with an image signal processing apparatus to adapt to different image acquisition devices, e.g. those from which the collection position cannot be obtained.
When the image acquisition device has an image signal processing apparatus, its processing capability may exceed that of the pan-tilt side's; in some embodiments the recognition capability of the device's machine learning model exceeds that of the pan-tilt's. On this basis, with the data transmitted from device to pan-tilt reduced and the device's recognition capability exceeding the pan-tilt's, if the pan-tilt needs to follow, its motors must rotate accordingly to adjust the device's pose to realize following; then the data transmission time 1 from the device to the pan-tilt's controller shortens accordingly, and since there is no need to wait for pan-tilt-side recognition, the data transmission time 2 from the pan-tilt's controller to its motors also shortens, reducing data transmission time at both nodes and reducing the latency of the pan-tilt's following function.
In some embodiments, when following the target object, the pan-tilt may have different motion states, e.g. uniform velocity, uniform acceleration, or uniform deceleration. To guarantee the quality and efficiency of following, pan-tilts in different motion states may have different control strategies. Specifically, controlling the pan-tilt per the control parameters may include: acquiring the pan-tilt's motion state corresponding to the target object; and controlling the pan-tilt based on its motion state and the control parameters.
In some embodiments the pan-tilt's motion state may be determined from the target's: a uniformly moving target may give a uniformly moving pan-tilt; a uniformly accelerating target, a uniformly accelerating pan-tilt; a uniformly decelerating target, a uniformly decelerating pan-tilt. In some embodiments the state is related to the following duration, e.g. uniform acceleration at the start of following, and to the following status, e.g. uniform acceleration when the followed target is lost.
The way of acquiring the pan-tilt's motion state corresponding to the target is not limited and may be set per application and design needs: e.g. multiple captured frames are obtained through the device and analyzed to determine the pan-tilt's moving speed, from which its motion state (uniform acceleration, uniform deceleration, uniform velocity, etc.) is determined; or an inertial measurement unit may be provided on the pan-tilt to acquire the state. After the state is obtained, the pan-tilt is controlled based on it and the control parameters to follow the target, effectively improving the quality and efficiency of following.
The pan-tilt control method of this embodiment acquires the target's collection position in the captured image, determines the following control parameters from it, and controls the pan-tilt accordingly, realizing following of the target. Since the collection position is determined by the device and the pan-tilt can obtain it directly, the corresponding delay is effectively reduced, solving the poor following caused by large delay, further guaranteeing the quality of following, and effectively improving the method's stability and reliability.
FIG. 19 is a schematic flowchart of acquiring the target object's collection position in the captured image. Building on the above embodiments and referring to FIG. 19, this embodiment provides an implementation of acquiring the collection position; specifically, it may include:
Step S1901: acquiring, through the image acquisition device, the target focus position corresponding to the target object.
Step S1902: determining the target focus position as the target object's collection position in the captured image.
In the prior art, when following with a pan-tilt or unmanned aerial vehicle fitted with an image acquisition device, the device's focusing operation and the pan-tilt's or UAV's following operation are two completely independent operations; then, when the device's focus object changes, the pan-tilt's or UAV's followed object cannot be adjusted in time based on the change, which cannot guarantee the quality of following.
It can be understood that when following with a UAV, the device may be mounted on the UAV through a pan-tilt, and the control parameters of the UAV and/or the pan-tilt can be adjusted to realize following.
Therefore, to avoid the above problem, this embodiment provides a technical solution in which the device's focusing and the pan-tilt's or UAV's following are associated operations. Specifically, in the field of imaging, when the target's collection position is acquired through the device but the device's focus point for the target differs from that position, following the target based on the collection position easily leaves the target obtained through the device defocused. Therefore, to avoid the followed target becoming defocused, when acquiring the target's collection position, the target focus position corresponding to the target is acquired through the device; it can be understood that the target focus position may be a focus position selected by the user or one recognized automatically.
After the target focus position is acquired, it can be directly determined as the target's collection position in the captured image, making the focus position corresponding to the target and the target's collection position coincide and effectively avoiding the target being defocused.
In other examples, after the target focus position is obtained, determining it as the collection position may include: acquiring the preset region range corresponding to the target focus position and directly determining that range as the target's collection position. The range may be at least part of the region the target covers in the image, including the focus position; the focus position and the collection position then coincide substantially, likewise avoiding defocus.
In this embodiment, acquiring the target focus position through the device and determining it as the collection position effectively makes the focus position and the collection position substantially coincide, effectively avoiding the target being defocused and further improving the quality of following.
As the above analysis shows, in the related art the focusing and following of the target object are two independent operations: at the device side the captured image is analyzed to obtain the target focus position for focusing, while at the pan-tilt side it is analyzed to obtain the target's real-time position deviation for following. That is, the device's controller and the pan-tilt-side controller each perform image analysis, which not only increases the consumption of computing resources but may also lead to inconsistent analyses and hence the jump-point problem. In the present application, since the target focus position is used to realize following, the device's image-analysis capability is reused and the jump-point problem is solved.
FIG. 20 is a schematic flowchart of acquiring, through the image acquisition device, the target focus position corresponding to the target object. Building on the above embodiments and referring to FIG. 20, this embodiment provides an implementation:
Step S2001: acquiring, through the device, the historical focus position and the current focus position corresponding to the target object.
Step S2002: determining, based on both, the target focus position corresponding to the target object.
When capturing images of the target through the device, the target may be moving, e.g. uniformly, with uniform acceleration, or with uniform deceleration, and the target's different motion states easily change the focus position during capture. To guarantee reliable acquisition of the target focus position, the historical and current focus positions corresponding to the target are acquired through the device; it can be understood that the historical focus position is that of a historical image frame obtained through the device, and the current focus position is that of the current frame.
After both are obtained they are analyzed to determine the target focus position. In some examples this includes: determining the historical object part corresponding to the historical focus position and the current object part corresponding to the current focus position; and determining the target focus position from the historical and current parts.
When multiple frames corresponding to the target are obtained through the device, the multiple focus positions corresponding to them (historical and current) can be determined; they may be the same or different. After they are obtained, the historical image corresponding to the historical focus position and the current image corresponding to the current focus position are determined; the historical image is then analyzed based on the historical focus position to determine the historical object part. Specifically, a preset image recognition algorithm may analyze the historical image to determine the target object's contour and type in it, and then the correspondence between the historical focus position and that contour and type is determined, yielding the historical object part. Similarly, the current image is analyzed based on the current focus position to determine the current object part. After both parts are obtained they are analyzed to determine the target focus position.
Specifically, after the device captures an image, an image recognition algorithm or pre-trained machine learning model may analyze and recognize it to identify at least one object in it and each object's region. After the multiple focus positions are obtained, they are compared with the objects' regions: when several focus positions are part of one object's region, those positions correspond to the same object; when they are parts of different objects' regions, they correspond to different objects. When several positions correspond to the same object, the distance between any two of them is determined: when it is at most a preset threshold, the two positions correspond to the same part of the object; when it exceeds the threshold, to different parts.
Of course, other ways of determining whether several focus positions correspond to the same target object or to the same part of it are equally possible and are not detailed here.
After the historical and current focus positions are obtained, whether they correspond to the same target object can be determined and, if so, whether they correspond to the same part of it. After this information is determined, it can be transmitted to the pan-tilt so the pan-tilt performs following control based on it, guaranteeing the quality of smart following.
It can be understood that the focus position, the focus object, and the focused part of the object may have corresponding mapping relations and each have attribute information with corresponding identifiers; the mappings and attributes may be sent by the device to the pan-tilt so that the pan-tilt can make the corresponding judgments and adopt the corresponding execution strategy.
In some examples, determining the target focus position from the historical and current parts may include: when they are different parts of the same target object, acquiring the relative position information between them; and adjusting the current focus position based on it to obtain the target focus position.
After the two parts are obtained they are analyzed; when they are different parts of the same target, the historical and current images are following different parts of the same object, e.g. the historical frame's part is person A's eye and the current frame's is A's shoulder. Then, to avoid shake caused by the focus-position change, the relative position between the parts, e.g. between A's eye and A's shoulder, is acquired; after it is obtained, the current focus position is adjusted based on it to obtain the target focus position.
Specifically, this adjustment may include: when the relative position information is greater than or equal to a preset threshold, adjusting the current focus position based on it to obtain the target focus position; when it is below the threshold, determining the current focus position as the target focus position.
After the relative position is obtained it is compared with the threshold: at or above it, the device is focusing on different parts of the same target at different moments, so the current focus position is adjusted based on the relative position to obtain the target focus position; below it, the focused part of the target is essentially unchanged at different moments, so the current focus position is determined as the target focus position.
For example, when capturing a person through the device, at least two frames can be obtained; from them the historical and current focus positions are determined, and from those the corresponding historical and current object parts.
The parts are then analyzed to determine the target focus position. Referring to FIG. 21, when the historical part is part 1 and the current part is part 2, their relative position d1 is acquired and compared with the preset threshold; when d1 is below the threshold, the focus position changed little while focusing on the person, so the current focus position is determined as the target focus position.
In other examples, referring to FIG. 22, when the historical part is part 3 and the current part is part 4, their relative position d2 is acquired and compared with the threshold; when d2 exceeds it, the focus position changed greatly, so the current focus position is adjusted based on the relative position to obtain the target focus position. That is, when the followed target is unchanged and only the focus position changed, the current focus position is adjusted automatically based on the relative positions of the target's parts, effectively avoiding picture jumps.
In further examples, after the parts are obtained, determining the target focus position from them may include: when they are different parts of the same target, updating the composition position based on the current focus position to obtain a first updated composition position; and following the target based on it. After the parts are obtained, whether they are different parts of the same target is identified; if so, the composition position is updated based on the current focus position to obtain the first updated composition position. For example, when the preset composition target position is the picture center, to avoid image shake caused by the change of target part, the preset composition position is updated based on the current focus position, i.e. the current focus position may be determined as the first updated composition position. After it is obtained, the target is followed based on it, guaranteeing the quality and efficiency of following.
In this embodiment, acquiring the historical and current focus positions through the device and determining the target focus position from them effectively guarantees reliable determination of the target focus position, making it convenient to follow the target based on it and further improving the method's practicality.
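A minimal sketch of the part-switch handling around FIGS. 21-22 under stated assumptions: the `FocusedPart` type, the 1-D coordinates, and the compensation direction (subtracting the parts' offset) are illustrative readings of "adjust the current focus position based on the relative position information", not the patent's exact rule:

```python
from dataclasses import dataclass

@dataclass
class FocusedPart:
    object_id: int
    center: float   # 1-D coordinate of the part, for illustration

def target_focus_position(prev: FocusedPart, cur: FocusedPart,
                          cur_focus_pos: float, jump_threshold: float) -> float:
    """Smooth focus jumps within one subject: same object, large part
    offset -> compensate by the parts' relative position; small offset
    -> keep the current focus position."""
    if prev.object_id != cur.object_id:
        return cur_focus_pos                 # target changed: follow the new object
    offset = cur.center - prev.center        # relative position of the two parts
    if abs(offset) >= jump_threshold:
        return cur_focus_pos - offset        # compensate the part switch
    return cur_focus_pos                     # same part: use the current position
```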
FIG. 23 is a schematic flowchart of another pan-tilt control method. Building on the above embodiments and referring to FIG. 23, the method of this embodiment may further include:
Step S2301: detecting whether the followed target object has changed.
Step S2302: when the target changes from a first object to a second object, acquiring the second object's collection position in the captured image.
Step S2303: updating the composition position based on the second object's collection position to obtain a second updated composition position corresponding to the second object, so as to follow the second object based on it.
When following the target through the image acquisition device, to avoid pan-tilt shake easily caused by a change of the followed target, whether it has changed is detected in real time. Specifically, the historical and current focus positions are acquired, the historical target object corresponding to the historical position and the current target object corresponding to the current position are recognized, and whether they have changed is identified.
When the historical and current target objects are the same, the followed target is determined not to have changed, as in FIG. 22; when they are different, as in FIG. 24, i.e. the target changed from the first object to the second, the followed target is determined to have changed. Then, to guarantee the quality of following the second object, its collection position in the captured image is acquired and the composition position is updated based on it to obtain the second updated composition position; specifically, the second object's collection position may be determined as that updated composition position, and the second object is then followed based on it. This effectively avoids image shake caused by the target change and further improves the quality and efficiency of pan-tilt control.
FIG. 25 is a schematic flowchart of computing the current position prediction corresponding to the collection position. Building on the above embodiments and referring to FIG. 25, this embodiment provides an implementation; specifically, computing the prediction may include:
Step S2501: determining the delay time corresponding to the collection position, which indicates the time the pan-tilt needs to obtain the collection position via the image acquisition device.
When the pan-tilt obtains the collection position directly from the device, data transmission involves some delay. Therefore, to acquire the current position prediction reliably, the delay time corresponding to the collection position is determined. In some examples this includes: acquiring the exposure time corresponding to the captured image; when the pan-tilt obtains the collection position, determining the corresponding current receive time; and determining the interval between the current receive time and the exposure time as the delay.
Specifically, during capture the exposure time t_n corresponding to the captured image can be recorded and stored in a preset area, so the pan-tilt can obtain it through the device. In addition, when the device transmits the target's collection position to the pan-tilt and the pan-tilt receives it, the current receive time t_{n+1} corresponding to it is determined. With both obtained, the interval between them is determined as the delay corresponding to the collection position, i.e. Δt = t_{n+1} − t_n.
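The delay computation is a single subtraction; the sketch below states it explicitly, with the function name as the only assumption:

```python
def link_delay(exposure_time_s: float, receive_time_s: float) -> float:
    """Delay of the collection position over the link: the interval
    between the frame's exposure timestamp t_n and the moment t_{n+1}
    the pan-tilt receives the position, i.e. dt = t_{n+1} - t_n."""
    return receive_time_s - exposure_time_s
```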
Step S2502: determining the current position prediction corresponding to the collection position based on the delay time and the collection position.
After the delay and collection position are obtained they are analyzed to determine the prediction. In some examples this includes: when the pan-tilt obtains the target's previous collection position in the previous captured image, determining the previous receive time corresponding to it; determining the previous position prediction corresponding to the previous collection position; and computing the current prediction from the collection position, the exposure time, the delay time, the previous receive time, and the previous prediction.
When the device captures multiple frames, the target's multiple collection positions in them can be determined; when they are transmitted to the pan-tilt, the pan-tilt obtains them, including the previous collection position and the current one. When the pan-tilt obtains the previous position, it determines the corresponding previous receive time and the corresponding previous position prediction; the way of determining the previous prediction is analogous to determining the current prediction described in this embodiment and is not repeated.
After the collection position, exposure time, delay time, previous receive time, and previous prediction are obtained, they are analyzed to compute the current prediction. In some examples this includes: determining the position adjustment corresponding to the collection position based on those quantities; and determining the sum of the adjustment and the collection position as the current prediction. That is, with position adjustment Δx, the current position prediction is

x̂_n = x_n + Δx,

which effectively improves the reliability of determining the prediction.
In this embodiment, determining the delay time corresponding to the collection position and then the current prediction from the delay and the position makes the prediction accurate and reliable, since it accounts for the delay; moreover, when different image acquisition devices and/or different transmission interfaces transmit the collection position, the delay corresponding to each can be obtained, effectively solving the prior-art problem that the delays of different devices and/or interfaces differ in length, realizing normalization in the algorithm and further improving the quality and efficiency of following the target.
图26为本发明实施例提供的基于采集位置、曝光时间、延时时间、前一接收时间和前一位置预测值,确定与采集位置相对应的位置调整值的流程示意图;在上述实施例的基础上,继续参考附图26所示,在计算与采集位置相对应的当前位置预测值的过程中,为了提高对当前位置预测值进行计算的准确度,本实施例提供了一种确定与采集位置相对应的位置调整值的实现方式,具体的,本实施例中的基于采集位置、曝光时间、延时时间、前一接收时间和前一位置预测值,确定与采集位置相对应的位置调整值可以包括:
步骤S2601:基于采集位置、前一位置预测值、曝光时间和前一接收时间,确定与目标对象相对应的移动速度。
其中,在获取到采集位置、前一位置预测值、曝光时间和前一接收时间之后,则可以对采集位置、前一位置预测值、曝光时间和前一接收时间进行分析处理,以确定与目标对象相对应的移动速度。具体的,基于采集位置、前一位置预测值、曝光时间和前一接收时间,确定与目标对象相对应的移动速度可以包括:获取采集位置与前一位置预测值之间的位置差值以及曝光时间与前一接收时间之间的时间差值;将位置差值与时间差值之间的比值,确定为与目标对象相对应的移动速度。
以采集位置为 x_n、前一位置预测值为 x̂_{n−1}、曝光时间为 t_n、前一接收时间为 t_{n−1} 为例,在获取到上述各项信息之后,则可以获取采集位置与前一位置预测值之间的位置差值 (x_n − x̂_{n−1}),以及曝光时间与前一接收时间之间的时间差值 (t_n − t_{n−1}),进而可以将位置差值与时间差值之间的比值确定为与目标对象相对应的移动速度,即 v = (x_n − x̂_{n−1}) / (t_n − t_{n−1})。
步骤S2602:将移动速度与延时时间之间的乘积值,确定为与采集位置相对应的位置调整值。
在获取到移动速度和延时时间之后,则可以获取移动速度与延时时间之间的乘积值,并将上述乘积值确定为与采集位置相对应的位置调整值,即位置调整值 Δx = v · Δt = (x_n − x̂_{n−1}) / (t_n − t_{n−1}) · Δt。
本实施例中,基于采集位置、前一位置预测值、曝光时间和前一接收时间,确定与目标对象相对应的移动速度,而后将移动速度与延时时间之间的乘积值确定为与采集位置相对应的位置调整值,从而有效地保证了对位置调整值进行确定的准确可靠性,进一步提高了基于位置调整值计算与采集位置相对应的当前位置预测值的精确程度。
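为便于理解上述计算流程,下面给出一个示意性的Python代码草图,将延时时间、移动速度与位置调整值串联为当前位置预测值的计算(函数名与变量命名均为本示例的假设,并非本申请方法的限定实现):

```python
# 示意性代码:按上文公式计算当前位置预测值 x̂_n = x_n + v·Δt。
# x_n:采集位置;x_hat_prev:前一位置预测值 x̂_{n-1};
# t_n:曝光时间;t_prev:前一接收时间 t_{n-1};dt:延时时间 Δt。
# 本示例假设 t_n > t_prev,即两帧时间戳严格递增。

def predict_position(x_n: float, x_hat_prev: float,
                     t_n: float, t_prev: float, dt: float) -> float:
    v = (x_n - x_hat_prev) / (t_n - t_prev)  # 移动速度:位置差值/时间差值
    delta_x = v * dt                         # 位置调整值:移动速度×延时时间
    return x_n + delta_x                     # 当前位置预测值:采集位置+位置调整值
```

对图像坐标的 x、y 两个方向分别调用上述函数,即可得到二维的当前位置预测值。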
图27为本发明实施例提供的基于当前位置预测值,确定用于对目标对象进行跟随操作的控制参数的流程示意图一;在上述实施例的基础上,继续参考附图27所示,本实施例提供了一种确定用于对目标对象进行跟随操作的控制参数的实现方式,具体的,本实施例中的基于当前位置预测值,确定用于对目标对象进行跟随操作的控制参数可以包括:
步骤S2701:确定当前位置预测值与构图目标位置之间的位置偏差。
步骤S2702:基于位置偏差,确定用于对目标对象进行跟随操作的控制参数。
其中,在对目标对象进行跟随操作时,预先配置有构图位置,该构图位置即为在对目标对象进行跟随操作的过程中,期望目标对象持续位于图像中的位置,一般情况下,构图位置可以是指图像的中心位置,即使得目标对象持续位于图像的中心位置,这样可以保证对目标对象进行跟随操作的质量和效果。
在获取到当前位置预测值之后,则可以确定当前位置预测值与构图位置之间的位置偏差,而后对位置偏差进行分析处理,以确定用于对目标对象进行跟随操作的控制参数。在一些实例中,基于位置偏差,确定用于对目标对象进行跟随操作的控制参数可以包括:获取与采集图像相对应的画面视场角;基于画面视场角和位置偏差,确定用于对目标对象进行跟随操作的控制参数。
具体的,通过图像采集装置获取与采集图像相对应的画面视场角可以包括:通过图像采集装置获得与采集图像相对应的焦距信息;根据焦距信息确定与采集图像相对应的画面视场角。在获取到与采集图像相对应的画面视场角之后,则可以对画面视场角和位置偏差进行分析处理,以确定用于对目标对象进行跟随操作的控制参数。
在一些实例中,控制参数的大小与画面视场角的大小呈负相关,即在画面视场角增大时,位于图像中的目标对象的尺寸变小,此时控制参数(例如:云台的转动速度)可以随着画面视场角的增加而减小。在画面视场角减小时,位于图像中的目标对象的尺寸变大,此时控制参数(例如:云台的转动速度)可以随着画面视场角的减小而增大。
在另一些实例中,基于位置偏差,确定用于对目标对象进行跟随操作的控制参数可以包括:通过设置于云台上的惯性测量单元IMU获取与采集位置相对应的云台姿态;基于云台姿态和画面视场角将位置偏差转换至大地坐标系下,获得用于对目标对象进行跟随操作的控制参数,这样同样实现了对控制参数进行确定的准确可靠性。
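在一种可能的实现中,可以先将归一化的位置偏差按画面视场角换算为相机坐标系下的偏差角度,再利用IMU给出的云台姿态将其旋转至大地坐标系。下面给出一个示意性的Python代码草图(线性的视场角换算方式与旋转矩阵输入均为本示例的假设):

```python
import numpy as np

# 示意性代码:位置偏差 -> 相机坐标系偏差角度 -> 大地坐标系(NED)偏差角度。
# e_x/e_y:归一化位置偏差;fov_x/fov_y:横向/纵向画面视场角(弧度);
# R_b2n:由IMU测得的云台姿态所对应的机体系到NED系旋转矩阵。

def deviation_to_ned(e_x: float, e_y: float,
                     fov_x: float, fov_y: float,
                     R_b2n: np.ndarray) -> np.ndarray:
    E_b = np.array([0.0,          # roll 方向不产生平面偏差
                    e_y * fov_y,  # pitch 方向偏差角度
                    e_x * fov_x]) # yaw 方向偏差角度
    return R_b2n @ E_b            # 大地坐标系下的偏差角度,可作为跟随控制量
```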
本实施例中,通过确定当前位置预测值与构图位置之间的位置偏差,而后基于位置偏差确定用于对目标对象进行跟随操作的控制参数,这样不仅有效地保证了对控制参数进行确定的准确可靠性,进一步也提高了该方法的实用性。
图28为本发明实施例提供的基于当前位置预测值,确定用于对目标对象进行跟随操作的控制参数的流程示意图二;在上述实施例的基础上,继续参考附图28所示,本实施例提供了另一种确定用于对目标对象进行跟随操作的控制参数的实现方式,具体的,本实施例中的基于当前位置预测值,确定用于对目标对象进行跟随操作的控制参数可以包括:
步骤S2801:获取与云台相对应的跟随模式,跟随模式包括以下任意之一:单轴跟随模式、双轴跟随模式、全跟随模式。
步骤S2802:基于当前位置预测值和跟随模式,确定用于对目标对象进行跟随操作的控制参数。
在云台针对目标对象进行跟随操作时,可以对应有不同的跟随模式。具体的,与云台相对应的跟随模式可以包括以下之一:单轴跟随模式、双轴跟随模式、全跟随模式。可以理解的是,本领域技术人员可以基于不同的应用场景和应用需求对云台的跟随模式进行调整,在此不再赘述。
其中,在对云台进行控制时,位于不同跟随模式下的云台可以对应有不同的控制参数,例如:当云台的跟随模式为单轴跟随模式时,控制参数可以与云台的单个轴相对应,例如:可以基于目标姿态控制yaw轴进行运动。当云台的跟随模式为双轴跟随模式时,控制参数可以与云台的两个轴相对应,例如:可以基于目标姿态控制yaw轴和pitch轴进行运动。当云台的跟随模式为全跟随模式时,控制参数可以与云台的三个轴相对应,例如:可以基于目标姿态控制yaw轴、pitch轴和roll轴进行运动。
基于上述陈述内容可知,由于云台可以对应有不同的跟随模式,而不同的跟随模式可以对应有不同的控制参数,因此,为了提高对控制参数进行确定的准确可靠性,在获取到跟随模式之后,则可以对当前位置预测值和跟随模式进行分析处理,以确定用于对目标对象进行跟随操作的控制参数。在一些实例中,基于当前位置预测值和跟随模式,确定用于对目标对象进行跟随操作的控制参数可以包括:基于当前位置预测值,确定用于对目标对象进行跟随操作的备选控制参数;在备选控制参数中,确定与跟随模式相对应的目标控制参数。
其中,在获取到当前位置预测值之后,则可以基于当前位置预测值与控制参数之间的对应关系来确定用于对目标对象进行跟随操作的备选控制参数,可以理解的是,备选控制参数的数量可以为多个,例如,在云台为三轴云台时,备选控制参数可以包括与yaw轴、pitch轴和roll轴相对应的控制参数。
在获取到备选控制参数之后,则可以在备选控制参数中确定与跟随模式相对应的目标控制参数,其中,目标控制参数可以为备选控制参数中的至少一部分。具体的,在备选控制参数中,确定与跟随模式相对应的目标控制参数可以包括:在跟随模式为单轴跟随模式时,在备选控制参数中可以确定与单轴跟随模式相对应的单轴的控制参数,并将其他备选控制参数置零;在跟随模式为双轴跟随模式时,在备选控制参数中可以确定与双轴跟随模式相对应的双轴的控制参数,并将其他备选控制参数置零;在跟随模式为全跟随模式时,将备选控制参数确定为与全跟随模式相对应的三轴的控制参数。
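上述按跟随模式保留对应轴、其余置零的选择逻辑,可以用如下示意性的Python代码草图表达(模式名称与参数结构均为本示例的假设):

```python
# 示意性代码:在 yaw/pitch/roll 三轴备选控制参数中,按跟随模式保留对应轴、其余置零。

def select_by_follow_mode(candidate: dict, mode: str) -> dict:
    axes = {"single": ("yaw",),                        # 单轴跟随模式
            "dual":   ("yaw", "pitch"),                # 双轴跟随模式
            "full":   ("yaw", "pitch", "roll")}[mode]  # 全跟随模式
    return {axis: (value if axis in axes else 0.0)
            for axis, value in candidate.items()}

# 例如:select_by_follow_mode({"yaw": 0.2, "pitch": 0.1, "roll": 0.05}, "single")
# 返回 {"yaw": 0.2, "pitch": 0.0, "roll": 0.0}。
```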
本实施例中,通过获取与云台相对应的跟随模式,而后基于当前位置预测值和跟随模式确定用于对目标对象进行跟随操作的控制参数,这样不仅实现了对与不同跟随模式的云台相对应的控制参数进行确定的准确可靠性,并且有效地满足了各个应用场景的需求,进一步提高了该方法使用的灵活可靠性。
图29为本发明实施例提供的基于云台的运动状态和控制参数对云台进行控制的流程示意图;在上述实施例的基础上,继续参考附图29所示,本实施例提供了一种对云台进行控制的实现方式,具体的,本实施例中的基于云台的运动状态和控制参数对云台进行控制可以包括:
步骤S2901:获取用于对目标对象进行跟随操作所对应的时长信息。
其中,云台的控制装置上可以设置有计时器,该计时器可以用于对目标对象进行跟随操作所对应的时长信息进行计时操作,因此,通过计时器可以获取到用于对目标对象进行跟随操作所对应的时长信息。
步骤S2902:在时长信息小于第一时间阈值时,则基于云台的运动状态对控制参数进行更新,获得更新后的控制参数,并基于更新后的控制参数对云台进行控制。
其中,在云台进行运动的过程中,可以对应有不同的云台的运动状态,而不同的云台的运动状态可以对应有不同的控制参数,因此在获取到时长信息之后,则可以将时长信息与预设的第一时间阈值进行分析比较,在时长信息小于第一时间阈值时,则可以基于云台的运动状态对控制参数进行更新,从而可以获得更新后的控制参数,并可以基于更新后的控制参数对云台进行控制。
在一些实例中,基于云台的运动状态对控制参数进行更新,获得更新后的控制参数可以包括:基于云台的运动状态,确定与控制参数相对应的更新系数,其中,更新系数小于1;将更新系数与控制参数的乘积值,确定为更新后的控制参数。
具体的,基于云台的运动状态,确定与控制参数相对应的更新系数可以包括:在云台的运动状态为第一特定运动状态(意味着开始对目标对象进行跟随),例如匀加速运动时,基于所述时长信息与所述第一时间阈值之间的比值确定与所述控制参数相对应的更新系数,此时的更新系数小于1。而后可以将更新系数与控制参数的乘积值确定为更新后的控制参数,即在时长信息 t < 第一时间阈值 T 时,则可以基于以下公式确定更新后的控制参数:
Ê_n = E_n · (t/T)
其中,E_n 为控制参数,Ê_n 为更新后的控制参数,该时长信息 t 的起始时刻为对目标对象开始跟随的时刻。
举例来说,在云台开始针对某一目标对象进行跟随操作时,在云台获取到用于对目标对象进行跟随操作的控制参数时,为了避免云台突然对目标对象进行跟随操作,在时长信息小于第一时间阈值时,则可以获取与控制参数相对应的更新后的控制参数,该更新后的控制参数即为由0到控制参数之间的过渡控制参数,即在时长信息小于第一时间阈值时,基于更新后的控制参数对云台进行控制,从而实现了缓慢启动操作,即可以控制云台缓慢地调整至控制参数,进而保证了对目标对象进行跟随操作的质量和效果。
另一些实例中,在云台的运动状态为第二特定运动状态(意味着开始结束对目标对象的跟随),例如匀减速运动时,则基于所述时长信息与所述第一时间阈值之间的比值确定与所述控制参数相对应的更新系数,此时的更新系数小于1。而后可以将更新系数与控制参数的乘积值确定为更新后的控制参数,即在时长信息 t < 第一时间阈值 T 时,则可以基于以下公式确定更新后的控制参数:
Ê_n = E_n · (1 − t/T)
其中,E_n 为控制参数,Ê_n 为更新后的控制参数,该时长信息 t 的起始时刻为开始结束对目标对象跟随的时刻。
举例来说,在云台开始针对某一目标对象停止跟随操作时,在云台获取到用于对目标对象停止跟随操作的控制参数时,为了避免云台突然停止对目标对象进行跟随操作,在时长信息小于第一时间阈值时,则可以获取与控制参数相对应的更新后的控制参数,该更新后的控制参数即为由控制参数到0之间的过渡控制参数,即在时长信息小于第一时间阈值时,基于更新后的控制参数对云台进行控制,从而实现了缓慢停止操作,即可以控制云台缓慢地调整至0,进而保证了对目标对象停止跟随操作的质量和效果。
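缓慢启动与缓慢停止可以共用同一种过渡系数的计算方式,下面给出一个示意性的Python代码草图(函数名与参数命名均为本示例的假设):

```python
# 示意性代码:缓启动/缓停止的控制参数过渡。
# E:原始控制参数;t:计时器给出的时长信息;T:第一时间阈值;
# starting=True 表示开始跟随(系数 t/T),False 表示结束跟随(系数 1-t/T)。

def ramp_control(E: float, t: float, T: float, starting: bool) -> float:
    if t >= T:
        return E if starting else 0.0   # 过渡结束:直接使用控制参数或归零
    k = t / T                           # 更新系数,小于 1
    return E * k if starting else E * (1.0 - k)
```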
步骤S2903:在时长信息大于或等于第一时间阈值时,利用控制参数对云台进行控制。
其中,在时长信息与第一时间阈值的比较结果为时长信息大于或等于第一时间阈值时,则直接利用控制参数对云台进行控制,即在时长信息 t ≥ 第一时间阈值 T 时,更新后的控制参数与控制参数相同,即 Ê_n = E_n,而后可以利用控制参数对云台进行控制。其中,该时长信息 t 的起始时刻为对目标对象开始跟随的时刻。
在另一些实例中,在时长信息大于或等于第一时间阈值时,则可以将控制参数配置为0。其中,该时长信息t的起始时刻为确定目标对象丢失的时刻。
本实施例中,通过获取用于对目标对象进行跟随操作所对应的时长信息,在时长信息小于第一时间阈值时,则基于云台的运动状态对控制参数进行更新,获得更新后的控制参数,并基于更新后的控制参数对云台进行控制;在时长信息大于或等于第一时间阈值时,利用控制参数对云台进行控制,从而有效地实现了利用缓慢启动策略对云台进行控制操作,进一步保证了对目标对象进行跟随操作的质量和效率。其中,利用缓慢停止策略对云台进行控制操作的效果类似,此处不再赘述。
图30为本发明实施例提供的根据控制参数对云台进行控制的流程示意图一;在上述任意一个实施例的基础上,继续参考附图30所示,本实施例提供了一种根据控制参数对云台进行控制的实现方式,具体的,本实施例中的根据控制参数对云台进行控制可以包括:
步骤S3001:获取与目标对象相对应的跟随状态。
其中,在对目标对象进行跟随操作时,目标对象可以对应有不同的跟随状态,在一些实例中,与目标对象相对应的跟随状态可以包括以下至少之一:保持跟随状态、丢失状态。可以理解的是,在目标对象对应有不同的跟随状态时,可以利用不同的控制参数对云台进行控制,以保证对云台进行控制的安全可靠性。
另外,本实施例对于获取与目标对象相对应的跟随状态的具体实现方式不做限定,本领域技术人员可以根据具体的应用需求和设计需求进行设置。在一些实例中,通过图像采集装置可以获取与目标对象相对应的跟随状态,具体的,在通过图像采集装置所采集的图像中存在目标对象时,则可以确定与目标对象相对应的跟随状态为保持跟随状态;在通过图像采集装置所采集的图像中不存在目标对象时,则可以确定与目标对象相对应的跟随状态为丢失状态。
在另一些实例中,获取与目标对象相对应的跟随状态可以包括:检测进行跟随操作的目标对象是否发生改变;在目标对象由第一对象改变为第二对象时,同样可以确定第一对象为丢失状态。
步骤S3002:基于跟随状态和控制参数对云台进行控制。
在获取到跟随状态和控制参数之后,则可以基于跟随状态和控制参数对云台进行控制。在一些实例中,基于跟随状态和控制参数对云台进行控制可以包括:在目标对象为丢失状态时,则获取对目标对象进行跟随操作过程中所对应的丢失时长信息;根据丢失时长信息对控制参数进行更新,获得更新后的控制参数;基于更新后的控制参数对云台进行控制。
其中,在目标对象为丢失状态时,则可以通过计时器获取对目标对象进行跟随操作过程中所对应的丢失时长信息,而后可以根据丢失时长信息对控制参数进行更新,获得更新后的控制参数。在一些实例中,根据丢失时长信息对控制参数进行更新,获得更新后的控制参数可以包括:在丢失时长信息大于或等于第二时间阈值时,将控制参数更新为零;在丢失时长信息小于第二时间阈值时,获取丢失时长信息与第二时间阈值之间的比值,并将1与比值之间的差值确定为与控制参数相对应的更新系数,将更新系数与控制参数的乘积值确定为更新后的控制参数。
具体的,在获取到丢失时长信息之后,则可以将丢失时长信息与第二时间阈值进行分析比较,在丢失时长信息大于或等于第二时间阈值时,则说明所跟随的目标对象处于丢失状态的时间较长,进而可以将控制参数更新为零,即在丢失时长信息 t ≥ 第二时间阈值 T 时,更新后的控制参数 Ê_n = 0。
在丢失时长信息小于第二时间阈值时,则说明所跟随的目标对象处于丢失状态的时间较短,进而则可以获取丢失时长信息与第二时间阈值之间的比值,并将1与比值之间的差值确定为与控制参数相对应的更新系数,而后则可以将更新系数与控制参数的乘积值确定为更新后的控制参数,即在丢失时长信息 t < 第二时间阈值 T 时,更新后的控制参数 Ê_n = E_n · (1 − t/T)。
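上述丢失时长的处理可以归纳为如下示意性的Python代码草图(变量命名为本示例的假设):

```python
# 示意性代码:按丢失时长信息衰减控制参数。
# E:控制参数;lost_t:丢失时长信息;T2:第二时间阈值。

def decay_on_lost(E: float, lost_t: float, T2: float) -> float:
    if lost_t >= T2:
        return 0.0              # 丢失时间较长:控制参数更新为零
    k = 1.0 - lost_t / T2       # 更新系数:1 与比值之间的差值
    return E * k                # 更新后的控制参数
```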
本实施例中,通过获取与目标对象相对应的跟随状态,而后基于跟随状态和控制参数对云台进行控制,从而有效地保证了对云台进行控制的准确可靠性。
图31为本发明实施例提供的根据控制参数对云台进行控制的流程示意图二;在上述任意一个实施例的基础上,继续参考附图31所示,本实施例提供了另一种对云台进行控制的实现方式,具体的,本实施例中的根据控制参数对云台进行控制可以包括:
步骤S3101:获取目标对象的对象类型。
步骤S3102:根据对象类型和控制参数对云台进行控制。
其中,在利用云台对目标对象进行跟随操作时,目标对象可以对应有不同的对象类型,上述的对象类型可以包括以下之一:静止对象、高度较高的移动对象、高度较低的移动对象等等,而为了能够保证对不同的目标对象进行跟随操作的质量,在对不同的目标对象进行跟随操作时,则可以根据对象类型和控制参数对云台进行控制。在一些实例中,根据对象类型和控制参数对云台进行控制可以包括:根据对象类型对控制参数进行调整,获得调整后的参数;基于调整后的参数对云台进行控制。
具体的,根据对象类型对控制参数进行调整,获得调整后的参数可以包括:在目标对象为静止对象时,则降低云台在偏航方向所对应的控制带宽和云台在俯仰方向所对应的控制带宽;在目标对象为移动对象、且移动对象的高度大于或等于高度阈值时,则提高云台在偏航方向所对应的控制带宽,并降低云台在俯仰方向所对应的控制带宽;在目标对象为移动对象、且移动对象的高度小于高度阈值时,则提高云台在偏航方向所对应的控制带宽和云台在俯仰方向所对应的控制带宽。
举例来说,在目标对象为建筑物时,为了能够保证对建筑物进行跟随操作的质量和效果,则可以降低云台在偏航方向(yaw轴方向)所对应的控制带宽和云台在俯仰方向(pitch轴方向)所对应的控制带宽,从而可以降低平移跟随性能和俯仰跟随性能。
在目标对象为移动对象、且移动对象的高度大于或等于高度阈值时,例如:目标对象为某一人物时,为了能够保证对人物进行跟随操作的质量和效果,则可以提高云台在偏航方向(yaw轴方向)所对应的控制带宽,并降低云台在俯仰方向(pitch轴方向)所对应的控制带宽,从而可以提高平移跟随性能、并降低俯仰跟随性能。
在目标对象为移动对象、且移动对象的高度小于高度阈值时,例如:目标对象为某一宠物时,为了能够保证对宠物进行跟随操作的质量和效果,则可以提高云台在偏航方向(yaw轴方向)所对应的控制带宽和云台在俯仰方向(pitch轴方向)所对应的控制带宽,从而可以提高平移跟随性能和俯仰跟随性能。
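上述按对象类型调整偏航/俯仰控制带宽的策略,可以用如下示意性的Python代码草图表达(其中的缩放系数与高度阈值数值均为本示例的假设,仅体现提高/降低的方向性):

```python
# 示意性代码:按目标对象类型调整偏航(yaw)/俯仰(pitch)方向的控制带宽。

def adjust_bandwidth(obj_type: str, bw_yaw: float, bw_pitch: float,
                     height: float = 0.0, height_th: float = 1.0):
    if obj_type == "static":                  # 静止对象(如建筑物):双向降低
        return bw_yaw * 0.5, bw_pitch * 0.5
    if height >= height_th:                   # 高度较高的移动对象(如人物)
        return bw_yaw * 1.5, bw_pitch * 0.5   # 提高平移跟随、降低俯仰跟随
    return bw_yaw * 1.5, bw_pitch * 1.5       # 高度较低的移动对象(如宠物):双向提高
```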
本实施例中,通过获取目标对象的对象类型,而后根据对象类型和控制参数对云台进行控制,从而有效地实现了可以针对不同类型的目标对象进行不同的跟随控制操作,进而保证了对目标对象进行跟随操作的质量和效果。
图32为本发明实施例提供的又一种云台的控制方法的流程示意图;在上述任意一个实施例的基础上,继续参考附图32所示,本实施例中的方法还可以包括:
步骤S3201:通过显示界面获取用户针对图像采集装置所输入的执行操作。
步骤S3202:根据执行操作对图像采集装置进行控制,以使得图像采集装置确定采集位置。
其中,预先设置有可以供用户进行交互操作的显示界面,具体的,显示界面可以为云台的控制装置(如云台的遥控器,与云台连接的诸如手机、平板、穿戴设备等、集成于云台的手柄上的显示装置)上的显示界面,或者,显示界面可以为图像采集装置上的显示界面。在获取到显示界面之后,则可以通过显示界面获取用户针对图像采集装置所输入的执行操作(如通过点选、框选、输入特征、输入坐标等的操作确定目标对象),而后可以根据执行操作对图像采集装置进行控制,以使得图像采集装置可以基于执行操作来确定目标对象在采集图像中的采集位置。
举例1,在显示界面为云台的控制装置上的显示界面时,云台的控制装置上可以设置有用于对图像采集装置进行控制的应用程序APP,通过对云台的控制装置进行操作即可启动上述APP,并可以在显示器上显示用于对图像采集装置进行控制的显示界面,而后可以通过显示界面获取用户针对图像采集装置所输入的执行操作,并根据执行操作对图像采集装置进行控制,以使得图像采集装置确定采集位置,从而实现了用户可以通过云台的控制装置对图像采集装置进行控制。
举例2,在显示界面为图像采集装置上的显示界面时,可以通过该显示界面获取用户针对图像采集装置所输入的执行操作,而后则可以根据执行操作对图像采集装置进行控制,以使得图像采集装置确定采集位置,从而实现了用户可以通过图像采集装置自身对图像采集装置进行控制。
本实施例中,通过显示界面获取用户针对图像采集装置所输入的执行操作,而后根据执行操作对图像采集装置进行控制,以使得图像采集装置确定采集位置,从而有效地实现了对图像采集装置进行控制,进一步提高了对目标对象进行跟随操作的质量和效果。
图33为本发明实施例提供的另一种云台的控制方法的流程示意图;在上述任意一个实施例的基础上,继续参考附图33所示,本实施例中的方法还可以包括:
步骤S3301:通过设置于图像采集装置上的测距传感器获取与目标对象相对应的距离信息。
步骤S3302:将距离信息发送至图像采集装置,以使图像采集装置结合距离信息确定目标对象在采集图像中的采集位置。
其中,在图像采集装置获取目标对象在采集图像中的采集位置时,为了提高对目标对象在采集图像中的采集位置进行确定的精确度,在图像采集装置上可以设置有测距传感器,测距传感器可以通过云台与图像采集装置通信连接,在具体应用时,可以通过设置于图像采集装置上的测距传感器获取与目标对象相对应的距离信息,而后可以将距离信息发送至图像采集装置,在图像采集装置获得距离信息之后,则可以结合距离信息确定目标对象在采集图像中的采集位置,这样有效地提高了对目标对象在采集图像中的采集位置进行确定的准确可靠性。也即,使得图像采集装置通过距离信息获取的采集位置,对基于图像识别获取的采集位置进行融合或校准。
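对于上述融合或校准,下面给出一个示意性的Python代码草图(加权平均的融合方式以及权重取值均为本示例的假设,仅用于说明思路):

```python
# 示意性代码:将基于图像识别得到的采集位置与由测距信息推算的采集位置加权融合。
# pos_image/pos_range 均为归一化图像坐标 (x, y);w 为图像识别结果的权重。

def fuse_positions(pos_image: tuple, pos_range: tuple, w: float = 0.7) -> tuple:
    return (w * pos_image[0] + (1 - w) * pos_range[0],
            w * pos_image[1] + (1 - w) * pos_range[1])
```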
图34为本发明实施例提供的又一种云台的控制方法的流程示意图;在上述任意一个实施例的基础上,继续参考附图34所示,本实施例中的方法还可以包括:
步骤S3401:确定图像采集装置所对应的工作模式,工作模式包括以下之一:先跟随后对焦模式、先对焦后跟随模式;
步骤S3402:利用工作模式对图像采集装置进行控制。
其中,在基于图像采集装置进行跟随操作时,图像采集装置可以对应有不同的工作模式,该工作模式可以包括:先跟随后对焦模式、先对焦后跟随模式。上述的先跟随后对焦模式是指在图像采集装置需要执行跟随操作和对焦操作时,图像采集装置可以优先进行跟随操作,而后再进行对焦操作;先对焦后跟随模式是指在图像采集装置需要执行跟随操作和对焦操作时,图像采集装置可以优先进行对焦操作,而后再进行跟随操作。
举例来说,在控制云台和图像采集装置进行跟随操作时,在通过图像采集装置获得采集图像后,需要确定是先基于采集图像进行跟随操作,还是先对采集图像中的目标对象进行对焦操作:在图像采集装置的工作模式为先跟随后对焦模式时,则可以优先基于采集图像进行跟随操作,而后再对经过构图跟随操作的目标对象进行对焦操作;在图像采集装置的工作模式为先对焦后跟随模式时,则可以优先对采集图像中的目标对象进行对焦操作,而后再对进行对焦操作的目标对象进行跟随操作。
具体应用时,预先设置有用于对图像采集装置进行控制的操作界面/操作控件,可以通过操作界面/操作控件对图像采集装置的工作模式进行配置/选择,在配置完图像采集装置的工作模式之后,则可以通过工作模式标识来确定该图像采集装置所对应的工作模式,这样可以简单、快速地实现对图像采集装置的工作模式进行选择、调整或者配置操作。
在确定图像采集装置所对应的工作模式之后,则可以利用工作模式对图像采集装置进行控制,从而有效地实现了图像采集装置可以满足不同的应用场景需求,进一步提高了对图像采集装置进行控制的灵活可靠性。
图35为本发明实施例提供的一种云台***的控制方法的流程示意图;参考附图35所示,本实施例提供了一种云台***的控制方法,其中,云台***包括:云台和与云台通信连接的图像采集装置,其中,图像采集装置可以为具有手动镜头或自动镜头的相机。在一些实例中,图像采集装置可以集成在云台上,此时,图像采集装置和云台可以整体进行销售或者维护操作。在另一些实例中,图像采集装置可以单独设置于云台上,此时,图像采集装置和云台可以单独进行销售或者维护操作。
另外,图像采集装置是指具有图像采集能力和图像处理能力的装置,例如:照相机、摄像机、具有图像采集能力的其他装置等等。在一些实例中,云台设有通信串行总线USB接口,USB接口用于与图像采集装置有线通信连接,即云台通过USB接口与图像采集装置通信连接,具体应用时,在云台通过USB接口与图像采集装置进行传输数据时,传输数据所对应的延时时间比较短。
可以理解的是,云台与图像采集装置之间的通信连接方式并不限于上述所限定的实现方式,本领域技术人员还可以根据具体的应用需求和应用场景进行设置,例如无线通信,或者能够保证在云台与图像采集装置进行数据传输时,所对应的延时时间比较短,在此不再赘述。
此外,该云台***的控制方法的执行主体可以是云台***的控制装置,可以理解的是,该云台***的控制装置可以实现为软件、或者软件和硬件的组合;另外,云台***的控制装置可以设置于云台或者图像采集装置上,当云台***的控制装置设置于图像采集装置上时,云台与图像采集装置可以是集成的产品。在控制装置执行该云台***的控制方法时,可以解决因云台侧的图像处理装置获取采集位置所产生的延时时间比较长而导致跟随效果差的问题,从而可以保证对目标对象进行跟随操作的质量和效果。具体的,该方法可以包括:
步骤S3501:控制图像采集装置采集图像,并获取目标对象在采集图像中的采集位置,采集位置是通过图像采集装置所确定的。
步骤S3502:控制图像采集装置将采集位置传输至云台。
步骤S3503:控制云台按照控制参数进行运动,以实现对目标对象进行跟随操作,其中,控制参数是基于采集位置所确定的。
下面针对上述各个步骤的实现过程进行详细阐述:
步骤S3501:控制图像采集装置采集图像,并获取目标对象在图像中的采集位置,采集位置是通过图像采集装置所确定的。
当针对一目标对象存在跟随需求时,则可以根据跟随需求控制图像采集装置进行图像采集操作,在图像采集装置获取到图像之后,图像采集装置可以对图像进行分析处理,以确定目标对象在图像中的采集位置。具体的,目标对象在图像中的采集位置可以包括:目标对象在图像中所对应的关键点位置,或者,目标对象在图像中所对应的覆盖区域等等。
步骤S3502:控制图像采集装置将采集位置传输至云台。
在获取到目标对象在采集图像中的采集位置之后,则可以将目标对象在采集图像中的采集位置通过USB接口主动或者被动地传输至云台,从而使得云台可以获取到目标对象在图像中的采集位置。
步骤S3503:控制云台按照控制参数进行运动,以实现对目标对象进行跟随操作,其中,控制参数是基于采集位置所确定的。
在云台获取到采集位置之后,则可以对采集位置进行分析处理,以确定用于对云台进行控制的控制参数,而后则可以控制云台按照控制参数进行运动,以实现对目标对象进行跟随操作。
需要注意的是,本实施例中的方法还可以包括上述图2至图34中所示的实施例的方法,本实施例未详细描述的部分,可参考对图2至图34中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图2至图34所示实施例中的描述,在此不再赘述。
本实施例中提供的云台***的控制方法,通过控制图像采集装置采集图像,并获取目标对象在图像中的采集位置,而后将采集位置传输至云台,并控制云台按照控制参数进行运动,其中,控制参数是基于采集位置所确定的,从而可以实现对目标对象进行跟随操作,另外,由于采集位置是通过图像采集装置所确定的,而云台可以通过图像采集装置直接获取采集位置,这样有效地降低了在云台通过图像采集装置获取采集位置时所对应的延时时间,从而解决了因延时比较长而导致跟随效果差的问题,进一步保证了对目标对象进行跟随操作的质量和效果,有效地提高了该方法使用的稳定可靠性。
图36为本发明实施例提供的另一种云台的控制方法的流程示意图;参考附图36所示,本实施例提供了另一种云台的控制方法,该方法适用于云台,该云台通信连接有图像采集装置,另外,云台的控制方法的执行主体可以是云台的控制装置,可以理解的是,该控制装置可以实现为软件、或者软件和硬件的组合,具体的,该方法可以包括:
步骤S3601:获取采集图像,采集图像中包括目标对象。
步骤S3602:在采集图像中确定目标对象的位置,以基于目标对象的位置对目标对象进行跟随操作。
步骤S3603:将目标对象的位置发送至图像采集装置,以使得图像采集装置基于目标对象的位置,确定与目标对象相对应的对焦位置,并基于对焦位置对目标对象进行对焦操作。
下面针对上述各个步骤的实现过程进行详细阐述:
步骤S3601:获取采集图像,采集图像中包括目标对象。
其中,云台通信连接有图像采集装置,上述的图像采集装置可以针对一目标对象进行图像采集操作,从而可以获取到采集图像,在图像采集装置获取到采集图像之后,则可以将采集图像主动或者被动地传输至云台,从而使得云台可以稳定地获取到采集图像。
步骤S3602:在采集图像中确定目标对象的位置,以基于目标对象的位置对目标对象进行跟随操作。
其中,在获取到采集图像之后,则可以对采集图像进行分析处理,以确定目标对象的位置,所获取到的目标对象的位置用于实现对目标对象进行跟随操作。具体的,可以通过显示界面对采集图像进行显示,而后用户可以通过显示界面针对采集图像输入执行操作,根据执行操作即可确定目标对象的位置,即用户可以对采集图像中所包括的目标对象进行框选操作,从而可以确定目标对象的位置。或者,在获取到采集图像之后,可以利用预设图像处理算法对采集图像进行自动分析处理,以确定目标对象的位置。
当然的,本领域技术人员也可以采用其他的方式在采集图像中确定目标对象的位置,只要能够保证对目标对象的位置进行确定的准确可靠性即可,在此不再赘述。
步骤S3603:将目标对象的位置发送至图像采集装置,以使得图像采集装置基于目标对象的位置,确定与目标对象相对应的对焦位置,并基于对焦位置对目标对象进行对焦操作。
其中,在获取到目标对象的位置之后,为了保证通过图像采集装置对目标对象进行跟随操作的质量和效果,则可以将目标对象的位置发送至图像采集装置,在图像采集装置获取到目标对象的位置之后,则可以基于目标对象的位置确定与目标对象相对应的对焦位置,从而实现了在对目标对象进行跟随操作时,用于对目标对象进行跟随操作的跟随位置与目标对象相对应的对焦位置相同,这样有效地避免了因对焦位置与跟随位置不一致而导致对目标对象出现虚焦的情况,进一步提高了对目标对象进行跟随操作的质量和效果。
需要注意的是,本实施例中的方法还可以包括上述图2至图34中所示的实施例的方法,本实施例未详细描述的部分,可参考对图2至图34中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图2至图34所示实施例中的描述,在此不再赘述。
本实施例中提供的云台的控制方法,通过获取采集图像,并在采集图像中确定目标对象的位置,以基于目标对象的位置对目标对象进行跟随操作,而后将目标对象的位置发送至图像采集装置,以使得图像采集装置基于目标对象的位置,确定与目标对象相对应的对焦位置,并基于对焦位置对目标对象进行对焦操作,从而有效地保证了在对目标对象进行跟随操作时,用于对目标对象进行跟随操作的跟随位置与目标对象相对应的对焦位置相同,这样有效地避免了因对焦位置与跟随位置不一致而导致对目标对象出现虚焦的情况,从而有效地提高了对目标对象进行跟随操作的质量和效果,进一步提高了该方法使用的稳定可靠性。
图37为本发明实施例提供的另一种云台***的控制方法的流程示意图;参考附图37所示,本实施例提供了一种云台***的控制方法,其中,云台***包括:云台和与云台通信连接的图像采集装置,在一些实例中,图像采集装置可以集成在云台上,此时,云台和设置于云台上的图像采集装置可以整体进行销售或者维护操作。在另一些实例中,图像采集装置可以单独设置于云台上,此时,图像采集装置和云台之间可以单独进行销售或者维护操作。
另外,图像采集装置是指具有图像采集能力和图像处理能力的装置,例如:照相机、摄像机、具有图像采集能力的其他装置等等。在一些实例中,云台设有通信串行总线USB接口,USB接口用于与图像采集装置有线通信连接,即云台通过USB接口与图像采集装置通信连接,具体应用时,在云台通过USB接口与图像采集装置进行被跟随物体的位置数据的传输时,由于云台侧无需额外设置图像处理装置,则云台与图像采集装置之间传输的被跟随物体的位置数据所对应的延时时间比较短。
可以理解的是,云台与图像采集装置之间的通信连接方式并不限于上述所限定的实现方式,本领域技术人员还可以根据具体的应用需求和应用场景进行设置,例如无线通信,或者能够保证在云台与图像采集装置进行数据传输时,所对应的延时时间比较短,在此不再赘述。
此外,该云台***的控制方法的执行主体可以是云台***的控制装置,可以理解的是,该云台***的控制装置可以实现为软件、或者软件和硬件的组合;另外,云台***的控制装置可以设置于云台或者图像采集装置上。在控制装置执行该云台***的控制方法时,可以解决因接口传输数据所产生的延时长而导致跟随效果差的问题,从而可以保证对目标对象进行跟随操作的质量和效果。具体的,该方法可以包括:
步骤S3701:控制图像采集装置采集图像,图像包括目标对象。
步骤S3702:在图像中确定目标对象的位置。
步骤S3703:基于目标对象的位置控制云台对目标对象进行跟随操作,并根据目标对象的位置控制图像采集装置对目标对象进行对焦操作。
下面针对上述各个步骤的实现过程进行详细阐述:
步骤S3701:控制图像采集装置采集图像,图像包括目标对象。
当针对一目标对象存在跟随需求时,则可以根据跟随需求控制图像采集装置进行图像采集操作,在图像采集装置获取到图像之后,可以将图像主动或者被动地传输至云台,从而使得云台可以获取到图像。
步骤S3702:在图像中确定目标对象的位置。
其中,在获取到图像之后,则可以对图像进行分析处理,以确定目标对象在图像中的采集位置。具体的,目标对象在图像中的采集位置可以包括:目标对象在图像所对应的关键点位置,或者,目标对象在图像中所对应的覆盖区域等等。当然,在采集图像中目标对象的位置的确定也可以由图像采集装置来实现。
步骤S3703:基于目标对象的位置控制云台对目标对象进行跟随操作,并根据目标对象的位置控制图像采集装置对目标对象进行对焦操作。
在获取到目标对象的位置之后,则可以基于目标对象的位置控制云台对目标对象进行跟随操作,此外,在获取到目标对象的位置之后,还可以基于目标对象的位置确定与目标对象相对应的对焦位置,具体的,目标对象的位置可以与目标对象相对应的对焦位置相同,从而实现了在对目标对象进行跟随操作时,用于对目标对象进行跟随操作的跟随位置与目标对象相对应的对焦位置相同,这样有效地避免了因对焦位置与跟随位置不一致而导致对目标对象出现虚焦的情况,进一步提高了对目标对象进行跟随操作的质量和效果。
需要注意的是,本实施例中的方法还可以包括上述图2至图34中所示的实施例的方法,本实施例未详细描述的部分,可参考对图2至图34中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图2至图34所示实施例中的描述,在此不再赘述。
本实施例提供的云台***的控制方法,通过控制图像采集装置采集图像,并在图像中确定目标对象的位置,而后基于目标对象的位置控制云台对目标对象进行跟随操作,并根据目标对象的位置控制图像采集装置对目标对象进行对焦操作,从而有效地保证了在对目标对象进行跟随操作时,用于对目标对象进行跟随操作的跟随位置与目标对象相对应的对焦位置相同,这样有效地避免了因对焦位置与跟随位置不一致而导致对目标对象出现虚焦的情况,从而有效地提高了对目标对象进行跟随操作的质量和效果,进一步提高了该方法使用的稳定可靠性。
图38为本发明实施例提供的又一种云台***的控制方法的流程示意图;参考附图38所示,本实施例提供了又一种云台***的控制方法,其中,云台***包括:云台和与云台通信连接的图像采集装置,该云台***的控制方法的执行主体可以是云台***的控制装置,可以理解的是,该云台***的控制装置可以实现为软件、或者软件和硬件的组合;另外,云台***的控制装置可以设置于云台或者图像采集装置上,当云台***的控制装置设置于图像采集装置上时,云台与图像采集装置可以是集成的产品。具体的,本实施例中的方法还可以包括:
步骤S3801:获取第一对象在图像采集装置采集的采集图像中的采集位置,第一对象的采集位置用于云台对第一对象进行跟随操作,以及用于图像采集装置对第一对象进行对焦操作。
步骤S3802:在第一对象改变为第二对象时,获取第二对象在图像采集装置采集的采集图像中的采集位置,以使得云台由对第一对象的跟随操作变化为基于第二对象的采集位置对第二对象进行跟随操作,以及使得图像采集装置由对第一对象的对焦操作变化为基于第二对象的位置对第二对象进行对焦操作。
下面针对上述各个步骤的实现过程进行详细阐述:
步骤S3801:获取第一对象在图像采集装置采集的采集图像中的采集位置,第一对象的采集位置用于云台对第一对象进行跟随操作,以及用于图像采集装置对第一对象进行对焦操作。
其中,针对第一对象存在跟随需求时,则可以通过图像采集装置针对第一对象进行图像采集操作,从而可以获得包括第一对象的采集图像。在获取到采集图像之后,则可以对采集图像进行分析处理,确定第一对象在采集图像中的采集位置,所确定的第一对象在采集图像中的采集位置用于供云台对第一对象进行跟随操作,此外,所确定的第一对象在采集图像中的采集位置用于供图像采集装置对第一对象进行对焦操作。另外,对采集图像进行分析处理,确定第一对象在采集图像中的采集位置的执行主体可以为“图像采集装置”或者“云台”。
步骤S3802:在第一对象改变为第二对象时,获取第二对象在图像采集装置采集的采集图像中的采集位置,以使得云台由对第一对象的跟随操作变化为基于第二对象的采集位置对第二对象进行跟随操作,以及使得图像采集装置由对第一对象的对焦操作变化为基于第二对象的位置对第二对象进行对焦操作。
在对第一对象进行跟随操作时,可能会出现跟随对象发生变更的情况,即第一对象可能改变为第二对象。在第一对象改变为第二对象时,则可以获取第二对象在采集图像中的采集位置,而后可以基于第二对象在采集图像中的采集位置对云台进行控制,从而有效地实现了可以控制云台由对第一对象的跟随操作变化为基于第二对象的采集位置对第二对象进行跟随操作。
此外,所获得的第二对象在采集图像中的采集位置还可以用于供图像采集装置进行对焦操作,具体的,可以使得图像采集装置由对第一对象的对焦操作变化为基于第二对象的位置对第二对象进行对焦操作,从而实现了在对第二对象进行跟随操作时,用于对第二对象进行跟随操作的跟随位置与第二对象相对应的对焦位置相同,这样有效地避免了因对焦位置与跟随位置不一致而导致对第二对象出现虚焦的情况,进一步提高了对第二对象进行跟随操作的质量和效果。
此外,对第二对象在采集图像中的采集位置进行获取的实现方式与上述对第一对象在采集图像中的采集位置进行获取的实现方式相类似,具体可参考上述陈述内容,在此不再赘述。
需要注意的是,本实施例中的方法还可以包括上述图2至图34中所示的实施例的方法,本实施例未详细描述的部分,可参考对图2至图34中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图2至图34所示实施例中的描述,在此不再赘述。
可以理解,上述方法也可以分别在图像采集装置侧和云台侧进行相应对象的更新,例如,在图像采集装置侧,用户通过触控操作将对焦操作的对象由第一对象变更为第二对象时,可以从云台侧的显示屏发现跟随操作的对象也由第一对象变更为第二对象;在云台侧,用户将跟随操作的对象由第一对象变更为第二对象时,可以从图像采集装置侧的显示屏发现对焦操作的对象也由第一对象变更为第二对象。其中,变更对象的具体操作手段不限于上述说明的触控操作。
本实施例提供的云台***的控制方法,通过获取第一对象在图像采集装置采集的采集图像中的采集位置,在第一对象改变为第二对象时,则可以获取第二对象在图像采集装置采集的采集图像中的采集位置,以使得云台由对第一对象的跟随操作变化为基于第二对象的采集位置对第二对象进行跟随操作,以及使得图像采集装置由对第一对象的对焦操作变化为基于第二对象的位置对第二对象进行对焦操作,从而有效地保证了跟随对象由第一对象改变为第二对象时,则可以对第二对象进行跟随操作,并且,在对第二对象进行跟随操作时,通过保证用于对第二对象进行跟随操作的跟随位置与第二对象相对应的对焦位置相同,这样有效地避免了因对焦位置与跟随位置不一致而导致对第二对象出现虚焦的情况,从而有效地提高了对第二对象进行跟随操作的质量和效果,进一步提高了该方法使用的稳定可靠性。
具体应用时,参考附图39所示,本发明提供了一种基于相机(可以为设置于云台上的第三方相机、或者集成于云台上的相机)而实现的智能跟随方法,该方法的执行主体可以包括:相机和云台。具体的,本实施例中的方法包括以下步骤:
步骤1:相机平面偏差预测。
通过相机直接获取当前图像帧的相机曝光时间戳,当前图像帧的相机曝光时间戳可以为 t_n;在相机将当前图像帧的检测信息(可以包括目标对象位于图像帧中的坐标信息)发送至云台时,云台接收当前图像帧的检测信息的时间戳为 t_{n+1};相对应的,在相机将上一图像帧的检测信息发送至云台时,云台接收上一图像帧的检测信息的时间戳为 t_{n−1}。
考虑到由相机和云台所构成的通信链路上所存在的链路延时以及与链路相对应的其他不稳定因素,通过相机获得目标对象在当前图像帧中的检测信息的时间与云台接收上述检测信息的时间之间存在偏差。因此,为了保证对目标对象进行跟随操作的质量和效果,则需要考虑链路延时对智能跟随操作的影响,具体可以执行如下步骤:
步骤1.1:获取由相机和云台所构成的通信链路所对应的链路延时。
具体的,链路延时即为当前图像帧的曝光时间与云台接收到与当前图像帧相对应的检测信息的接收时间之间的时间间隔,即 Δt = t_{n+1} − t_n。
步骤1.2:基于当前图像帧,获得目标对象在当前图像帧的采集位置。
具体的,相机可以对当前图像帧进行分析处理,以确定目标对象在当前图像帧的采集位置为 (x_n, y_n)。
步骤1.3:基于目标对象在当前图像帧的采集位置,确定与采集位置相对应的当前位置预测值 (x̂_n, ŷ_n)。
具体的,可以基于以下公式来实现:
x̂_n = x_n + (x_n − x̂_{n−1}) / (t_n − t_{n−1}) · Δt;
ŷ_n = y_n + (y_n − ŷ_{n−1}) / (t_n − t_{n−1}) · Δt。
其中,(x̂_{n−1}, ŷ_{n−1}) 为目标对象在上一图像帧中的前一位置预测值,(x_n, y_n) 为目标对象在当前图像帧的采集位置,(x̂_n, ŷ_n) 为与采集位置相对应的当前位置预测值,Δt 为由相机和云台所构成的通信链路所对应的链路延时,t_n 为与当前图像帧所对应的相机曝光时间戳,t_{n−1} 为云台接收到上一图像帧的检测信息的时间戳。
需要注意的是,在 n=1 时,目标对象在上一图像帧中的前一位置预测值 (x̂_{n−1}, ŷ_{n−1}) 与目标对象在当前图像帧的采集位置 (x_n, y_n) 相同。
步骤1.4:基于与采集位置相对应的当前位置预测值 (x̂_n, ŷ_n),确定相机平面偏差。
具体的,相机平面偏差为归一化后的坐标值偏差,记作 e_x 和 e_y。为了能够获取到相机平面偏差,则可以获取构图目标,具体实现时,可以将构图目标记作 (tgt_x, tgt_y),而后基于构图目标和当前位置预测值确定相机平面偏差,具体的,可以基于以下公式来获取相机平面偏差:
e_x = x̂_n − tgt_x;
e_y = ŷ_n − tgt_y。
步骤2:对相机平面偏差进行坐标转换操作,确定用于对目标对象进行跟随操作的偏差角度。
步骤2.1:获取相机的实际画面视场角fov信息和云台的当前姿态信息。
其中,为了能够获取相机的实际画面视场角fov信息,则可以先获取到相机的焦距信息,基于焦距信息确定相机的实际画面视场角fov信息,需要注意的是,上述的焦距信息可以直接通过相机获取,或者也可以是用户基于具体的应用场景和应用需求进行配置所获得的。
步骤2.2:根据实际画面视场角fov信息和当前姿态信息,将相机平面偏差转换至大地坐标系NED(北、东、地坐标系),从而可以获得偏差角度。
具体的,相机坐标系可以记作b系,NED坐标系可以记作n系。
在相机坐标系下的偏差角度可以通过以下公式来获得:
E_x = 0;
E_y = e_y · FOV_y;
E_z = e_x · FOV_x。
其中,e_x 和 e_y 为在相机平面进行归一化之后的坐标值偏差,FOV_x、FOV_y 分别为相机在横向(x轴方向)和纵向(y轴方向)所对应的fov角度,E_x、E_y、E_z 为相机坐标系下各个轴所对应的偏差角度,其矩阵表示如下:
E_b = [E_x, E_y, E_z]^T。
通过IMU可以测得云台姿态和所对应的旋转矩阵 R_b^n,并可以根据下式得到NED坐标系下的角度偏差:
E_n = R_b^n · E_b。
其中,E_n 为在大地坐标系NED中所对应的偏差角度,R_b^n 为与云台姿态所对应的旋转矩阵,E_b 为在相机坐标系中所对应的偏差角度。
在一些实例中,云台可以对应有不同的跟随模式,该跟随模式可以包括单轴跟随模式、双轴跟随模式和三轴跟随模式,不同的跟随模式可以对应有不同的偏差角度。在云台为单轴跟随模式时,所获得的偏差角度可以与云台的单轴相对应,例如:偏差角度与yaw轴相对应,与其他两个轴所对应的偏差角度调整为零。在云台为双轴跟随模式时,所获得的偏差角度可以与云台的两个轴相对应,例如:偏差角度与yaw轴和pitch轴相对应,与其他轴所对应的偏差角度调整为零。在云台为三轴跟随模式时,所获得的偏差角度可以与云台的三个轴相对应,例如:偏差角度与yaw轴、pitch轴和roll轴相对应。
步骤3:基于偏差角度对云台进行控制,以实现对目标对象进行跟随操作。
其中,云台上可以设置有云台控制器,该云台控制器可以包括三个比例积分微分(Proportion Integral Differential,简称PID)控制器,具体结构参考附图40,其具体可以包括跟踪环PID控制器、位置环PID控制器和速度环PID控制器。
在获取到偏差角度 E_n 之后,则可以将偏差角度 E_n 输入至PID控制器中,从而可以获得用于控制云台电机进行转动的控制参数。
需要注意的是,本领域技术人员可以根据不同的需求调节三个PID控制器的相关参数。在PID控制器的带宽越高时,跟随性能越好,但是其平滑性会降低,反之带宽越低跟随性能越差,平滑度增高。
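为便于理解上述三级PID的级联结构,下面给出一个示意性的Python代码草图(各环增益数值、反馈量的获取方式均为本示例的假设,实际取值需按带宽与平滑性需求整定):

```python
# 示意性代码:跟踪环/位置环/速度环三级PID的最小实现。
# 输入为NED系下的偏差角度 E_n,输出为电机控制量。

class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err: float, dt: float) -> float:
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

track_pid = PID(2.0, 0.0, 0.0)   # 跟踪环
pos_pid   = PID(4.0, 0.1, 0.0)   # 位置环
vel_pid   = PID(8.0, 0.2, 0.01)  # 速度环

def gimbal_step(E_n: float, pos_fb: float, vel_fb: float, dt: float) -> float:
    """E_n:偏差角度;pos_fb/vel_fb:云台当前角度/角速度反馈(本示例假设可获得)。"""
    pos_ref = track_pid.step(E_n, dt)             # 跟踪环:偏差角度 -> 位置指令
    vel_ref = pos_pid.step(pos_ref - pos_fb, dt)  # 位置环:位置误差 -> 速度指令
    return vel_pid.step(vel_ref - vel_fb, dt)     # 速度环:速度误差 -> 电机控制量
```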
步骤4:云台智能跟随策略。
其中,云台智能跟随策略可以包括以下三大方面:针对跟随目标或丢失目标的缓启停策略、针对不同物体调整云台控制器、根据历史对焦位置来确定对焦偏移量,下面对上述三大方面的云台智能跟随策略进行说明:
步骤4.1:针对跟随目标和丢失目标的缓启停策略。
其中,缓启停策略包括匀加速策略或匀减速策略,设置加/减速时间阈值为 T,已知 E_n 为NED坐标系下各个轴的偏差角度,实际输出给云台控制器的实际偏差角度记作 Ê_n。具体的,输出给云台控制器的实际偏差角度 Ê_n 与偏差角度 E_n 之间存在以下关系:
(a)开始跟随目标的匀加速运动中:
Ê_n = E_n · (t/T),t < T;Ê_n = E_n,t ≥ T。
其中,t为开始跟随目标对象的时长信息,T为预设时间阈值,用户可以根据具体的应用场景和应用需求对预设时间阈值的具体时间长短进行设置,一般情况下,T可以为0.5s或者1s。
具体的,在开始对目标对象进行跟随操作时,为了避免突然对目标对象进行跟随操作,在对目标对象进行跟随操作的时长信息小于预设时间阈值时,则可以获取与偏差角度 E_n 相对应的实际偏差角度 Ê_n,该实际偏差角度 Ê_n 即为 0 与偏差角度 E_n 之间的过渡参数,从而可以实现缓慢启动对目标对象进行跟随操作。在对目标对象进行跟随操作的时长信息大于或等于预设时间阈值时,则可以将实际偏差角度 Ê_n 确定为偏差角度 E_n,即可以稳定地对目标对象进行跟随操作。
(b)丢失目标减速运动中:
Ê_n = E_n · (1 − t/T),t < T;Ê_n = 0,t ≥ T。
其中,t为开始丢失目标对象的时长信息,T为预设时间阈值,用户可以根据具体的应用场景和应用需求对预设时间阈值的具体时间长短进行设置,一般情况下,T可以为1s、1.5s或者2s等等。
具体的,在对目标对象进行跟随操作的过程中,为了避免突然结束对目标对象进行跟随操作,在目标对象丢失时,可以获取目标对象丢失的时长信息,在该时长信息小于预设时间阈值 T 时,则可以获取与偏差角度 E_n 相对应的实际偏差角度 Ê_n,该实际偏差角度 Ê_n 即为偏差角度 E_n 与 0 之间的过渡参数,从而可以实现缓慢结束对目标对象进行跟随操作。在该时长信息大于或等于预设时间阈值时,则可以将实际偏差角度 Ê_n 确定为 0,即可以稳定地结束对目标对象进行跟随操作。
步骤4.2:根据所跟随的不同物体类型调整云台控制器。
其中,云台可以根据所跟随的不同物体类型调整云台控制器,具体可以包括以下几类:
(a)在待跟随的目标对象为人物时,为了提高跟随质量和效果,则可以提高平移跟随性能(与yaw方向相对应),并降低俯仰跟随性能(与pitch方向相对应);
(b)在待跟随的目标对象为宠物时,为了提高跟随质量和效果,则可以提高平移跟随性能(与yaw方向相对应),并提高俯仰跟随性能(与pitch方向相对应);
(c)在待跟随的目标对象为建筑物时,为了提高跟随质量和效果,则可以降低平移跟随性能(与yaw方向相对应),并降低俯仰跟随性能(与pitch方向相对应)。
(d)在待跟随的目标对象为其他物体时,则可以保持控制参数不变。
其中,跟随性能可以根据调整相应的控制带宽来实现。
步骤4.3:根据历史对焦位置来确定对焦偏移量。
其中,在利用图像采集装置针对某一目标对象进行图像采集操作的过程中,历史图像帧所对应的历史对焦位置与当前图像帧所对应的当前对焦位置可以不同。此时,为了避免因对焦位置偏移而使得云台发生抖动或者抽动的情况,在检测到当前对焦位置和历史对焦位置不同时,则可以获取当前对焦位置和历史对焦位置之间的对焦偏移量,并基于对焦偏移量来确定与目标对象相对应的目标对焦位置。
具体的,在对焦偏移量小于或等于预设阈值时,即说明当前对焦位置与历史对焦位置之间的距离比较近,进而则可以将当前对焦位置确定为与目标对象相对应的目标对焦位置。
在对焦偏移量大于预设阈值时,即说明当前对焦位置与历史对焦位置之间的距离比较远,进而则可以基于对焦偏移量对当前对焦位置进行调整,从而可以获得与目标对象相对应的目标对焦位置。
此外,在对焦偏移量大于预设阈值时,还可以检测目标对象是否发生改变,在目标对象发生改变之后,则可以基于更改后的目标对象对构图目标位置进行更新,获得更新后目标位置,以基于更新后目标位置对云台进行控制,实现对更改后的目标对象进行跟随操作。
本应用实施例提供的基于相机而实现的智能跟随方法,有效地解决了以下问题:(1)解决实时图像因为HDMI传输至图像处理装置确定采集位置产生的延时时间比较长而导致跟随效果差的问题;(2)解决云台实现目标跟随,增加了额外的AI机器学习算法的开发成本以及硬件设计成本问题;(3)解决相机跟随目标类型变化导致坐标点跳跃的问题;(4)解决对焦点和跟随点不统一问题,不会出现被跟随目标虚焦情况;进一步保证了对目标对象进行跟随操作的质量和效果,有效地提高了该方法使用的稳定可靠性。
图41为本发明实施例提供的一种云台的控制装置的结构示意图;参考附图41所示,本实施例提供了一种云台的控制装置,其中,云台通信连接有图像采集装置,该云台的控制装置可以执行与图2相对应的云台的控制方法。具体的,本实施例中的装置可以包括:
存储器412,用于存储计算机程序;
处理器411,用于运行存储器412中存储的计算机程序以实现:
获取目标对象在采集图像中的采集位置,采集位置是通过图像采集装置所确定的,图像采集装置为具有手动镜头或自动镜头的相机,且图像采集装置与云台通信连接;
基于采集位置,确定用于对目标对象进行跟随操作的控制参数;
根据控制参数对云台进行控制,以实现对目标对象进行跟随操作。
其中,云台的控制装置的结构中还可以包括通信接口413,用于电子设备与其他设备或通信网络通信。
在一些实例中,在处理器411获取目标对象在采集图像中的采集位置时,处理器411用于:通过图像采集装置获取与目标对象相对应的目标对焦位置;将目标对焦位置确定为目标对象在采集图像中的采集位置。
在一些实例中,在处理器411通过图像采集装置获取与目标对象相对应的目标对焦位置时,处理器411用于:通过图像采集装置获取与目标对象相对应的历史对焦位置和当前对焦位置;基于历史对焦位置和当前对焦位置,确定与目标对象相对应的目标对焦位置。
在一些实例中,在处理器411基于历史对焦位置和当前对焦位置,确定与目标对象相对应的目标对焦位置时,处理器411用于:确定与历史对焦位置相对应的历史对象部位和与当前对焦位置相对应的当前对象部位;根据历史对象部位与当前对象部位,确定与目标对象相对应的目标对焦位置。
在一些实例中,在处理器411根据历史对象部位与当前对象部位,确定与目标对象相对应的目标对焦位置时,处理器411用于:在历史对象部位与当前对象部位为同一目标对象的不同部位时,则获取历史对象部位与当前对象部位之间的相对位置信息;基于相对位置信息对当前对焦位置进行调整,获得与目标对象相对应的目标对焦位置。
在一些实例中,在处理器411基于相对位置信息对当前对焦位置进行调整,获得与目标对象相对应的目标对焦位置时,处理器411用于:在相对位置信息大于或等于预设阈值时,基于相对位置信息对当前对焦位置进行调整,获得与目标对象相对应的目标对焦位置;在相对位置信息小于预设阈值时,将当前对焦位置确定为与目标对象相对应的目标对焦位置。
在一些实例中,在处理器411根据历史对象部位与当前对象部位,确定与目标对象相对应的目标对焦位置时,处理器411用于:在历史对象部位与当前对象部位为同一目标对象的不同部位时,基于当前对焦位置对构图目标位置进行更新,获得第一更新后构图目标位置;基于第一更新后构图目标位置对目标对象进行跟随操作。
在一些实例中,处理器411用于:检测进行跟随操作的目标对象是否发生改变;在目标对象由第一对象改变为第二对象时,获取第二对象在采集图像中的采集位置;基于第二对象在采集图像中的采集位置对构图目标位置进行更新,获得与第二对象相对应的第二更新后构图目标位置,以基于第二更新后构图目标位置对第二对象进行跟随操作。
在一些实例中,在处理器411基于采集位置,确定用于对目标对象进行跟随操作的控制参数时,处理器411用于:计算与采集位置相对应的当前位置预测值;基于当前位置预测值,确定用于对目标对象进行跟随操作的控制参数。
在一些实例中,在处理器411计算与采集位置相对应的当前位置预测值时,处理器411用于:确定与采集位置相对应的延时时间,延时时间用于指示云台经由图像采集装置获得采集位置所需要的时长;基于延时时间和采集位置,确定与采集位置相对应的当前位置预测值。
在一些实例中,在处理器411确定与采集位置相对应的延时时间时,处理器411用于:获取与采集图像相对应的曝光时间;在云台获取到采集位置时,确定与采集位置相对应的当前接收时间;将当前接收时间与曝光时间之间的时间间隔,确定为与采集位置相对应的延时时间。
在一些实例中,在处理器411基于延时时间和采集位置,确定与采集位置相对应的当前位置预测值时,处理器411用于:在云台获取到目标对象在前一采集图像的前一采集位置时,确定与前一采集位置相对应的前一接收时间;确定与前一采集位置相对应的前一位置预测值;根据采集位置、曝光时间、延时时间、前一接收时间和前一位置预测值,计算与采集位置相对应的当前位置预测值。
在一些实例中,在处理器411根据采集位置、曝光时间、延时时间、前一接收时间和前一位置预测值,计算与采集位置相对应的当前位置预测值时,处理器411用于:基于采集位置、曝光时间、延时时间、前一接收时间和前一位置预测值,确定与采集位置相对应的位置调整值;将位置调整值与采集位置的和值,确定为与采集位置相对应的当前位置预测值。
在一些实例中,在处理器411基于采集位置、曝光时间、延时时间、前一接收时间和前一位置预测值,确定与采集位置相对应的位置调整值时,处理器411用于:基于采集位置、前一位置预测值、曝光时间和前一接收时间,确定与目标对象相对应的移动速度;将移动速度与延时时间之间的乘积值,确定为与采集位置相对应的位置调整值。
在一些实例中,在处理器411基于采集位置、前一位置预测值、曝光时间和前一接收时间,确定与目标对象相对应的移动速度时,处理器411用于:获取采集位置与前一位置预测值之间的位置差值以及曝光时间与前一接收时间之间的时间差值;将位置差值与时间差值之间的比值,确定为与目标对象相对应的移动速度。
在一些实例中,在处理器411基于当前位置预测值,确定用于对目标对象进行跟随操作的控制参数时,处理器411用于:确定当前位置预测值与构图目标位置之间的位置偏差;基于位置偏差,确定用于对目标对象进行跟随操作的控制参数。
在一些实例中,在处理器411基于位置偏差,确定用于对目标对象进行跟随操作的控制参数时,处理器411用于:获取与采集图像相对应的画面视场角;基于画面视场角和位置偏差,确定用于对目标对象进行跟随操作的控制参数。
在一些实例中,控制参数的大小与画面视场角的大小呈负相关。
在一些实例中,在处理器411基于当前位置预测值,确定用于对目标对象进行跟随操作的控制参数时,处理器411用于:获取与云台相对应的跟随模式,跟随模式包括以下之一:单轴跟随模式、双轴跟随模式、全跟随模式;基于当前位置预测值和跟随模式,确定用于对目标对象进行跟随操作的控制参数。
在一些实例中,在处理器411基于当前位置预测值和跟随模式,确定用于对目标对象进行跟随操作的控制参数时,处理器411用于:基于当前位置预测值,确定用于对目标对象进行跟随操作的备选控制参数;在备选控制参数中,确定与跟随模式相对应的目标控制参数。
在一些实例中,在处理器411在备选控制参数中,确定与跟随模式相对应的目标控制参数时,处理器411用于:在跟随模式为单轴跟随模式时,在备选控制参数中,确定与单轴跟随模式相对应的单轴的控制参数,并将其他备选控制参数置零;在跟随模式为双轴跟随模式时,在备选控制参数中,确定与双轴跟随模式相对应的双轴的控制参数,并将其他备选控制参数置零;在跟随模式为全跟随模式时,将备选控制参数确定为与全跟随模式相对应的三轴的控制参数。
在一些实例中,在处理器411根据控制参数对云台进行控制时,处理器411用于:获取与目标对象所对应的云台的运动状态;基于云台的运动状态和控制参数对云台进行控制。
在一些实例中,在处理器411基于云台的运动状态和控制参数对云台进行控制时,处理器411用于:获取用于对目标对象进行跟随操作所对应的时长信息;在时长信息小于第一时间阈值时,则基于云台的运动状态对控制参数进行更新,获得更新后的控制参数,并基于更新后的控制参数对云台进行控制;在时长信息大于或等于第一时间阈值时,利用控制参数对云台进行控制。
在一些实例中,在处理器411基于云台的运动状态对控制参数进行更新,获得更新后的控制参数时,处理器411用于:基于云台的运动状态,确定与控制参数相对应的更新系数,其中,更新系数小于1;将更新系数与控制参数的乘积值,确定为更新后的控制参数。
在一些实例中,在处理器411基于云台的运动状态,确定与控制参数相对应的更新系数时,处理器411用于:在云台的运动状态为特定运动状态时,基于所述时长信息与所述第一时间阈值之间的比值确定与所述控制参数相对应的更新系数。
在一些实例中,在处理器411根据控制参数对云台进行控制时,处理器411用于:获取与目标对象相对应的跟随状态;基于跟随状态和控制参数对云台进行控制。
在一些实例中,在处理器411获取与目标对象相对应的跟随状态时,处理器411用于:检测进行跟随操作的目标对象是否发生改变;在目标对象由第一对象改变为第二对象时,则确定第一对象为丢失状态。
在一些实例中,在处理器411基于跟随状态和控制参数对云台进行控制时,处理器411用于:在目标对象为丢失状态时,则获取对目标对象进行跟随操作过程中所对应的丢失时长信息;根据丢失时长信息对控制参数进行更新,获得更新后控制参数;基于更新后控制参数对云台进行控制。
在一些实例中,在处理器411根据丢失时长信息对控制参数进行更新,获得更新后控制参数时,处理器411用于:在丢失时长信息大于或等于第二时间阈值时,将控制参数更新为零;在丢失时长信息小于第二时间阈值时,获取丢失时长信息与第二时间阈值之间的比值,并将1与比值之间的差值确定为与控制参数相对应的更新系数,将更新系数与控制参数的乘积值确定为更新后控制参数。
在一些实例中,在处理器411根据控制参数对云台进行控制时,处理器411用于:获取目标对象的对象类型;根据对象类型和控制参数对云台进行控制。
在一些实例中,在处理器411根据对象类型和控制参数对云台进行控制时,处理器411用于:根据对象类型对控制参数进行调整,获得调整后的参数;基于调整后的参数对云台进行控制。
在一些实例中,在处理器411根据对象类型对控制参数进行调整,获得调整后的参数时,处理器411用于:在目标对象为静止对象时,则降低云台在偏航方向所对应的控制带宽和云台在俯仰方向所对应的控制带宽;在目标对象为移动对象、且移动对象的高度大于或等于高度阈值时,则提高云台在偏航方向所对应的控制带宽,并降低云台在俯仰方向所对应的控制带宽;在目标对象为移动对象、且移动对象的高度小于高度阈值时,则提高云台在偏航方向所对应的控制带宽和云台在俯仰方向所对应的控制带宽。
在一些实例中,处理器411用于:通过显示界面获取用户针对图像采集装置所输入的执行操作;根据执行操作对图像采集装置进行控制,以使得图像采集装置确定采集位置。
在一些实例中,处理器411用于:通过设置于图像采集装置上的测距传感器获取与目标对象相对应的距离信息;将距离信息发送至图像采集装置,以使图像采集装置结合距离信息确定目标对象在采集图像中的采集位置。
在一些实例中,处理器411用于:确定图像采集装置所对应的工作模式,工作模式包括以下之一:先跟随后对焦模式、先对焦后跟随模式;利用工作模式对图像采集装置进行控制。
在一些实例中,云台设有通信串行总线USB接口,USB接口用于与图像采集装置有线通信连接。
图41所示装置可以执行图16至图34、图39至图40中所示的实施例的方法,本实施例未详细描述的部分,可参考对图16至图34、图39至图40中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图16至图34、图39至图40所示实施例中的描述,在此不再赘述。
图42为本发明实施例提供的一种云台***的控制装置的结构示意图;参考附图42所示,本实施例提供了一种云台***的控制装置,其中,云台***包括云台和与云台通信连接的图像采集装置,该云台***的控制装置可以执行与图35相对应的云台***的控制方法。具体的,本实施例中的装置可以包括:
存储器422,用于存储计算机程序;
处理器421,用于运行存储器422中存储的计算机程序以实现:
控制图像采集装置采集图像,并获取目标对象在图像中的采集位置,采集位置是通过图像采集装置所确定的;
将采集位置传输至云台;
控制云台按照控制参数进行运动,以实现对目标对象进行跟随操作,其中,控制参数是基于采集位置所确定的。
其中,云台***的控制装置的结构中还可以包括通信接口423,用于电子设备与其他设备或通信网络通信。
图42所示装置可以执行图35、图39至图40中所示的实施例的方法,本实施例未详细描述的部分,可参考对图35、图39至图40中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图35、图39至图40所示实施例中的描述,在此不再赘述。
图43为本发明实施例提供的另一种云台的控制装置的结构示意图;参考附图43所示,本实施例提供了一种云台的控制装置,用于云台,云台通信连接有图像采集装置,该云台的控制装置可以执行与图36相对应的云台的控制方法。具体的,本实施例中的装置可以包括:
存储器432,用于存储计算机程序;
处理器431,用于运行存储器432中存储的计算机程序以实现:
获取采集图像,采集图像中包括目标对象;
在采集图像中确定目标对象的位置,以基于目标对象的位置对目标对象进行跟随操作;
将目标对象的位置发送至图像采集装置,以使得图像采集装置基于目标对象的位置,确定与目标对象相对应的对焦位置,并基于对焦位置对目标对象进行对焦操作。
其中,云台的控制装置的结构中还可以包括通信接口433,用于电子设备与其他设备或通信网络通信。
图43所示装置可以执行图36、图39至图40中所示的实施例的方法,本实施例未详细描述的部分,可参考对图36、图39至图40中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图36、图39至图40所示实施例中的描述,在此不再赘述。
图44为本发明实施例提供的另一种云台***的控制装置的结构示意图;参考附图44所示,本实施例提供了另一种云台***的控制装置,其中,云台***包括云台和与云台通信连接的图像采集装置,该云台***的控制装置可以执行与图37相对应的云台***的控制方法。具体的,本实施例中的装置可以包括:
存储器442,用于存储计算机程序;
处理器441,用于运行存储器442中存储的计算机程序以实现:
控制图像采集装置采集图像,图像包括目标对象;
在图像中确定目标对象的位置;
基于目标对象的位置控制云台对目标对象进行跟随操作,并根据目标对象的位置控制图像采集装置对目标对象进行对焦操作。
其中,云台***的控制装置的结构中还可以包括通信接口443,用于电子设备与其他设备或通信网络通信。
图44所示装置可以执行图37、图39至图40中所示的实施例的方法,本实施例未详细描述的部分,可参考对图37、图39至图40中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图37、图39至图40所示实施例中的描述,在此不再赘述。
图45为本发明实施例提供的又一种云台***的控制装置的结构示意图,参考附图45所示,本实施例提供了又一种云台***的控制装置,其中,云台***包括云台和与云台通信连接的图像采集装置,该云台***的控制装置可以执行与图38相对应的云台***的控制方法。具体的,本实施例中的装置可以包括:
存储器452,用于存储计算机程序;
处理器451,用于运行存储器452中存储的计算机程序以实现:
获取第一对象在图像采集装置采集的采集图像中的采集位置,第一对象的采集位置用于云台对第一对象进行跟随操作,以及用于图像采集装置对第一对象进行对焦操作;
在第一对象改变为第二对象时,获取第二对象在图像采集装置采集的采集图像中的采集位置,以使得云台由对第一对象的跟随操作变化为基于第二对象的采集位置对第二对象进行跟随操作,以及使得图像采集装置由对第一对象的对焦操作变化为基于第二对象的位置对第二对象进行对焦操作。
其中,云台***的控制装置的结构中还可以包括通信接口453,用于电子设备与其他设备或通信网络通信。
图45所示装置可以执行图38至图40中所示的实施例的方法,本实施例未详细描述的部分,可参考对图38至图40中所示的实施例的相关说明。该技术方案的执行过程和技术效果参见图38至图40所示实施例中的描述,在此不再赘述。
可以理解,上述的任一实施例的控制装置可以独立于云台或图像采集装置,也可以是集成于云台或图像采集装置。
图46为本发明实施例提供的一种云台的控制***的结构示意图;参考附图46所示,本实施例提供了一种云台的控制***,具体的,该控制***可以包括:
云台61;
上述图41所示的云台的控制装置62,设置于云台61上,且用于与图像采集装置通信连接,并用于通过图像采集装置对云台61进行控制。
在一些实例中,本实施例中的控制***还可以包括:
测距传感器63,设置于图像采集装置上,用于获取与目标对象相对应的距离信息;
其中,云台的控制装置62与测距传感器63通信连接,用于将距离信息发送至图像采集装置,以使图像采集装置结合距离信息确定目标对象在采集图像中的采集位置。
图46所示云台的控制***的具体实现原理、实现过程和实现效果与图41所示的云台的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图41中所示的实施例的相关说明。
图47为本发明实施例提供的一种云台的控制***的结构示意图;参考附图47所示,本实施例提供了一种云台的控制***,具体的,该云台的控制***可以包括:
云台71;
上述图42所对应的云台***的控制装置73,设置于云台71上,且用于与图像采集装置72通信连接,并用于分别对图像采集装置72以及云台71进行控制。
图47所示云台的控制***的具体实现原理、实现过程和实现效果与图42所示的云台***的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图42中所示的实施例的相关说明。
图48为本发明实施例提供的另一种云台的控制***的结构示意图;参考附图48所示,本实施例提供了另一种云台的控制***,具体的,该云台的控制***可以包括:
云台81;
上述图43的云台的控制装置82,设置于云台81上,且用于与图像采集装置通信连接,并用于通过云台81对图像采集装置进行控制。
图48所示云台的控制***的具体实现原理、实现过程和实现效果与图43所示的云台的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图43中所示的实施例的相关说明。
图49为本发明实施例提供的又一种云台的控制***的结构示意图;参考附图49所示,本实施例提供了又一种云台的控制***,具体的,该云台的控制***可以包括:
云台91;
上述图44的云台***的控制装置92,设置于云台91上,且用于与图像采集装置通信连接,并用于分别对图像采集装置以及云台91进行控制。
图49所示云台的控制***的具体实现原理、实现过程和实现效果与图44所示的云台***的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图44中所示的实施例的相关说明。
图50为本发明实施例提供的另一种云台的控制***的结构示意图;参考附图50所示,本实施例提供了另一种云台的控制***,具体的,该云台的控制***可以包括:
云台101;
上述图45所对应的云台***的控制装置103,设置于云台101上,且用于与图像采集装置102通信连接,并用于分别对图像采集装置102以及云台101进行控制。
图50所示云台的控制***的具体实现原理、实现过程和实现效果与图45所示的云台***的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图45中所示的实施例的相关说明。
可以理解,上述各个实施例中的云台的控制***中的控制装置可以集成于云台,其还可以进一步包括图像采集装置,该图像采集装置可以集成于云台上,或者,也可以与云台可拆卸连接。
图51为本发明实施例提供的一种可移动平台的结构示意图一;参考附图51所示,本实施例提供了一种可移动平台,具体的,该可移动平台可以包括:
云台112;
支撑机构111,用于连接云台112;
上述图41的云台的控制装置113,设置于云台112上,且用于与图像采集装置114通信连接,并用于通过图像采集装置114对云台112进行控制。
其中,支撑机构111随可移动平台的类型而不同,例如,当可移动平台为手持云台时,支撑机构111可以为手柄,当可移动平台为机载云台时,支撑机构111可以为用于搭载云台的机身。可以理解,可移动平台包括但不限于上述说明的类型。
图51所示可移动平台的具体实现原理、实现过程和实现效果与图41所示的云台的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图41中所示的实施例的相关说明。
图52为本发明实施例提供的一种可移动平台的结构示意图二;参考附图52所示,本实施例提供了一种可移动平台,具体的,该可移动平台可以包括:
云台122;
支撑机构121,用于连接云台122;
上述图42的云台***的控制装置123,设置于云台122上,且用于与图像采集装置124通信连接,并用于分别对图像采集装置124以及云台122进行控制。
其中,支撑机构121随可移动平台的类型而不同,例如,当可移动平台为手持云台时,支撑机构121可以为手柄,当可移动平台为机载云台时,支撑机构121可以为用于搭载云台的机身。可以理解,可移动平台包括但不限于上述说明的类型。
图52所示可移动平台的具体实现原理、实现过程和实现效果与图42所示的云台***的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图42中所示的实施例的相关说明。
图53为本发明实施例提供的一种可移动平台的结构示意图三;参考附图53所示,本实施例提供了一种可移动平台,具体的,该可移动平台可以包括:
云台132;
支撑机构131,用于连接云台132;
上述图43的云台的控制装置133,设置于云台132上,且用于与图像采集装置134通信连接,并用于通过云台132对图像采集装置134进行控制。
其中,支撑机构131随可移动平台的类型而不同,例如,当可移动平台为手持云台时,支撑机构131可以为手柄,当可移动平台为机载云台时,支撑机构131可以为用于搭载云台的机身。可以理解,可移动平台包括但不限于上述说明的类型。
图53所示可移动平台的具体实现原理、实现过程和实现效果与图43所示的云台的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图43中所示的实施例的相关说明。
图54为本发明实施例提供的一种可移动平台的结构示意图四;参考附图54所示,本实施例提供了一种可移动平台,具体的,该可移动平台可以包括:
云台142;
支撑机构141,用于连接云台142;
上述图44的云台***的控制装置143,设置于云台142上,且用于与图像采集装置144通信连接,并用于分别对图像采集装置144以及云台142进行控制。
其中,支撑机构141随可移动平台的类型而不同,例如,当可移动平台为手持云台时,支撑机构141可以为手柄,当可移动平台为机载云台时,支撑机构141可以为用于搭载云台的机身。可以理解,可移动平台包括但不限于上述说明的类型。
图54所示可移动平台的具体实现原理、实现过程和实现效果与图44所示的云台***的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图44中所示的实施例的相关说明。
图55为本发明实施例提供的一种可移动平台的结构示意图五;参考附图55所示,本实施例提供了一种可移动平台,具体的,该可移动平台可以包括:
云台152;
支撑机构151,用于连接云台152;
上述图45的云台***的控制装置153,设置于云台152上,且用于与图像采集装置154通信连接,并用于分别对图像采集装置154以及云台152进行控制。
其中,支撑机构151随可移动平台的类型而不同,例如,当可移动平台为手持云台时,支撑机构151可以为手柄,当可移动平台为机载云台时,支撑机构151可以为用于搭载云台的机身。可以理解,可移动平台包括但不限于上述说明的类型。
图55所示可移动平台的具体实现原理、实现过程和实现效果与图45所示的云台***的控制装置的具体实现原理、实现过程和实现效果相类似,本实施例未详细描述的部分,可参考对图45中所示的实施例的相关说明。
可以理解,上述各个实施例中的可移动平台中的控制装置可以集成于云台,其还可以进一步包括图像采集装置,该图像采集装置也可以是集成于云台,或与云台可拆卸连接。
另外,本发明实施例提供了一种计算机可读存储介质,存储介质为计算机可读存储介质,该计算机可读存储介质中存储有程序指令,程序指令用于实现上述图2至图34、图39至图40的云台的控制方法。
本发明实施例提供了一种计算机可读存储介质,存储介质为计算机可读存储介质,该计算机可读存储介质中存储有程序指令,程序指令用于实现上述图35、图39至图40的云台***的控制方法。
本发明实施例提供了一种计算机可读存储介质,存储介质为计算机可读存储介质,该计算机可读存储介质中存储有程序指令,程序指令用于实现上述图36、图39至图40的云台的控制方法。
本发明实施例提供了一种计算机可读存储介质,存储介质为计算机可读存储介质,该计算机可读存储介质中存储有程序指令,程序指令用于实现上述图37、图39至图40的云台***的控制方法。
本发明实施例提供了一种计算机可读存储介质,存储介质为计算机可读存储介质,该计算机可读存储介质中存储有程序指令,程序指令用于实现上述图38至图40的云台***的控制方法。
以上各个实施例中的技术方案、技术特征在与本相冲突的情况下均可以单独,或者进行组合,只要未超出本领域技术人员的认知范围,均属于本申请保护范围内的等同实施例。
在本发明所提供的几个实施例中,应该理解到,所揭露的相关检测装置和方法,可以通过其它的方式实现。例如,以上所描述的检测装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个***,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,检测装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得计算机处理器(processor)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁盘或者光盘等各种可以存储程序代码的介质。
以上所述仅为本发明的实施例,并非因此限制本发明的专利范围,凡是利用本发明说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本发明的专利保护范围内。
最后应说明的是:以上各实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述各实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的范围。

Claims (172)

  1. 一种基于图像采集装置的控制方法,其特征在于,包括:
    获取所述图像采集装置确定的拍摄参数,其中,所述图像采集装置为具有手动镜头或自动镜头的相机,所述拍摄参数能够用于调节所述图像采集装置采集的采集图像;
    基于所述拍摄参数确定控制参数;
    基于所述控制参数对云台和辅助设备中的至少一种进行相应的控制,其中,所述云台用于支撑所述图像采集装置和/或所述辅助设备,所述辅助设备用于辅助所述图像采集装置进行相应的拍摄。
  2. 根据权利要求1所述的方法,其特征在于,所述图像采集装置与所述辅助设备机械耦合,所述云台用于调整所述图像采集装置以及所述辅助设备的空间位置。
  3. 根据权利要求1所述的方法,其特征在于,所述图像采集装置与所述云台可拆卸连接;和/或,
    所述辅助设备与所述云台可拆卸连接;和/或,
    所述图像采集装置与所述辅助设备可拆卸连接;和/或,
    所述图像采集装置的所述手动镜头或所述自动镜头与所述图像采集装置的机身可拆卸连接。
  4. 根据权利要求1所述的方法,其特征在于,所述云台分别与所述图像采集装置、所述辅助设备通信连接。
  5. 根据权利要求4所述的方法,其特征在于,获取所述图像采集装置确定的拍摄参数,包括:
    基于无线通信设备与所述图像采集装置建立无线通信链路,其中,所述无线通信设备设于所述云台或所述辅助设备;
    通过所述无线通信链路,获取所述拍摄参数。
  6. 根据权利要求5所述的方法,其特征在于,所述无线通信设备包括以下任意至少之一:蓝牙模块、近距离无线通信模块、无线局域网wifi模块。
  7. 根据权利要求1所述的方法,其特征在于,所述图像采集装置为具有所述手动镜头的相机,所述辅助设备包括用于对所述图像采集装置的跟焦环进行调节的跟焦电机;
    获取所述图像采集装置确定的拍摄参数,包括:
    获取所述图像采集装置确定的对焦信息;
    所述基于所述控制参数对云台和辅助设备中的至少一个进行控制,包括:
    基于所述控制参数对所述跟焦电机进行控制,以实现所述图像采集装置的跟焦操作。
  8. 根据权利要求7所述的方法,其特征在于,所述对焦信息包括以下至少之一:
    相位对焦信息、反差对焦信息。
  9. 根据权利要求8所述的方法,其特征在于,获取所述图像采集装置确定的对焦信息,包括:
    确定所述图像采集装置所对应的光圈应用模式;
    根据所述光圈应用模式,获取所述图像采集装置确定的所述相位对焦信息、反差对焦信息中的至少一种。
  10. 根据权利要求9所述的方法,其特征在于,根据所述光圈应用模式,获取所述图像采集装置确定的所述相位对焦信息、反差对焦信息中的至少一种,包括:
    在所述光圈应用模式为第一模式时,获取所述图像采集装置确定的所述反差对焦信息,或者,获取所述图像采集装置确定的所述相位对焦信息和所述反差对焦信息;和/或,
    在所述光圈应用模式为第二模式时,获取所述图像采集装置确定的所述相位对焦信息,其中,所述第一模式所对应的光圈值小于或等于设定光圈阈值,其中,所述第二模式所对应的光圈值大于所述设定光圈阈值。
  11. 根据权利要求8-10中任意一项所述的方法,其特征在于,基于所述拍摄参数确定控制参数,包括:
    获取所述相位对焦信息与跟焦环位置之间的第一映射关系;
    基于所述第一映射关系确定与所述相位对焦信息相对应的跟焦环位置;
    基于所述跟焦环位置,确定所述跟焦电机的控制参数。
  12. 根据权利要求8-10中任意一项所述的方法,其特征在于,基于所述拍摄参数确定控制参数,包括:
    确定所述跟焦电机与所述反差对焦信息相对应的当前电机位置;
    获取与所述跟焦电机相对应的设定位置范围;
    基于所述当前电机位置和所述设定位置范围,确定所述跟焦电机的第一转动速度和第一转动方向。
  13. 根据权利要求12所述的方法,其特征在于,基于所述当前电机位置和所述设定位置范围,确定所述跟焦电机的第一转动速度和第一转动方向,包括:
    获取所述当前电机位置分别与所述设定位置范围的上限值和下限值之间的第一距离和第二距离;
    基于所述第一距离和所述第二距离,确定所述跟焦电机的第一转动速度和第一转动方向。
  14. 根据权利要求12所述的方法,其特征在于,所述方法还包括:
    基于所述当前电机位置对所述设定位置范围进行更新,获得更新后位置范围;
    基于所述更新后位置范围和电机转动后位置,对所述第一转动速度进行调整,以获得第二转动速度,所述第二转动速度小于所述第一转动速度。
  15. 根据权利要求7所述的方法,其特征在于,所述辅助设备包括用于对所述图像采集装置的变焦环进行调节的变焦电机;
    获取所述图像采集装置确定的拍摄参数,包括:
    获取与所述采集图像相对应的变焦信息;
    所述基于所述控制参数对云台和辅助设备中的至少一个进行控制,包括:
    基于所述控制参数对所述变焦电机进行控制,以实现所述图像采集装置的变焦操作。
  16. 根据权利要求15所述的方法,其特征在于,基于所述拍摄参数确定控制参数,包括:
    基于所述采集图像,确定设定对象在显示画面中的初始占比,其中,所述显示画面为基于所述采集图像确定;
    基于所述初始占比和所述变焦信息,确定所述控制参数。
  17. 根据权利要求16所述的方法,其特征在于,基于所述采集图像,确定所述设定对象在显示画面中的初始占比,包括:
    获取所述设定对象的对象尺寸特征以及显示画面的画面尺寸特征;
    基于所述对象尺寸特征和所述画面尺寸特征,确定所述初始占比。
  18. 根据权利要求17所述的方法,其特征在于,所述对象尺寸特征包括对象长度尺寸,所述画面尺寸特征包括画面长度尺寸;基于所述对象尺寸特征和所述画面尺寸特征,确定所述初始占比,包括:
    将所述对象长度尺寸与所述画面长度尺寸之间的比值,确定为所述初始占比。
  19. 根据权利要求17所述的方法,其特征在于,所述对象尺寸特征包括对象宽度尺寸,所述画面尺寸特征包括画面宽度尺寸;基于所述对象尺寸特征和所述画面尺寸特征,确定所述初始占比,包括:
    将所述对象宽度尺寸与所述画面宽度尺寸之间的比值,确定为所述初始占比。
  20. 根据权利要求17所述的方法,其特征在于,所述对象尺寸特征包括对象面积尺寸,所述画面尺寸特征包括画面面积尺寸;基于所述对象尺寸特征和所述画面尺寸特征,确定所述初始占比,包括:
    将所述对象面积尺寸与所述画面面积尺寸之间的比值,确定为所述初始占比。
  21. 根据权利要求16所述的方法,其特征在于,基于所述初始占比和所述变焦信息,确定所述控制参数,包括:
    获取所述变焦电机的运动行程范围与变焦行程之间的第二映射关系、以及所述变焦电机的运动方向与变焦方向之间的第三映射关系;
    基于所述初始占比、所述变焦信息、所述第二映射关系和所述第三映射关系,确定与所述变焦电机相对应的运动行程参数和运动方向。
  22. 根据权利要求1所述的方法,其特征在于,所述辅助设备包括用于对所述图像采集装置进行补光操作的补光设备;
    获取所述图像采集装置确定的拍摄参数,包括:
    获取所述图像采集装置确定的光线检测信息;
    所述基于所述控制参数对云台和辅助设备中的至少一个进行控制,包括:
    基于所述控制参数对所述补光设备进行控制,以实现所述图像采集装置的补光操作。
  23. 根据权利要求22所述的方法,其特征在于,所述光线检测信息包括以下至少之一:
    曝光强度、光线颜色。
  24. 根据权利要求23所述的方法,其特征在于,基于所述拍摄参数确定控制参数,包括:
    确定与所述图像采集装置的采集图像相对应的目标曝光强度;
    基于所述曝光强度和所述目标曝光强度,确定所述补光设备的补偿曝光参数。
  25. 根据权利要求23所述的方法,其特征在于,基于所述拍摄参数确定控制参数,包括:
    确定与所述图像采集装置的采集图像相对应的目标场景颜色;
    基于所述光线颜色和所述目标场景颜色,确定所述补光设备的补偿颜色参数。
  26. 根据权利要求1所述的方法,其特征在于,所述图像采集装置包括防抖控制单元,所述防抖控制单元能够基于所述拍摄参数对所述图像采集装置中光学元件和图像传感器中的至少一种进行抖动补偿,以对所述图像采集装置采集的采集图像进行调节。
  27. 根据权利要求26所述的方法,其特征在于,获取所述图像采集装置确定的拍摄参数,包括:
    获取所述图像采集装置内所设置的防抖控制单元确定的防抖信息,所述防抖信息为根据所述图像采集装置的位置变化信息确定;
    所述基于所述控制参数对云台和辅助设备中的至少一个进行控制,包括:
    基于所述控制参数对所述云台进行控制,以实现所述云台的增稳操作。
  28. 根据权利要求26所述的方法,其特征在于,所述图像采集装置设有第一位置传感器,所述第一位置传感器用于检测所述位置变化信息;或,
    所述云台设有第二位置传感器,所述第二位置传感器用于检测所述位置变化信息,所述位置变化信息还用于调节所述图像采集装置的空间位置。
  29. 根据权利要求28所述的方法,其特征在于,基于所述拍摄参数确定控制参数,包括:
    基于所述防抖信息,确定所述图像采集装置的防抖灵敏度信息,所述防抖灵敏度信息用于表征针对所述云台的激励信号进行防抖响应的速度;
    获取所述云台的当前激励信号;
    基于所述当前激励信号和所述防抖灵敏度信息,确定与所述云台相对应的控制参数,以对所述云台和/或图像采集装置进行控制。
  30. 根据权利要求29所述的方法,其特征在于,基于所述当前激励信号和所述防抖灵敏度信息,确定与所述云台相对应的控制参数,包括:
    基于所述防抖灵敏度信息,确定与所述当前激励信号相对应的当前灵敏度;
    在所述当前灵敏度大于或等于灵敏度阈值时,则生成与所述云台相对应的控制参数。
  31. 根据权利要求30所述的方法,其特征在于,生成与所述云台相对应的控制参数,包括:
    对所述当前激励信号进行抑制操作,获得抑制后信号,所述抑制后信号为与所述云台相对应的控制参数。
  32. 根据权利要求30所述的方法,其特征在于,所述方法还包括:
    基于所述防抖灵敏度信息,确定与所述当前激励信号相对应的当前灵敏度;
    在所述当前灵敏度大于或等于灵敏度阈值时,生成与所述图像采集装置相对应的控制参数,以基于与所述图像采集装置相对应的控制参数对所述防抖控制单元进行控制。
  33. 根据权利要求32所述的方法,其特征在于,生成与所述图像采集装置相对应的控制参数,包括:
    生成与所述防抖控制单元相对应的停止运行参数,所述停止运行参数为与所述图像采集装置相对应的控制参数。
  34. 根据权利要求26至33中任一项所述的方法,其特征在于,所述防抖控制单元包括镜头光学防抖OIS单元和/或机身防抖IBIS单元。
  35. 根据权利要求1所述的方法,其特征在于,所述图像采集装置内设置有信号滤波单元;所述方法还包括:
    确定所述云台的抖动信息;
    将所述抖动信息发送至所述图像采集装置,以使所述图像采集装置基于所述抖动信息对所述信号滤波单元进行参数配置。
  36. 根据权利要求35所述的方法,其特征在于,所述抖动信息包括在重力方向上的抖动信息。
  37. 根据权利要求1所述的方法,其特征在于,所述图像采集装置与所述云台通信连接,获取所述图像采集装置确定的拍摄参数,包括:
    获取目标对象在采集图像中的采集位置;
    基于所述拍摄参数确定控制参数,包括:
    基于所述采集位置,确定用于对所述目标对象进行跟随操作的控制参数;
    基于所述控制参数对云台和辅助设备中的至少一种进行相应的控制,包括:
    根据所述控制参数对所述云台进行控制,以实现对所述目标对象进行跟随操作。
  38. 一种基于图像采集装置的控制装置,其特征在于,包括:
    存储器,用于存储计算机程序;
    处理器,用于运行所述存储器中存储的计算机程序以实现:
    获取所述图像采集装置确定的拍摄参数,其中,所述图像采集装置为具有手动镜头或自动镜头的相机,所述拍摄参数能够用于调节所述图像采集装置采集的采集图像;
    基于所述拍摄参数确定控制参数;
    基于所述控制参数对云台和辅助设备中的至少一种进行相应的控制,其中,所述云台用于支撑所述图像采集装置和/或所述辅助设备,所述辅助设备用于辅助所述图像采集装置进行相应的拍摄。
  39. 根据权利要求38所述的装置,其特征在于,所述图像采集装置与所述辅助设备机械耦合,所述云台用于调整所述图像采集装置以及所述辅助设备的空间位置。
  40. 根据权利要求38所述的装置,其特征在于,所述图像采集装置与所述云台可拆卸连接;和/或,
    所述辅助设备与所述云台可拆卸连接;和/或,
    所述图像采集装置与所述辅助设备可拆卸连接;和/或,
    所述图像采集装置的所述手动镜头或所述自动镜头与所述图像采集装置的机身可拆卸连接。
  41. 根据权利要求38所述的装置,其特征在于,所述云台分别与所述图像采集装置、所述辅助设备通信连接。
  42. 根据权利要求41所述的装置,其特征在于,在所述处理器获取所述图像采集装置确定的拍摄参数时,所述处理器用于:
    基于无线通信设备与所述图像采集装置建立无线通信链路,其中,所述无线通信设备设于所述云台或所述辅助设备;
    通过所述无线通信链路,获取所述拍摄参数。
  43. 根据权利要求42所述的装置,其特征在于,所述无线通信设备包括以下任意至少之一:蓝牙模块、近距离无线通信模块、无线局域网wifi模块。
  44. 根据权利要求38所述的装置,其特征在于,所述图像采集装置为具有所述手动镜头的相机,所述辅助设备包括用于对所述图像采集装置的跟焦环进行调节的跟焦电机;
    在所述处理器获取所述图像采集装置确定的拍摄参数时,所述处理器用于:获取所述图像采集装置确定的对焦信息;
    在所述处理器基于所述控制参数对云台和辅助设备中的至少一个进行控制时,所述处理器用于:基于所述控制参数对所述跟焦电机进行控制,以实现所述图像采集装置的跟焦操作。
  45. 根据权利要求44所述的装置,其特征在于,所述对焦信息包括以下至少之一:相位对焦信息、反差对焦信息。
  46. 根据权利要求45所述的装置,其特征在于,在所述处理器获取所述图像采集装置确定的对焦信息时,所述处理器用于:
    确定所述图像采集装置所对应的光圈应用模式;
    根据所述光圈应用模式,获取所述图像采集装置确定的所述相位对焦信息、反差对焦信息中的至少一种。
  47. 根据权利要求46所述的装置,其特征在于,在所述处理器根据所述光圈应用模式,获取所述图像采集装置确定的所述相位对焦信息、反差对焦信息中的至少一种时,所述处理器用于:
    在所述光圈应用模式为第一模式时,获取所述图像采集装置确定的所述反差对焦信息,或者,获取所述图像采集装置确定的所述相位对焦信息和所述反差对焦信息;和/或,
    在所述光圈应用模式为第二模式时,获取所述图像采集装置确定的所述相位对焦信息,其中,所述第一模式所对应的光圈值小于或等于设定光圈阈值,其中,所述第二模式所对应的光圈值大于所述设定光圈阈值。
  48. 根据权利要求45-47中任意一项所述的装置,其特征在于,在所述处理器基于所述拍摄参数确定控制参数时,所述处理器用于:
    获取所述相位对焦信息与跟焦环位置之间的第一映射关系;
    基于所述第一映射关系确定与所述相位对焦信息相对应的跟焦环位置;
    基于所述跟焦环位置,确定所述跟焦电机的控制参数。
  49. 根据权利要求45-47中任意一项所述的装置,其特征在于,在所述处理器基于所述拍摄参数确定控制参数时,所述处理器用于:
    确定所述跟焦电机与所述反差对焦信息相对应的当前电机位置;
    获取与所述跟焦电机相对应的设定位置范围;
    基于所述当前电机位置和所述设定位置范围,确定所述跟焦电机的第一转动速度和第一转动方向。
  50. 根据权利要求49所述的装置,其特征在于,在所述处理器基于所述当前电机位置和所述设定位置范围,确定所述跟焦电机的第一转动速度和第一转动方向时,所述处理器用于:
    获取所述当前电机位置分别与所述设定位置范围的上限值和下限值之间的第一距离和第二距离;
    基于所述第一距离和所述第二距离,确定所述跟焦电机的第一转动速度和第一转动方向。
  51. 根据权利要求49所述的装置,其特征在于,所述处理器用于:
    基于所述当前电机位置对所述设定位置范围进行更新,获得更新后位置范围;
    基于所述更新后位置范围和电机转动后位置,对所述第一转动速度进行调整,以获得第二转动速度,所述第二转动速度小于所述第一转动速度。
  52. 根据权利要求44所述的装置,其特征在于,所述辅助设备包括用于对所述图像采集装置的变焦环进行调节的变焦电机;
    在所述处理器获取所述图像采集装置确定的拍摄参数时,所述处理器用于:获取与所述采集图像相对应的变焦信息;
    在所述处理器基于所述控制参数对云台和辅助设备中的至少一个进行控制时,所述处理器用于:基于所述控制参数对所述变焦电机进行控制,以实现所述图像采集装置的变焦操作。
  53. 根据权利要求52所述的装置,其特征在于,在所述处理器基于所述拍摄参数确定控制参数时,所述处理器用于:
    基于所述采集图像,确定设定对象在显示画面中的初始占比,其中,所述显示画面为基于所述采集图像确定;
    基于所述初始占比和所述变焦信息,确定所述控制参数。
  54. 根据权利要求53所述的装置,其特征在于,在所述处理器基于所述采集图像,确定所述设定对象在显示画面中的初始占比时,所述处理器用于:
    获取所述设定对象的对象尺寸特征以及显示画面的画面尺寸特征;
    基于所述对象尺寸特征和所述画面尺寸特征,确定所述初始占比。
  55. 根据权利要求54所述的装置,其特征在于,所述对象尺寸特征包括对象长度尺寸,所述画面尺寸特征包括画面长度尺寸;在所述处理器基于所述对象尺寸特征和所述画面尺寸特征,确定所述初始占比时,所述处理器用于:
    将所述对象长度尺寸与所述画面长度尺寸之间的比值,确定为所述初始占比。
  56. 根据权利要求54所述的装置,其特征在于,所述对象尺寸特征包括对象宽度尺寸,所述画面尺寸特征包括画面宽度尺寸;在所述处理器基于所述对象尺寸特征和所述画面尺寸特征,确定所述初始占比时,所述处理器用于:
    将所述对象宽度尺寸与所述画面宽度尺寸之间的比值,确定为所述初始占比。
  57. 根据权利要求54所述的装置,其特征在于,所述对象尺寸特征包括对象面积尺寸,所述画面尺寸特征包括画面面积尺寸;在所述处理器基于所述对象尺寸特征和所述画面尺寸特征,确定所述初始占比时,所述处理器用于:
    将所述对象面积尺寸与所述画面面积尺寸之间的比值,确定为所述初始占比。
  58. 根据权利要求53所述的装置,其特征在于,在所述处理器基于所述初始占比和所述变焦信息,确定所述控制参数时,所述处理器用于:
    获取所述变焦电机的运动行程范围与变焦行程之间的第二映射关系、以及所述变焦电机的运动方向与变焦方向之间的第三映射关系;
    基于所述初始占比、所述变焦信息、所述第二映射关系和所述第三映射关系,确定与所述变焦电机相对应的运动行程参数和运动方向。
  59. 根据权利要求38所述的装置,其特征在于,所述辅助设备包括用于对所述图像采集装置进行补光操作的补光设备;
    在所述处理器获取所述图像采集装置确定的拍摄参数时,所述处理器用于:获取所述图像采集装置确定的光线检测信息;
    在所述处理器基于所述控制参数对云台和辅助设备中的至少一个进行控制时,所述处理器用于:基于所述控制参数对所述补光设备进行控制,以实现所述图像采集装置的补光操作。
  60. 根据权利要求59所述的装置,其特征在于,所述光线检测信息包括以下至少之一:曝光强度、光线颜色。
  61. 根据权利要求60所述的装置,其特征在于,在所述处理器基于所述拍摄参数确定控制参数时,所述处理器用于:
    确定与所述图像采集装置的采集图像相对应的目标曝光强度;
    基于所述曝光强度和所述目标曝光强度,确定所述补光设备的补偿曝光参数。
  62. 根据权利要求60所述的装置,其特征在于,在所述处理器基于所述拍摄参数确定控制参数时,所述处理器用于:
    确定与所述图像采集装置的采集图像相对应的目标场景颜色;
    基于所述光线颜色和所述目标场景颜色,确定所述补光设备的补偿颜色参数。
  63. 根据权利要求38所述的装置,其特征在于,所述图像采集装置包括防抖控制单元,所述防抖控制单元能够基于所述拍摄参数对所述图像采集装置中光学元件和图像传感器中的至少一种进行抖动补偿,以对所述图像采集装置采集的采集图像进行调节。
  64. 根据权利要求63所述的装置,其特征在于,在所述处理器获取所述图像采集装置确定的拍摄参数时,所述处理器用于:获取所述图像采集装置内所设置的防抖控制单元确定的防抖信息,所述防抖信息为根据所述图像采集装置的位置变化信息确定;
    在所述处理器基于所述控制参数对云台和辅助设备中的至少一个进行控制时,所述处理器用于:基于所述控制参数对所述云台进行控制,以实现所述云台的增稳操作。
  65. 根据权利要求63所述的装置,其特征在于,所述图像采集装置设有第一位置传感器,所述第一位置传感器用于检测所述位置变化信息;或,
    所述云台设有第二位置传感器,所述第二位置传感器用于检测所述位置变化信息,所述位置变化信息还用于调节所述图像采集装置的空间位置。
  66. 根据权利要求65所述的装置,其特征在于,在所述处理器基于所述拍摄参数确定控制参数时,所述处理器用于:
    基于所述防抖信息,确定所述图像采集装置的防抖灵敏度信息,所述防抖灵敏度信息用于表征针对所述云台的激励信号进行防抖响应的速度;
    获取所述云台的当前激励信号;
    基于所述当前激励信号和所述防抖灵敏度信息,确定与所述云台相对应的控制参数,以对所述云台和/或图像采集装置进行控制。
  67. 根据权利要求66所述的装置,其特征在于,在所述处理器基于所述当前激励信号和所述防抖灵敏度信息,确定与所述云台相对应的控制参数时,所述处理器用于:
    基于所述防抖灵敏度信息,确定与所述当前激励信号相对应的当前灵敏度;
    在所述当前灵敏度大于或等于灵敏度阈值时,则生成与所述云台相对应的控制参数。
  68. 根据权利要求67所述的装置,其特征在于,在所述处理器生成与所述云台相对应的控制参数时,所述处理器用于:
    对所述当前激励信号进行抑制操作,获得抑制后信号,所述抑制后信号为与所述云台相对应的控制参数。
  69. 根据权利要求66所述的装置,其特征在于,所述处理器还用于:
    基于所述防抖灵敏度信息,确定与所述当前激励信号相对应的当前灵敏度;
    在所述当前灵敏度大于或等于灵敏度阈值时,生成与所述图像采集装置相对应的控制参数,以基于与所述图像采集装置相对应的控制参数对所述防抖控制单元进行控制。
  70. 根据权利要求69所述的装置,其特征在于,在所述处理器生成与所述图像采集装置相对应的控制参数时,所述处理器用于:
    生成与所述防抖控制单元相对应的停止运行参数,所述停止运行参数为与所述图像采集装置相对应的控制参数。
  71. 根据权利要求63-70中任一项所述的装置,其特征在于,所述防抖控制单元包括镜头光学防抖OIS单元和/或机身防抖IBIS单元。
  72. 根据权利要求38所述的装置,其特征在于,所述图像采集装置内设置有信号滤波单元;所述处理器用于:
    确定所述云台的抖动信息;
    将所述抖动信息发送至所述图像采集装置,以使所述图像采集装置基于所述抖动信息对所述信号滤波单元进行参数配置。
  73. 根据权利要求72所述的装置,其特征在于,所述抖动信息包括在重力方向上的抖动信息。
  74. 根据权利要求38所述的装置,其特征在于,所述图像采集装置与所述云台通信连接,在所述处理器获取所述图像采集装置确定的拍摄参数时,所述处理器用于:获取目标对象在采集图像中的采集位置;
    在所述处理器基于所述拍摄参数确定控制参数时,所述处理器用于:基于所述采集位置,确定用于对所述目标对象进行跟随操作的控制参数;
    在所述处理器基于所述控制参数对云台和辅助设备中的至少一种进行相应的控制时,所述处理器用于:根据所述控制参数对所述云台进行控制,以实现对所述目标对象进行跟随操作。
  75. 一种计算机可读存储介质,其特征在于,所述存储介质为计算机可读存储介质,该计算机可读存储介质中存储有程序指令,所述程序指令用于实现权利要求1-37中任意一项所述的基于图像采集装置的控制方法。
  76. 一种云台,其特征在于,包括:
    云台主体;
    权利要求38-74中任意一项所述的基于图像采集装置的控制装置,设置于所述云台主体。
  77. 一种云台的控制方法,其特征在于,包括:
    获取目标对象在采集图像中的采集位置,所述采集位置是通过图像采集装置所确定的,所述图像采集装置为具有手动镜头或自动镜头的相机,且所述图像采集装置与所述云台通信连接;
    基于所述采集位置,确定用于对所述目标对象进行跟随操作的控制参数;
    根据所述控制参数对所述云台进行控制,以实现对所述目标对象进行跟随操作。
  78. 根据权利要求77所述的方法,其特征在于,获取目标对象在采集图像中的采集位置,包括:
    通过图像采集装置获取与所述目标对象相对应的目标对焦位置;
    将所述目标对焦位置确定为目标对象在采集图像中的采集位置。
  79. 根据权利要求78所述的方法,其特征在于,通过图像采集装置获取与所述目标对象相对应的目标对焦位置,包括:
    通过图像采集装置获取与目标对象相对应的历史对焦位置和当前对焦位置;
    基于所述历史对焦位置和所述当前对焦位置,确定与所述目标对象相对应的目标对焦位置。
  80. 根据权利要求79所述的方法,其特征在于,基于所述历史对焦位置和所述当前对焦位置,确定与所述目标对象相对应的目标对焦位置,包括:
    确定与所述历史对焦位置相对应的历史对象部位和与所述当前对焦位置相对应的当前对象部位;
    根据所述历史对象部位与所述当前对象部位,确定与所述目标对象相对应的目标对焦位置。
  81. 根据权利要求80所述的方法,其特征在于,根据所述历史对象部位与所述当前对象部位,确定与所述目标对象相对应的目标对焦位置,包括:
    在所述历史对象部位与所述当前对象部位为同一目标对象的不同部位时,则获取所述历史对象部位与所述当前对象部位之间的相对位置信息;
    基于所述相对位置信息对所述当前对焦位置进行调整,获得与所述目标对象相对应的目标对焦位置。
  82. 根据权利要求81所述的方法,其特征在于,基于所述相对位置信息对所述当前对焦位置进行调整,获得与所述目标对象相对应的目标对焦位置,包括:
    在所述相对位置信息大于或等于预设阈值时,基于所述相对位置信息对所述当前对焦位置进行调整,获得与所述目标对象相对应的目标对焦位置;
    在所述相对位置信息小于所述预设阈值时,将所述当前对焦位置确定为与所述目标对象相对应的目标对焦位置。
  83. 根据权利要求80所述的方法,其特征在于,根据所述历史对象部位与所述当前对象部位,确定与所述目标对象相对应的目标对焦位置,包括:
    在所述历史对象部位与所述当前对象部位为同一目标对象的不同部位时,基于所述当前对焦位置对构图目标位置进行更新,获得第一更新后构图目标位置;
    基于所述第一更新后构图目标位置对所述目标对象进行跟随操作。
  84. 根据权利要求77所述的方法,其特征在于,所述方法还包括:
    检测进行跟随操作的目标对象是否发生改变;
    在所述目标对象由第一对象改变为第二对象时,获取所述第二对象在所述采集图像中的采集位置;
    基于所述第二对象在所述采集图像中的采集位置对构图目标位置进行更新,获得与所述第二对象相对应的第二更新后构图目标位置,以基于所述第二更新后构图目标位置对所述第二对象进行跟随操作。
  85. 根据权利要求77所述的方法,其特征在于,基于所述采集位置,确定用于对所述目标对象进行跟随操作的控制参数,包括:
    计算与所述采集位置相对应的当前位置预测值;
    基于所述当前位置预测值,确定用于对所述目标对象进行跟随操作的控制参数。
  86. 根据权利要求85所述的方法,其特征在于,计算与所述采集位置相对应的当前位置预测值,包括:
    确定与所述采集位置相对应的延时时间,所述延时时间用于指示所述云台经由所述图像采集装置获得所述采集位置所需要的时长;
    基于所述延时时间和所述采集位置,确定与所述采集位置相对应的当前位置预测值。
  87. 根据权利要求86所述的方法,其特征在于,确定与所述采集位置相对应的延时时间,包括:
    获取与所述采集图像相对应的曝光时间;
    在所述云台获取到所述采集位置时,确定与所述采集位置相对应的当前接收时间;
    将所述当前接收时间与所述曝光时间之间的时间间隔,确定为与所述采集位置相对应的延时时间。
  88. 根据权利要求87所述的方法,其特征在于,基于所述延时时间和所述采集位置,确定与所述采集位置相对应的当前位置预测值,包括:
    在所述云台获取到所述目标对象在前一采集图像中的前一采集位置时,确定与所述前一采集位置相对应的前一接收时间;
    确定与所述前一采集位置相对应的前一位置预测值;
    根据所述采集位置、所述曝光时间、所述延时时间、所述前一接收时间和所述前一位置预测值,计算与所述采集位置相对应的当前位置预测值。
  89. 根据权利要求88所述的方法,其特征在于,根据所述采集位置、所述曝光时间、所述延时时间、所述前一接收时间和所述前一位置预测值,计算与所述采集位置相对应的当前位置预测值,包括:
    基于所述采集位置、所述曝光时间、所述延时时间、所述前一接收时间和所述前一位置预测值,确定与所述采集位置相对应的位置调整值;
    将所述位置调整值与所述采集位置的和值,确定为与所述采集位置相对应的当前位置预测值。
  90. 根据权利要求89所述的方法,其特征在于,基于所述采集位置、所述曝光时间、所述延时时间、所述前一接收时间和所述前一位置预测值,确定与所述采集位置相对应的位置调整值,包括:
    基于所述采集位置、所述前一位置预测值、所述曝光时间和所述前一接收时间,确定与所述目标对象相对应的移动速度;
    将所述移动速度与所述延时时间之间的乘积值,确定为与所述采集位置相对应的位置调整值。
  91. 根据权利要求90所述的方法,其特征在于,基于所述采集位置、所述前一位置预测值、所述曝光时间和所述前一接收时间,确定与所述目标对象相对应的移动速度,包括:
    获取所述采集位置与前一位置预测值之间的位置差值以及所述曝光时间与所述前一接收时间之间的时间差值;
    将所述位置差值与所述时间差值之间的比值,确定为与所述目标对象相对应的移动速度。
  92. 根据权利要求85所述的方法,其特征在于,基于所述当前位置预测值,确定用于对所述目标对象进行跟随操作的控制参数,包括:
    确定所述当前位置预测值与构图目标位置之间的位置偏差;
    基于所述位置偏差,确定用于对所述目标对象进行跟随操作的控制参数。
  93. 根据权利要求92所述的方法,其特征在于,基于所述位置偏差,确定用于对所述目标对象进行跟随操作的控制参数,包括:
    获取与所述采集图像相对应的画面视场角;
    基于所述画面视场角和所述位置偏差,确定用于对所述目标对象进行跟随操作的控制参数。
  94. 根据权利要求93所述的方法,其特征在于,所述控制参数的大小与所述画面视场角的大小呈负相关。
  95. 根据权利要求85所述的方法,其特征在于,基于所述当前位置预测值,确定用于对所述目标对象进行跟随操作的控制参数,包括:
    获取与所述云台相对应的跟随模式,所述跟随模式包括以下之一:单轴跟随模式、双轴跟随模式、全跟随模式;
    基于所述当前位置预测值和所述跟随模式,确定用于对所述目标对象进行跟随操作的控制参数。
  96. 根据权利要求95所述的方法,其特征在于,基于所述当前位置预测值和所述跟随模式,确定用于对所述目标对象进行跟随操作的控制参数,包括:
    基于所述当前位置预测值,确定用于对所述目标对象进行跟随操作的备选控制参数;
    在所述备选控制参数中,确定与所述跟随模式相对应的目标控制参数。
  97. 根据权利要求96所述的方法,其特征在于,在所述备选控制参数中,确定与所述跟随模式相对应的目标控制参数,包括:
    在所述跟随模式为所述单轴跟随模式时,在所述备选控制参数中,确定与所述单轴跟随模式相对应的单轴的控制参数,并将其他备选控制参数置零;
    在所述跟随模式为所述双轴跟随模式时,在所述备选控制参数中,确定与所述双轴跟随模式相对应的双轴的控制参数,并将其他备选控制参数置零;
    在所述跟随模式为所述全跟随模式时,将所述备选控制参数确定为与所述全跟随模式相对应的三轴的控制参数。
  98. 根据权利要求77-97中任意一项所述的方法,其特征在于,根据所述控制参数对所述云台进行控制,包括:
    获取与所述目标对象所对应的云台的运动状态;
    基于所述云台的运动状态和所述控制参数对所述云台进行控制。
  99. 根据权利要求98所述的方法,其特征在于,基于所述云台的运动状态和所述控制参数对所述云台进行控制,包括:
    获取用于对所述目标对象进行跟随操作所对应的时长信息;
    在所述时长信息小于第一时间阈值时,则基于所述云台的运动状态对所述控制参数进行更新,获得更新后的控制参数,并基于所述更新后的控制参数对所述云台进行控制;
    在所述时长信息大于或等于第一时间阈值时,则利用所述控制参数对所述云台进行控制。
  100. 根据权利要求99所述的方法,其特征在于,基于所述云台的运动状态对所述控制参数进行更新,获得更新后的控制参数,包括:
    基于所述云台的运动状态,确定与所述控制参数相对应的更新系数,其中,所述更新系数小于1;
    将所述更新系数与所述控制参数的乘积值,确定为所述更新后的控制参数。
  101. 根据权利要求100所述的方法,其特征在于,基于所述云台的运动状态,确定与所述控制参数相对应的更新系数,包括:
    在所述云台的运动状态为特定运动状态时,基于所述时长信息与所述第一时间阈值之间的比值确定与所述控制参数相对应的更新系数。
  102. 根据权利要求77-97中任意一项所述的方法,其特征在于,根据所述控制参数对所述云台进行控制,包括:
    获取与所述目标对象相对应的跟随状态;
    基于所述跟随状态和所述控制参数对所述云台进行控制。
  103. 根据权利要求102所述的方法,其特征在于,获取与所述目标对象相对应的跟随状态,包括:
    检测进行跟随操作的目标对象是否发生改变;
    在所述目标对象由第一对象改变为第二对象时,则确定所述第一对象为丢失状态。
  104. 根据权利要求102所述的方法,其特征在于,基于所述跟随状态和所述控制参数对所述云台进行控制,包括:
    在所述目标对象为丢失状态时,则获取对所述目标对象进行跟随操作过程中所对应的丢失时长信息;
    根据所述丢失时长信息对所述控制参数进行更新,获得更新后的控制参数;
    基于所述更新后的控制参数对所述云台进行控制。
  105. 根据权利要求104所述的方法,其特征在于,根据所述丢失时长信息对所述控制参数进行更新,获得更新后控制参数,包括:
    在所述丢失时长信息大于或等于第二时间阈值时,将所述控制参数更新为零;
    在所述丢失时长信息小于第二时间阈值时,获取所述丢失时长信息与所述第二时间阈值之间的比值,并将1与所述比值之间的差值确定为与所述控制参数相对应的更新系数,将所述更新系数与所述控制参数的乘积值确定为所述更新后控制参数。
  106. 根据权利要求77-97中任意一项所述的方法,其特征在于,根据所述控制参数对所述云台进行控制,包括:
    获取所述目标对象的对象类型;
    根据所述对象类型和所述控制参数对所述云台进行控制。
  107. 根据权利要求106所述的方法,其特征在于,根据所述对象类型和所述控制参数对所述云台进行控制,包括:
    根据所述对象类型对所述控制参数进行调整,获得调整后的参数;
    基于所述调整后的参数对所述云台进行控制。
  108. 根据权利要求107所述的方法,其特征在于,根据所述对象类型对所述控制参数进行调整,获得调整后的参数,包括:
    在所述目标对象为静止对象时,则降低所述云台在偏航方向所对应的控制带宽和所述云台在俯仰方向所对应的控制带宽;
    在所述目标对象为移动对象、且所述移动对象的高度大于或等于高度阈值时,则提高所述云台在偏航方向所对应的控制带宽,并降低所述云台在俯仰方向所对应的控制带宽;
    在所述目标对象为移动对象、且所述移动对象的高度小于高度阈值时,则提高所述云台在偏航方向所对应的控制带宽和所述云台在俯仰方向所对应的控制带宽。
  109. 根据权利要求77-97中任意一项所述的方法,其特征在于,所述方法还包括:
    通过显示界面获取用户针对所述图像采集装置所输入的执行操作;
    根据所述执行操作对所述图像采集装置进行控制,以使得所述图像采集装置确定所述采集位置。
  110. 根据权利要求77-97中任意一项所述的方法,其特征在于,所述方法还包括:
    通过设置于图像采集装置上的测距传感器获取与所述目标对象相对应的距离信息;
    将所述距离信息发送至所述图像采集装置,以使所述图像采集装置结合所述距离信息确定目标对象在采集图像中的采集位置。
  111. 根据权利要求77-97中任意一项所述的方法,其特征在于,所述方法还包括:
    确定所述图像采集装置所对应的工作模式,所述工作模式包括以下之一:先跟随后对焦模式、先对焦后跟随模式;
    利用所述工作模式对所述图像采集装置进行控制。
  112. 根据权利要求77-97中任意一项所述的方法,其特征在于,所述云台设有通信串行总线USB接口,所述USB接口用于与所述图像采集装置有线通信连接。
  113. 一种云台***的控制方法,其特征在于,其中,所述云台***包括云台和与云台通信连接的图像采集装置,所述图像采集装置为具有手动镜头或自动镜头的相机,所述方法包括:
    控制所述图像采集装置采集图像,并获取目标对象在所述图像中的采集位置,所述采集位置是通过所述图像采集装置所确定的;
    将所述采集位置传输至所述云台;
    控制所述云台按照控制参数进行运动,以实现对所述目标对象进行跟随操作,其中,所述控制参数是基于所述采集位置所确定的。
  114. 一种云台的控制方法,其特征在于,用于云台,所述云台通信连接有图像采集装置,所述方法包括:
    获取采集图像,所述采集图像中包括目标对象;
    在所述采集图像中确定所述目标对象的位置,以基于所述目标对象的位置对所述目标对象进行跟随操作;
    将所述目标对象的位置发送至所述图像采集装置,以使得所述图像采集装置基于所述目标对象的位置,确定与所述目标对象相对应的对焦位置,并基于所述对焦位置对所述目标对象进行对焦操作。
  115. 一种云台***的控制方法,其特征在于,所述云台***包括云台和与所述云台通信连接的图像采集装置,所述方法包括:
    控制所述图像采集装置采集图像,所述图像包括目标对象;
    在所述图像中确定所述目标对象的位置;
    基于所述目标对象的位置控制所述云台对所述目标对象进行跟随操作,并根据所述目标对象的位置控制所述图像采集装置对所述目标对象进行对焦操作。
  116. 一种云台***的控制方法,其特征在于,所述云台***包括云台和与所述云台通信连接的图像采集装置,所述方法包括:
    获取第一对象在所述图像采集装置采集的采集图像中的采集位置,所述第一对象的采集位置用于所述云台对所述第一对象进行跟随操作,以及用于所述图像采集装置对所述第一对象进行对焦操作;
    在所述第一对象改变为第二对象时,获取所述第二对象在所述图像采集装置采集的采集图像中的采集位置,以使得所述云台由对所述第一对象的跟随操作变化为基于所述第二对象的采集位置对所述第二对象进行跟随操作,以及使得所述图像采集装置由对所述第一对象的对焦操作变化为基于所述第二对象的位置对所述第二对象进行对焦操作。
  117. 一种云台的控制装置,其特征在于,所述装置包括:
    存储器,用于存储计算机程序;
    处理器,用于运行所述存储器中存储的计算机程序以实现:
    获取目标对象在采集图像中的采集位置,所述采集位置是通过图像采集装置所确定的,所述图像采集装置为具有手动镜头或自动镜头的相机,且所述图像采集装置与所述云台通信连接;
    基于所述采集位置,确定用于对所述目标对象进行跟随操作的控制参数;
    根据所述控制参数对所述云台进行控制,以实现对所述目标对象进行跟随操作。
  118. 根据权利要求117所述的装置,其特征在于,在所述处理器获取目标对象在采集图像中的采集位置时,所述处理器用于:
    通过图像采集装置获取与所述目标对象相对应的目标对焦位置;
    将所述目标对焦位置确定为目标对象在采集图像中的采集位置。
  119. 根据权利要求118所述的装置,其特征在于,在所述处理器通过图像采集装置获取与所述目标对象相对应的目标对焦位置时,所述处理器用于:
    通过图像采集装置获取与目标对象相对应的历史对焦位置和当前对焦位置;
    基于所述历史对焦位置和所述当前对焦位置,确定与所述目标对象相对应的目标对焦位置。
  120. 根据权利要求119所述的装置,其特征在于,在所述处理器基于所述历史对焦位置和所述当前对焦位置,确定与所述目标对象相对应的目标对焦位置时,所述处理器用于:
    确定与所述历史对焦位置相对应的历史对象部位和与所述当前对焦位置相对应的当前对象部位;
    根据所述历史对象部位与所述当前对象部位,确定与所述目标对象相对应的目标对焦位置。
  121. 根据权利要求120所述的装置,其特征在于,在所述处理器根据所述历史对象部位与所述当前对象部位,确定与所述目标对象相对应的目标对焦位置时,所述处理器用于:
    在所述历史对象部位与所述当前对象部位为同一目标对象的不同部位时,则获取所述历史对象部位与所述当前对象部位之间的相对位置信息;
    基于所述相对位置信息对所述当前对焦位置进行调整,获得与所述目标对象相对应的目标对焦位置。
  122. 根据权利要求121所述的装置,其特征在于,在所述处理器基于所述相对位置信息对所述当前对焦位置进行调整,获得与所述目标对象相对应的目标对焦位置时,所述处理器用于:
    在所述相对位置信息大于或等于预设阈值时,基于所述相对位置信息对所述当前对焦位置进行调整,获得与所述目标对象相对应的目标对焦位置;
    在所述相对位置信息小于所述预设阈值时,将所述当前对焦位置确定为与所述目标对象相对应的目标对焦位置。
  123. 根据权利要求120所述的装置,其特征在于,在所述处理器根据所述历史对象部位与所述当前对象部位,确定与所述目标对象相对应的目标对焦位置时,所述处理器用于:
    在所述历史对象部位与所述当前对象部位为同一目标对象的不同部位时,基于所述当前对焦位置对构图目标位置进行更新,获得第一更新后构图目标位置;
    基于所述第一更新后构图目标位置对所述目标对象进行跟随操作。
  124. 根据权利要求117所述的装置,其特征在于,所述处理器用于:
    检测进行跟随操作的目标对象是否发生改变;
    在所述目标对象由第一对象改变为第二对象时,获取所述第二对象在所述采集图像中的采集位置;
    基于所述第二对象在所述采集图像中的采集位置对构图目标位置进行更新,获得与所述第二对象相对应的第二更新后构图目标位置,以基于所述第二更新后构图目标位置对所述第二对象进行跟随操作。
  125. 根据权利要求117所述的装置,其特征在于,在所述处理器基于所述采集位置,确定用于对所述目标对象进行跟随操作的控制参数时,所述处理器用于:
    计算与所述采集位置相对应的当前位置预测值;
    基于所述当前位置预测值,确定用于对所述目标对象进行跟随操作的控制参数。
  126. 根据权利要求125所述的装置,其特征在于,在所述处理器计算与所述采集位置相对应的当前位置预测值时,所述处理器用于:
    确定与所述采集位置相对应的延时时间,所述延时时间用于指示所述云台经由所述图像采集装置获得所述采集位置所需要的时长;
    基于所述延时时间和所述采集位置,确定与所述采集位置相对应的当前位置预测值。
  127. 根据权利要求126所述的装置,其特征在于,在所述处理器确定与所述采集位置相对应的延时时间时,所述处理器用于:
    获取与所述采集图像相对应的曝光时间;
    在所述云台获取到所述采集位置时,确定与所述采集位置相对应的当前接收时间;
    将所述当前接收时间与所述曝光时间之间的时间间隔,确定为与所述采集位置相对应的延时时间。
  128. 根据权利要求127所述的装置,其特征在于,在所述处理器基于所述延时时间和所述采集位置,确定与所述采集位置相对应的当前位置预测值时,所述处理器用于:
    在所述云台获取到所述目标对象在前一采集图像中的前一采集位置时,确定与前一采集位置相对应的前一接收时间;
    确定与所述前一采集位置相对应的前一位置预测值;
    根据所述采集位置、所述曝光时间、所述延时时间、所述前一接收时间和所述前一位置预测值,计算与所述采集位置相对应的当前位置预测值。
  129. 根据权利要求128所述的装置,其特征在于,在所述处理器根据所述采集位置、所述曝光时间、所述延时时间、所述前一接收时间和所述前一位置预测值,计算与所述采集位置相对应的当前位置预测值时,所述处理器用于:
    基于所述采集位置、所述曝光时间、所述延时时间、所述前一接收时间和所述前一位置预测值,确定与所述采集位置相对应的位置调整值;
    将所述位置调整值与所述采集位置的和值,确定为与所述采集位置相对应的当前位置预测值。
  130. 根据权利要求129所述的装置,其特征在于,在所述处理器基于所述采集位置、所述曝光时间、所述延时时间、所述前一接收时间和所述前一位置预测值,确定与所述采集位置相对应的位置调整值时,所述处理器用于:
    基于所述采集位置、所述前一位置预测值、所述曝光时间和所述前一接收时间,确定与所述目标对象相对应的移动速度;
    将所述移动速度与所述延时时间之间的乘积值,确定为与所述采集位置相对应的位置调整值。
  131. 根据权利要求130所述的装置,其特征在于,在所述处理器基于所述采集位置、所述前一位置预测值、所述曝光时间和所述前一接收时间,确定与所述目标对象相对应的移动速度时,所述处理器用于:
    获取所述采集位置与前一位置预测值之间的位置差值以及所述曝光时间与所述前一接收时间之间的时间差值;
    将所述位置差值与所述时间差值之间的比值,确定为与所述目标对象相对应的移动速度。
  132. 根据权利要求125所述的装置,其特征在于,在所述处理器基于所述当前位置预测值,确定用于对所述目标对象进行跟随操作的控制参数时,所述处理器用于:
    确定所述当前位置预测值与构图目标位置之间的位置偏差;
    基于所述位置偏差,确定用于对所述目标对象进行跟随操作的控制参数。
  133. 根据权利要求132所述的装置,其特征在于,在所述处理器基于所述位置偏差,确定用于对所述目标对象进行跟随操作的控制参数时,所述处理器用于:
    获取与所述采集图像相对应的画面视场角;
    基于所述画面视场角和所述位置偏差,确定用于对所述目标对象进行跟随操作的控制参数。
  134. 根据权利要求133所述的装置,其特征在于,所述控制参数的大小与所述画面视场角的大小呈负相关。
  135. 根据权利要求125所述的装置,其特征在于,在所述处理器基于所述当前位置预测值,确定用于对所述目标对象进行跟随操作的控制参数时,所述处理器用于:
    获取与所述云台相对应的跟随模式,所述跟随模式包括以下之一:单轴跟随模式、双轴跟随模式、全跟随模式;
    基于所述当前位置预测值和跟随模式,确定用于对所述目标对象进行跟随操作的控制参数。
  136. 根据权利要求135所述的装置,其特征在于,在所述处理器基于所述当前位置预测值和跟随模式,确定用于对所述目标对象进行跟随操作的控制参数时,所述处理器用于:
    基于所述当前位置预测值,确定用于对所述目标对象进行跟随操作的备选控制参数;
    在所述备选控制参数中,确定与所述跟随模式相对应的目标控制参数。
  137. 根据权利要求136所述的装置,其特征在于,在所述处理器在所述备选控制参数中,确定与所述跟随模式相对应的目标控制参数时,所述处理器用于:
    在所述跟随模式为所述单轴跟随模式时,在所述备选控制参数中,确定与所述单轴跟随模式相对应的单轴的控制参数,并将其他备选控制参数置零;
    在所述跟随模式为所述双轴跟随模式时,在所述备选控制参数中,确定与所述双轴跟随模式相对应的双轴的控制参数,并将其他备选控制参数置零;
    在所述跟随模式为所述全跟随模式时,将所述备选控制参数确定为与所述全跟随模式相对应的三轴的控制参数。
  138. 根据权利要求117-137中任意一项所述的装置,其特征在于,在所述处理器根据所述控制参数对所述云台进行控制时,所述处理器用于:
    获取与所述目标对象所对应的云台的运动状态;
    基于所述云台的运动状态和所述控制参数对所述云台进行控制。
  139. 根据权利要求138所述的装置,其特征在于,在所述处理器基于所述云台的运动状态和所述控制参数对所述云台进行控制时,所述处理器用于:
    获取用于对所述目标对象进行跟随操作所对应的时长信息;
    在所述时长信息小于第一时间阈值时,则基于所述云台的运动状态对所述控制参数进行更新,获得更新后的控制参数,并基于所述更新后的控制参数对所述云台进行控制;
    在所述时长信息大于或等于第一时间阈值时,利用所述控制参数对所述云台进行控制。
  140. 根据权利要求139所述的装置,其特征在于,在所述处理器基于所述云台的运动状态对所述控制参数进行更新,获得更新后的控制参数时,所述处理器用于:
    基于所述云台的运动状态,确定与所述控制参数相对应的更新系数,其中,所述更新系数小于1;
    将所述更新系数与所述控制参数的乘积值,确定为所述更新后的控制参数。
  141. 根据权利要求140所述的装置,其特征在于,在所述处理器基于所述云台的运动状态,确定与所述控制参数相对应的更新系数时,所述处理器用于:
    在所述云台的运动状态为特定运动状态时,基于所述时长信息与所述第一时间阈值之间的比值确定与所述控制参数相对应的更新系数。
  142. 根据权利要求117-137中任意一项所述的装置,其特征在于,在所述处理器根据所述控制参数对所述云台进行控制时,所述处理器用于:
    获取与所述目标对象相对应的跟随状态;
    基于所述跟随状态和所述控制参数对所述云台进行控制。
  143. 根据权利要求142所述的装置,其特征在于,在所述处理器获取与所述目标对象相对应的跟随状态时,所述处理器用于:
    检测进行跟随操作的目标对象是否发生改变;
    在所述目标对象由第一对象改变为第二对象时,则确定所述第一对象为丢失状态。
  144. 根据权利要求142所述的装置,其特征在于,在所述处理器基于所述跟随状态和所述控制参数对所述云台进行控制时,所述处理器用于:
    在所述目标对象为丢失状态时,则获取对所述目标对象进行跟随操作过程中所对应的丢失时长信息;
    根据所述丢失时长信息对所述控制参数进行更新,获得更新后的控制参数;
    基于所述更新后的控制参数对所述云台进行控制。
  145. 根据权利要求144所述的装置,其特征在于,在所述处理器根据所述丢失时长信息对所述控制参数进行更新,获得更新后控制参数时,所述处理器用于:
    在所述丢失时长信息大于或等于第二时间阈值时,将所述控制参数更新为零;
    在所述丢失时长信息小于第二时间阈值时,获取所述丢失时长信息与所述第二时间阈值之间的比值,并将1与所述比值之间的差值确定为与所述控制参数相对应的更新系数,将所述更新系数与所述控制参数的乘积值确定为所述更新后的控制参数。
  146. The apparatus according to any one of claims 117-137, wherein, when controlling the gimbal according to the control parameter, the processor is configured to:
    acquire an object type of the target object; and
    control the gimbal according to the object type and the control parameter.
  147. The apparatus according to claim 146, wherein, when controlling the gimbal according to the object type and the control parameter, the processor is configured to:
    adjust the control parameter according to the object type to obtain an adjusted parameter; and
    control the gimbal based on the adjusted parameter.
  148. The apparatus according to claim 147, wherein, when adjusting the control parameter according to the object type to obtain the adjusted parameter, the processor is configured to:
    when the target object is a stationary object, reduce the control bandwidth of the gimbal in the yaw direction and the control bandwidth of the gimbal in the pitch direction;
    when the target object is a moving object and the height of the moving object is greater than or equal to a height threshold, increase the control bandwidth of the gimbal in the yaw direction and reduce the control bandwidth of the gimbal in the pitch direction; and
    when the target object is a moving object and the height of the moving object is less than the height threshold, increase the control bandwidth of the gimbal in the yaw direction and the control bandwidth of the gimbal in the pitch direction.
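Claim 148 ties the per-axis control bandwidth to the object type: stationary targets get a calmer response on both axes; tall moving targets (e.g. a walking person) get a fast yaw but calm pitch; low moving targets get both axes fast. A sketch with illustrative scale factors, since the claim fixes only the direction of each change, not its size:

```python
def adjust_bandwidth(yaw_bw: float,
                     pitch_bw: float,
                     is_moving: bool,
                     height: float = 0.0,
                     height_threshold: float = 1.0) -> tuple:
    """Raise/lower per-axis control bandwidth per claim 148 (factors assumed)."""
    UP, DOWN = 1.5, 0.5  # illustrative factors only
    if not is_moving:
        return yaw_bw * DOWN, pitch_bw * DOWN  # stationary: calm both axes
    if height >= height_threshold:
        return yaw_bw * UP, pitch_bw * DOWN    # tall mover: fast yaw, calm pitch
    return yaw_bw * UP, pitch_bw * UP          # low mover: fast both axes

print(adjust_bandwidth(10.0, 10.0, is_moving=True, height=1.8))  # -> (15.0, 5.0)
```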
  149. The apparatus according to any one of claims 117-137, wherein the processor is configured to:
    acquire, through a display interface, an execution operation input by a user for the image capturing apparatus; and
    control the image capturing apparatus according to the execution operation, so that the image capturing apparatus determines the capture position.
  150. The apparatus according to any one of claims 117-137, wherein the processor is configured to:
    acquire distance information corresponding to the target object through a distance measurement sensor provided on the image capturing apparatus; and
    send the distance information to the image capturing apparatus, so that the image capturing apparatus determines the capture position of the target object in the captured image in combination with the distance information.
  151. The apparatus according to any one of claims 117-137, wherein the processor is configured to:
    determine an operating mode corresponding to the image capturing apparatus, the operating mode being one of: a follow-then-focus mode and a focus-then-follow mode; and
    control the image capturing apparatus using the operating mode.
  152. The apparatus according to any one of claims 117-137, wherein the gimbal is provided with a universal serial bus (USB) interface, and the USB interface is configured for a wired communication connection with the image capturing apparatus.
  153. A control apparatus for a gimbal system, wherein the gimbal system comprises a gimbal and an image capturing apparatus communicatively connected to the gimbal, the image capturing apparatus being a camera with a manual lens or an automatic lens, and the control apparatus comprises:
    a memory for storing a computer program; and
    a processor for running the computer program stored in the memory to:
    control the image capturing apparatus to capture an image, and acquire a capture position of a target object in the image, the capture position being determined by the image capturing apparatus;
    transmit the capture position to the gimbal; and
    control the gimbal to move according to a control parameter so as to perform a follow operation on the target object, wherein the control parameter is determined based on the capture position.
  154. A control apparatus for a gimbal, configured for a gimbal communicatively connected to an image capturing apparatus, the control apparatus comprising:
    a memory for storing a computer program; and
    a processor for running the computer program stored in the memory to:
    acquire a captured image, the captured image including a target object;
    determine a position of the target object in the captured image, so as to perform a follow operation on the target object based on the position of the target object; and
    send the position of the target object to the image capturing apparatus, so that the image capturing apparatus determines, based on the position of the target object, a focus position corresponding to the target object and performs a focusing operation on the target object based on the focus position.
  155. A control apparatus for a gimbal system, wherein the gimbal system comprises a gimbal and an image capturing apparatus communicatively connected to the gimbal, and the control apparatus comprises:
    a memory for storing a computer program; and
    a processor for running the computer program stored in the memory to:
    control the image capturing apparatus to capture an image, the image including a target object;
    determine a position of the target object in the image; and
    control, based on the position of the target object, the gimbal to perform a follow operation on the target object, and control, according to the position of the target object, the image capturing apparatus to perform a focusing operation on the target object.
  156. A control apparatus for a gimbal system, wherein the gimbal system comprises a gimbal and an image capturing apparatus communicatively connected to the gimbal, and the control apparatus comprises:
    a memory for storing a computer program; and
    a processor for running the computer program stored in the memory to:
    acquire a capture position of a first object in a captured image captured by the image capturing apparatus, the capture position of the first object being used by the gimbal to perform a follow operation on the first object and used by the image capturing apparatus to perform a focusing operation on the first object; and
    when the first object changes to a second object, acquire a capture position of the second object in a captured image captured by the image capturing apparatus, so that the gimbal switches from the follow operation on the first object to a follow operation on the second object based on the capture position of the second object, and the image capturing apparatus switches from the focusing operation on the first object to a focusing operation on the second object based on the position of the second object.
  157. A control system for a gimbal, comprising:
    a gimbal; and
    the control apparatus for a gimbal according to any one of claims 117-152, provided on the gimbal, configured for a communication connection with an image capturing apparatus, and configured to control the gimbal through the image capturing apparatus.
  158. The control system according to claim 157, further comprising:
    a distance measurement sensor, provided on the image capturing apparatus and configured to acquire distance information corresponding to the target object;
    wherein the control apparatus for the gimbal is communicatively connected to the distance measurement sensor and configured to send the distance information to the image capturing apparatus, so that the image capturing apparatus determines the capture position of the target object in the captured image in combination with the distance information.
  159. A control system for a gimbal, comprising:
    a gimbal; and
    the control apparatus for a gimbal system according to claim 153, provided on the gimbal, configured for a communication connection with an image capturing apparatus, and configured to control the image capturing apparatus and the gimbal respectively.
  160. A control system for a gimbal, comprising:
    a gimbal; and
    the control apparatus for a gimbal according to claim 154, provided on the gimbal, configured for a communication connection with an image capturing apparatus, and configured to control the image capturing apparatus through the gimbal.
  161. A control system for a gimbal, comprising:
    a gimbal; and
    the control apparatus for a gimbal system according to claim 155, provided on the gimbal, configured for a communication connection with an image capturing apparatus, and configured to control the image capturing apparatus and the gimbal respectively.
  162. A control system for a gimbal, comprising:
    a gimbal; and
    the control apparatus for a gimbal system according to claim 156, provided on the gimbal, configured for a communication connection with an image capturing apparatus, and configured to control the image capturing apparatus and the gimbal respectively.
  163. A movable platform, comprising:
    a gimbal;
    a support mechanism for connecting the gimbal; and
    the control apparatus for a gimbal according to any one of claims 117-152, provided on the gimbal, configured for a communication connection with an image capturing apparatus, and configured to control the gimbal through the image capturing apparatus.
  164. A movable platform, comprising:
    a gimbal;
    a support mechanism for connecting the gimbal; and
    the control apparatus for a gimbal system according to claim 153, provided on the gimbal, configured for a communication connection with an image capturing apparatus, and configured to control the image capturing apparatus and the gimbal respectively.
  165. A movable platform, comprising:
    a gimbal;
    a support mechanism for connecting the gimbal; and
    the control apparatus for a gimbal according to claim 154, provided on the gimbal, configured for a communication connection with an image capturing apparatus, and configured to control the image capturing apparatus through the gimbal.
  166. A movable platform, comprising:
    a gimbal;
    a support mechanism for connecting the gimbal; and
    the control apparatus for a gimbal system according to claim 155, provided on the gimbal, configured for a communication connection with an image capturing apparatus, and configured to control the image capturing apparatus and the gimbal respectively.
  167. A movable platform, comprising:
    a gimbal;
    a support mechanism for connecting the gimbal; and
    the control apparatus for a gimbal system according to claim 156, provided on the gimbal, configured for a communication connection with an image capturing apparatus, and configured to control the image capturing apparatus and the gimbal respectively.
  168. A computer-readable storage medium, wherein the computer-readable storage medium stores program instructions, and the program instructions are used to implement the control method for a gimbal according to any one of claims 77-112.
  169. A computer-readable storage medium, wherein the computer-readable storage medium stores program instructions, and the program instructions are used to implement the control method for a gimbal system according to claim 113.
  170. A computer-readable storage medium, wherein the computer-readable storage medium stores program instructions, and the program instructions are used to implement the control method for a gimbal according to claim 114.
  171. A computer-readable storage medium, wherein the computer-readable storage medium stores program instructions, and the program instructions are used to implement the control method for a gimbal system according to claim 115.
  172. A computer-readable storage medium, wherein the computer-readable storage medium stores program instructions, and the program instructions are used to implement the control method for a gimbal system according to claim 116.
PCT/CN2021/135818 2020-12-30 2021-12-06 Control method based on image capturing apparatus, and control method and apparatus for gimbal WO2022143022A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180086440.1A 2020-12-30 2021-12-06 Control method based on image capturing apparatus, and control method and apparatus for gimbal
US18/215,871 US20230341079A1 (en) 2020-12-30 2023-06-29 Control method based on image capturing apparatus, control method and apparatus for gimbal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/141400 WO2022141197A1 (zh) 2020-12-30 2020-12-30 Control method and apparatus for gimbal, movable platform, and storage medium
CNPCT/CN2020/141400 2020-12-30

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/215,871 Continuation US20230341079A1 (en) 2020-12-30 2023-06-29 Control method based on image capturing apparatus, control method and apparatus for gimbal

Publications (1)

Publication Number Publication Date
WO2022143022A1 true WO2022143022A1 (zh) 2022-07-07

Family

ID=82258765

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2020/141400 WO2022141197A1 (zh) 2020-12-30 2020-12-30 Control method and apparatus for gimbal, movable platform, and storage medium
PCT/CN2021/135818 WO2022143022A1 (zh) 2020-12-30 2021-12-06 Control method based on image capturing apparatus, and control method and apparatus for gimbal

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/141400 WO2022141197A1 (zh) 2020-12-30 2020-12-30 Control method and apparatus for gimbal, movable platform, and storage medium

Country Status (3)

Country Link
US (1) US20230341079A1 (zh)
CN (2) CN114982217A (zh)
WO (2) WO2022141197A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017034320A * 2015-07-29 2017-02-09 キヤノン株式会社 Imaging apparatus having a moving mechanism
CN107295244A * 2016-04-12 2017-10-24 深圳市浩瀚卓越科技有限公司 Tracking-shot control method and system for a stabilizer
CN108259703A * 2017-12-31 2018-07-06 深圳市秦墨科技有限公司 Follow-shot control method and apparatus for a gimbal, and gimbal
CN108475075A * 2017-05-25 2018-08-31 深圳市大疆创新科技有限公司 Control method and apparatus, and gimbal
CN109688323A * 2018-11-29 2019-04-26 深圳市中科视讯智能系统技术有限公司 Unmanned aerial vehicle visual tracking system and control method thereof
CN112073641A * 2020-09-18 2020-12-11 深圳市众志联城科技有限公司 Image shooting method and apparatus, mobile terminal, and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3963261B2 * 2002-06-07 2007-08-22 中央電子株式会社 Automatic tracking photographing method and automatic tracking photographing apparatus for moving images
CN103019024B * 2012-11-29 2015-08-19 浙江大学 System for real-time precise observation and analysis of table-tennis ball spin, and method for operating the system
CN104616322A * 2015-02-10 2015-05-13 山东省科学院海洋仪器仪表研究所 Shipborne infrared target image recognition and tracking method and apparatus therefor
CN204859351U * 2015-03-31 2015-12-09 深圳市莫孚康技术有限公司 Automatic focus-following device based on video target tracking
CN112055158B * 2020-10-16 2022-02-22 苏州科达科技股份有限公司 Target tracking method, monitoring device, storage medium, and system
CN112616019B * 2020-12-16 2022-06-03 重庆紫光华山智安科技有限公司 Target tracking method and apparatus, gimbal, and storage medium

Also Published As

Publication number Publication date
WO2022141197A1 (zh) 2022-07-07
CN114982217A (zh) 2022-08-30
CN116783568A (zh) 2023-09-19
US20230341079A1 (en) 2023-10-26

Similar Documents

Publication Publication Date Title
US10638046B2 (en) Wearable device, control apparatus, photographing control method and automatic imaging apparatus
US9667856B2 (en) Auto focus adjusting method, auto focus adjusting apparatus, and digital photographing apparatus including the same
WO2022126436A1 Delay detection method, apparatus and system, movable platform, and storage medium
US9641751B2 (en) Imaging apparatus, imaging method thereof, and computer readable recording medium
US10944895B2 (en) Accessory apparatus, image-capturing apparatus, control apparatus, lens apparatus, control method, computer program and storage medium storing computer program
CN103384309A Image capturing apparatus and control method
US10708503B2 (en) Image capture system, image capturing apparatus, lens unit, control methods therefor, and storage medium
CN102957862A Image capturing device and control method for image capturing device
CN114449173B Optical image stabilization control method and apparatus, storage medium, and electronic device
US11102410B2 (en) Camera parameter setting system and camera parameter setting method
WO2023272485A1 Control method, photographing apparatus, and photographing system
JP2021078124A Shooting control method, apparatus, storage medium and system for an intelligent shooting system
WO2022143022A1 Control method based on image capturing apparatus, and control method and apparatus for gimbal
JP7131541B2 Image processing apparatus, image processing method, and image processing program
JP2019145885A Control apparatus and imaging system
JP6136189B2 Auxiliary imaging apparatus and main imaging apparatus
JP2537270B2 Camera with interchangeable lenses
JP2018046415A Display control apparatus, display control method, and program
JP6833381B2 Imaging apparatus, control method, program, and storage medium
JP6080825B2 Imaging apparatus and control method therefor
JP6399851B2 Imaging apparatus, control method therefor, and control program
JP2017216599A Imaging apparatus and control method therefor
JP7335463B1 Portable imaging apparatus, video processing system, video processing method, and video processing program
JP2018017756A Control apparatus, imaging apparatus, lens apparatus, control method, program, and storage medium
CN110785691B Interchangeable lens apparatus, imaging apparatus, imaging system, method, and computer storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21913763

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202180086440.1

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21913763

Country of ref document: EP

Kind code of ref document: A1