Intelligent analysis processing method for high-definition video content based on image recognition
Technical Field
The invention relates to the technical field of intelligent analysis of high-definition video content, and in particular to an intelligent analysis processing method for high-definition video content based on image recognition.
Background
Intelligent analysis of high-definition video content analyzes and understands each frame of a video through computer vision and artificial intelligence technology, and comprises tasks such as target detection and motion tracking. When motion tracking is carried out, an unmanned aerial vehicle aerial photographing mode is adopted to ensure the high definition and completeness of the subsequent video, so the aerial photographing settings of the unmanned aerial vehicle need to be processed.
The existing aerial photographing unmanned aerial vehicle mainly adjusts its shooting attitude by adjusting the aerial angle. This adjustment mode has the following problems: 1. Adjusting the shooting attitude only through the aerial angle considers the visual quality of the aerial video alone, so the completeness of the aerial video is not guaranteed.
2. No in-depth analysis is carried out on the position ratio and size of the tracked object in the captured image, which reduces the reliability and rationality of the shooting attitude adjustment of the aerial photographing unmanned aerial vehicle, and also reduces its flexibility.
3. The aerial flight speed is not analyzed in combination with the shooting conditions of dynamically tracked targets, so the high-definition effect of aerial video of moving objects is poor, and the reliability of the shooting attitude adjustment of the aerial photographing unmanned aerial vehicle for moving objects is reduced.
Disclosure of Invention
In view of this, in order to solve the problems set forth in the background art, an intelligent analysis processing method for high-definition video content based on image recognition is now provided.
The aim of the invention is achieved by the following technical scheme: the invention provides an intelligent analysis processing method for high-definition video content based on image recognition, which comprises the following steps. S1, input of the target to be tracked: the contour image of the target to be tracked is divided into contour regions and the RGB value of each contour region is extracted; the type of the target to be tracked, its contour image, and the RGB values of the contour regions are then entered into the management background of the target aerial photographing unmanned aerial vehicle, where the type is either moving or non-moving.
S2, tracking target video acquisition: the target aerial photographing unmanned aerial vehicle is started according to the entered type of the target to be tracked, the contour image, and the RGB values of the contour regions; video of each similar target to be tracked is collected, and video information is located for each similar target to be tracked.
S3, demand tracking target confirmation: the tracking similarity of each similar target to be tracked is calculated from its video information, and the target that the aerial photographing unmanned aerial vehicle is to track is thereby locked and recorded as the demand tracking target.
S4, shooting position acquisition: the video of the demand tracking target collected by the target aerial photographing unmanned aerial vehicle is obtained, the contour of the demand tracking target is located from the video, and the aerial photographing setting parameters of the target aerial photographing unmanned aerial vehicle are extracted.
S5, adaptive shooting attitude adjustment: the shooting setting adaptation degree of the target aerial photographing unmanned aerial vehicle is calculated according to the contour of the demand tracking target, and the shooting attitude of the aerial photographing unmanned aerial vehicle is adjusted adaptively.
Specifically, the video information consists of the RGB values of each contour region of each segmented frame image.
Specifically, the tracking similarity of each similar target to be tracked is calculated as follows. A1, the contour volume of the target to be tracked is located from its contour image and denoted V_target.
A2, the contour of each similar target to be tracked is overlapped and compared with the contour of the target to be tracked to obtain their overlapping volume; the maximum overlapping volume is extracted and denoted V_i^max, where i = 1, 2, ..., n is the number of a similar target to be tracked.
A3, the contour-layer tracking similarity β_i of each similar target to be tracked is calculated from V_i^max, V_target, and K, where K represents the contour overlap volume ratio of the set reference and e represents the natural constant.
A4, the RGB values of each contour region in each segmented image are extracted from the video information of each similar target to be tracked.
A5, the color-layer tracking similarity χ_i of each similar target to be tracked is calculated.
A6, the tracking similarity δ_i of each similar target to be tracked is calculated.
Specifically, the color-layer tracking similarity of each similar target to be tracked is calculated as follows. B1, the RGB values of each contour region corresponding to the target to be tracked in the target segmented image of each similar target to be tracked are denoted R_ij, G_ij and B_ij respectively, where j = 1, 2, ..., m is the contour region number.
B2, the RGB values of the target to be tracked in each contour region are denoted R'_j, G'_j and B'_j respectively.
B3, the color-layer tracking similarity χ_i of each similar target to be tracked is calculated from these values.
Specifically, the tracking similarity δ_i of each similar target to be tracked is calculated from β_i and χ_i, where a_1 and a_2 respectively represent the set similarity evaluation weights of the contour layer and the color layer, and γ_1 represents the set tracking similarity evaluation correction factor.
Specifically, the shooting setting adaptation degree of the target aerial photographing unmanned aerial vehicle is calculated as follows. C1, the contour of the demand tracking target is compared with the entered contour of the target to be tracked; if they completely coincide, the image area ratio of the demand tracking target contour and the offset distance of its center point are extracted and denoted K_1 and x_1 respectively.
C2, the shooting setting adaptation degree θ_1 of the target aerial photographing unmanned aerial vehicle under complete coincidence is calculated,
where K_ref and x_ref respectively represent the area ratio and the center point offset of the set reference, ΔK and Δx represent the permitted area ratio deviation and center point offset deviation of the set reference, b_1 and b_2 respectively represent the shooting setting adaptation evaluation weights corresponding to the area ratio deviation and the center point offset deviation under complete coincidence, and γ_2 represents the set shooting setting adaptation evaluation correction factor under complete coincidence.
C3, the contour of the demand tracking target is compared with the entered contour of the target to be tracked; if they partially coincide, the image area ratio of the demand tracking target contour, the area of the non-coincident contour, and the offset distance of the contour center point are extracted and denoted K_2, S and x_2 respectively.
C4, the shooting setting adaptation degree θ_2 of the target aerial photographing unmanned aerial vehicle under partial coincidence is calculated,
where S_ref represents the non-coincident contour area of the set reference, b_3, b_4 and b_5 respectively represent the shooting setting adaptation evaluation weights corresponding to the area ratio deviation, the center point offset deviation, and the non-coincident contour area under partial coincidence, and γ_3 represents the set shooting setting adaptation evaluation correction factor under partial coincidence.
Specifically, the aerial photographing setting parameters include aerial photographing speed and GPS position coordinates.
Specifically, the adaptive shooting attitude adjustment of the aerial photographing unmanned aerial vehicle includes adjustment when the type of the demand tracking target is moving and adjustment when the type is non-moving. The specific process when the type is moving is as follows. D1, when the shooting setting adaptation degree of the target aerial photographing unmanned aerial vehicle is smaller than the set reference value and the contour of the demand tracking target completely coincides with the entered contour of the target to be tracked, the area ratio of the demand tracking target contour image and the offset distance and relative azimuth of the contour center point are extracted, and the current GPS position coordinates are extracted from the aerial photographing setting parameters.
D2, when the area ratio of the demand tracking target contour image is smaller than the set value, a descend instruction is sent to the target aerial photographing unmanned aerial vehicle, and a descent value H is calculated from the area ratio deviation, where H_ref represents the reference movement height corresponding to the set unit area ratio deviation.
D3, when the area ratio of the demand tracking target contour image is larger than the set value, an ascend instruction is sent to the target aerial photographing unmanned aerial vehicle, with H as the ascent value.
D4, when the center point of the demand tracking target contour lies to the left of the center point of the captured image, a left-shift instruction is sent to the target aerial photographing unmanned aerial vehicle with x_1 as the left-shift distance; when it lies to the right of the center point of the captured image, a right-shift instruction is sent with x_1 as the right-shift distance.
D5, when the contour of the demand tracking target partially coincides with the entered contour of the target to be tracked, the relative azimuth of the current demand tracking target contour image and the offset distance and relative azimuth of the contour center point are extracted, and the current GPS position coordinates and aerial photographing speed are extracted from the aerial photographing setting parameters.
D6, when the demand tracking target contour image lies above the captured image, the offset distance of the contour center point is extracted and denoted x_up; an acceleration instruction is sent to the target aerial photographing unmanned aerial vehicle with v = x_up · v_ref as the acceleration value, where v_ref represents the reference shooting speed adjustment corresponding to the set unit distance deviation.
D7, when the demand tracking target contour image lies below the captured image, the offset distance of the contour center point is extracted and denoted x_down; a deceleration instruction is sent to the target aerial photographing unmanned aerial vehicle with v = x_down · v_ref as the deceleration value.
D8, when the center point of the demand tracking target contour lies to the left or right of the center point of the captured image, the adjustment is obtained in the same way as the left and right shooting attitude adjustment under complete coincidence.
D9, when the shooting setting adaptation degree of the target aerial photographing unmanned aerial vehicle is larger than the set reference value, the current GPS position coordinates and aerial photographing speed are maintained.
Specifically, the adjustment process when the type of the demand tracking target is non-moving is as follows. E1, when the shooting setting adaptation degree of the target aerial photographing unmanned aerial vehicle is smaller than the set reference value and the contour of the demand tracking target completely coincides with the entered contour of the target to be tracked, the adjustment is obtained in the same way as the shooting attitude adjustment under complete coincidence when the type of the demand tracking target is moving.
E2, when the shooting setting adaptation degree of the target aerial photographing unmanned aerial vehicle is smaller than the set reference value and the contour of the demand tracking target partially coincides with the entered contour of the target to be tracked, the relative azimuth of the current demand tracking target contour image and the offset distance and relative azimuth of the contour center point are extracted, and the current GPS position coordinates are extracted from the aerial photographing setting parameters.
E3, when the demand tracking target contour image lies above the captured image, the offset distance of the contour center point is extracted and denoted x; a forward instruction is sent to the target aerial photographing unmanned aerial vehicle with x as the forward distance.
E4, when the demand tracking target contour image lies below the captured image, the offset distance of the contour center point is extracted and denoted x; a backward instruction is sent to the target aerial photographing unmanned aerial vehicle with x as the backward distance.
E5, when the center point of the demand tracking target contour lies to the left or right of the center point of the captured image, the adjustment is obtained in the same way as the left and right shooting attitude adjustment under complete coincidence.
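The E1-E5 branch above can be sketched as the routine below. This is an illustrative sketch only: the threshold comparison, the direction convention for the vertical offset, and the use of the raw offset as the movement distance are assumptions of this example, not details given by the invention.

```python
def static_target_commands(adaptation, adaptation_ref, dy):
    """Hypothetical sketch of the non-moving adjustment branch (E1-E5).

    adaptation     : shooting setting adaptation degree of the drone
    adaptation_ref : set reference value
    dy             : vertical offset of the contour centre from the image
                     centre; dy > 0 is assumed here to mean "above".
    Returns (instruction, distance)."""
    if adaptation >= adaptation_ref:
        return ("hold", 0.0)       # settings already adequate, no adjustment
    if dy > 0:                     # E3: contour above -> move forward
        return ("forward", dy)
    if dy < 0:                     # E4: contour below -> move backward
        return ("backward", -dy)
    return ("hold", 0.0)           # centred vertically

print(static_target_commands(0.4, 0.8, 3.5))  # ('forward', 3.5)
```

The adaptation degree gates the whole branch, mirroring E1/E2: no movement instruction is issued unless the measured adaptation falls below the set reference.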
Compared with the prior art, the embodiments of the invention have at least the following advantages or beneficial effects. (1) The invention performs in-depth analysis of the tracking similarity of each similar target to be tracked through the contour layer and the color layer, realizing multidimensional analysis of the tracking similarity, reducing errors in the tracking target confirmation result, and improving the accuracy of tracking target confirmation.
(2) Through the contour of the demand tracking target, the invention analyzes the shooting setting adaptation degree of the target aerial photographing unmanned aerial vehicle in the two states of complete coincidence and partial coincidence, realizing a data-based analysis of the shooting setting adaptation degree, intuitively displaying the shooting setting adaptation state, improving the accuracy and suitability of the analysis, and providing a reliable data basis for the subsequent adaptive adjustment of the shooting attitude of the aerial photographing unmanned aerial vehicle.
(3) The invention adjusts the shooting attitude adaptively for both moving and non-moving demand tracking targets by combining the demand tracking target contour and its different positions in the captured image, which improves the completeness and high definition of the captured image and improves the coverage and reliability of the adaptive shooting attitude adjustment of the aerial photographing unmanned aerial vehicle.
(4) The invention analyzes the adaptive shooting attitude adjustment of a moving demand tracking target through the azimuth of the demand tracking target contour in the captured image, the offset distance and azimuth of the center point, the aerial photographing speed of the aerial photographing unmanned aerial vehicle, and the GPS position coordinates, thereby realizing flexible adjustment of the shooting attitude for a moving demand tracking target.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed for the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of the steps of the method of the present invention.
FIG. 2 is a schematic diagram of the area ratio of the present invention.
FIG. 3 is a schematic view of the left and right directions of the center point of the present invention.
FIG. 4 is a schematic view of the center point of the present invention in a vertical direction.
Description of the drawings: 1, captured image; 2, demand tracking target contour image; 3, captured image center point; 4, demand tracking target contour center point; 5, offset distance of the demand tracking target contour center point.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Referring to FIG. 1, the invention provides an intelligent analysis processing method for high-definition video content based on image recognition, which comprises the following steps. S1, input of the target to be tracked: the contour image of the target to be tracked is divided into contour regions and the RGB value of each contour region is extracted; the type of the target to be tracked, its contour image, and the RGB values of the contour regions are then entered into the management background of the target aerial photographing unmanned aerial vehicle, where the type is either moving or non-moving.
S2, tracking target video acquisition: the target aerial photographing unmanned aerial vehicle is started according to the entered type of the target to be tracked, the contour image, and the RGB values of the contour regions; video of each similar target to be tracked is collected, and video information is located for each similar target to be tracked.
It should be noted that each similar target to be tracked is confirmed by extracting objects whose contours are similar to that of the target to be tracked from a cloud database; these objects serve as the similar targets to be tracked and are entered into the management background of the target aerial photographing unmanned aerial vehicle.
In a specific embodiment of the present invention, the video information consists of the RGB values of each contour region of each segmented frame image.
It should be noted that each segmented image is obtained by traversing each frame of the video and storing it as an independent image file, and each contour region is obtained in the same way as the contour regions of the target to be tracked are divided.
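The per-region RGB extraction described above can be sketched as follows. Pure-Python grids stand in for decoded frame images, and the region-label mask is a hypothetical input of this sketch; the invention itself does not specify how regions are labelled.

```python
def region_rgb_means(frame, region_mask):
    """Mean RGB value of each labelled contour region in one frame.

    frame       : H x W grid of (R, G, B) tuples (a decoded frame image)
    region_mask : H x W grid of ints; 0 = background, 1..m = contour regions
    """
    sums, counts = {}, {}
    for pixel_row, label_row in zip(frame, region_mask):
        for (r, g, b), label in zip(pixel_row, label_row):
            if label == 0:                      # skip background pixels
                continue
            sr, sg, sb = sums.get(label, (0, 0, 0))
            sums[label] = (sr + r, sg + g, sb + b)
            counts[label] = counts.get(label, 0) + 1
    return {lb: (sr / counts[lb], sg / counts[lb], sb / counts[lb])
            for lb, (sr, sg, sb) in sums.items()}

# Toy 2 x 2 frame with two one-pixel contour regions
frame = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (0, 0, 0)]]
mask = [[1, 2],
        [0, 0]]
print(region_rgb_means(frame, mask))  # {1: (255.0, 0.0, 0.0), 2: (0.0, 255.0, 0.0)}
```

In practice the frames would come from decoding the aerial video (for example with an image library), with one call per stored frame image.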
S3, demand tracking target confirmation: the tracking similarity of each similar target to be tracked is calculated from its video information, and the target that the aerial photographing unmanned aerial vehicle is to track is thereby locked and recorded as the demand tracking target.
In a specific embodiment of the present invention, the tracking similarity of each similar target to be tracked is calculated as follows. A1, the contour volume of the target to be tracked is located from its contour image and denoted V_target.
A2, the contour of each similar target to be tracked is overlapped and compared with the contour of the target to be tracked to obtain their overlapping volume; the maximum overlapping volume is extracted and denoted V_i^max, where i = 1, 2, ..., n is the number of a similar target to be tracked.
A3, the contour-layer tracking similarity β_i of each similar target to be tracked is calculated from V_i^max, V_target, and K, where K represents the contour overlap volume ratio of the set reference and e represents the natural constant.
A4, the RGB values of each contour region in each segmented image are extracted from the video information of each similar target to be tracked.
A5, the color-layer tracking similarity χ_i of each similar target to be tracked is calculated.
In a specific embodiment of the present invention, the specific calculation process is as follows. B1, the RGB values of each contour region corresponding to the target to be tracked in the target segmented image of each similar target to be tracked are denoted R_ij, G_ij and B_ij respectively, where j = 1, 2, ..., m is the contour region number.
It should be noted that the target segmented image is the segmented image corresponding to the maximum overlapping volume of each similar target to be tracked.
B2, the RGB values of the target to be tracked in each contour region are denoted R'_j, G'_j and B'_j respectively.
B3, the color-layer tracking similarity χ_i of each similar target to be tracked is calculated from these values.
A6, the tracking similarity δ_i of each similar target to be tracked is calculated.
In a specific embodiment of the present invention, the tracking similarity δ_i of each similar target to be tracked is calculated from β_i and χ_i, where a_1 and a_2 respectively represent the set similarity evaluation weights of the contour layer and the color layer, and γ_1 represents the set tracking similarity evaluation correction factor.
The demand tracking target of the target aerial photographing unmanned aerial vehicle is locked by extracting the maximum tracking similarity among the tracking similarities of the similar targets to be tracked and taking the similar target corresponding to the maximum as the demand tracking target.
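Because the similarity formulas are given only symbolically, the sketch below uses an assumed exponential form for the contour layer, an assumed distance-based form for the color layer, and a simple weighted sum for the overall similarity. The expressions, default weights a1, a2, and γ1 are all placeholders, not the invention's formulas; only the final max-similarity locking step follows the text directly.

```python
import math

def contour_layer_similarity(v_overlap_max, v_target, k):
    """Assumed form of beta_i: decays as the maximum-overlap volume ratio
    deviates from the set reference ratio k (the exact formula differs)."""
    return math.exp(-abs(v_overlap_max / v_target - k))

def colour_layer_similarity(region_rgb, ref_rgb):
    """Assumed form of chi_i from per-region RGB differences, in (0, 1]."""
    diff = sum(abs(c - c0)
               for rgb, rgb0 in zip(region_rgb, ref_rgb)
               for c, c0 in zip(rgb, rgb0))
    return 1.0 / (1.0 + diff / (3 * 255 * len(ref_rgb)))

def overall_similarity(beta, chi, a1=0.5, a2=0.5, gamma1=1.0):
    """delta_i as a weighted combination of the two layers (weights assumed)."""
    return gamma1 * (a1 * beta + a2 * chi)

# Lock the demand tracking target: the candidate with maximum similarity.
deltas = {1: 0.62, 2: 0.87, 3: 0.55}
locked = max(deltas, key=deltas.get)
print(locked)  # 2
```

A candidate whose overlap ratio equals the reference and whose region colors match the reference exactly scores 1.0 on both layers under these assumed forms.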
According to the embodiment of the invention, in-depth analysis of the tracking similarity of each similar target to be tracked is performed through the contour layer and the color layer, realizing multidimensional analysis of the tracking similarity, reducing errors in the tracking target confirmation result, and improving the accuracy of tracking target confirmation.
S4, shooting position acquisition: the video of the demand tracking target collected by the target aerial photographing unmanned aerial vehicle is obtained, the contour of the demand tracking target is located from the video, and the aerial photographing setting parameters of the target aerial photographing unmanned aerial vehicle are extracted.
In a specific embodiment of the present invention, the aerial photographing setting parameters include the aerial photographing speed and the GPS position coordinates.
S5, adaptive shooting attitude adjustment: the shooting setting adaptation degree of the target aerial photographing unmanned aerial vehicle is calculated according to the contour of the demand tracking target, and the shooting attitude of the aerial photographing unmanned aerial vehicle is adjusted adaptively.
In a specific embodiment of the present invention, the shooting setting adaptation degree of the target aerial photographing unmanned aerial vehicle is calculated as follows. C1, the contour of the demand tracking target is compared with the entered contour of the target to be tracked; if they completely coincide, the image area ratio of the demand tracking target contour and the offset distance of its center point are extracted and denoted K_1 and x_1 respectively.
It should be noted that the offset distance refers to the distance between the center point of the demand tracking target contour and the center point of the captured image.
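The offset distance defined in the note above can be computed as in the sketch below. Approximating the contour center point by the centroid of the contour pixels is an assumption of this example; the invention does not specify how the center point is located.

```python
import math

def centre_point_offset(contour_pixels, image_width, image_height):
    """Distance between the tracked contour's centre point and the centre
    of the captured image (drawing reference 5).

    contour_pixels : iterable of (x, y) pixel coordinates of the contour
    Returns (dx, dy, distance); dx > 0 means the contour centre lies to the
    right of the image centre, dy > 0 below it (image coordinates)."""
    xs = [x for x, _ in contour_pixels]
    ys = [y for _, y in contour_pixels]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)     # contour centroid
    dx, dy = cx - image_width / 2, cy - image_height / 2
    return dx, dy, math.hypot(dx, dy)

# A contour centred at (30, 50) in a 100 x 100 image: 20 px left of centre.
print(centre_point_offset([(20, 40), (40, 40), (20, 60), (40, 60)], 100, 100))
# (-20.0, 0.0, 20.0)
```

The signs of dx and dy give the relative azimuth used by the later left/right and above/below branches, while the distance itself is the offset x_1 or x_2.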
C2, the shooting setting adaptation degree θ_1 of the target aerial photographing unmanned aerial vehicle under complete coincidence is calculated,
where K_ref and x_ref respectively represent the area ratio and the center point offset of the set reference, ΔK and Δx represent the permitted area ratio deviation and center point offset deviation of the set reference, b_1 and b_2 respectively represent the shooting setting adaptation evaluation weights corresponding to the area ratio deviation and the center point offset deviation under complete coincidence, and γ_2 represents the set shooting setting adaptation evaluation correction factor under complete coincidence.
C3, the contour of the demand tracking target is compared with the entered contour of the target to be tracked; if they partially coincide, the image area ratio of the demand tracking target contour, the area of the non-coincident contour, and the offset distance of the contour center point are extracted and denoted K_2, S and x_2 respectively.
C4, the shooting setting adaptation degree θ_2 of the target aerial photographing unmanned aerial vehicle under partial coincidence is calculated,
where S_ref represents the non-coincident contour area of the set reference, b_3, b_4 and b_5 respectively represent the shooting setting adaptation evaluation weights corresponding to the area ratio deviation, the center point offset deviation, and the non-coincident contour area under partial coincidence, and γ_3 represents the set shooting setting adaptation evaluation correction factor under partial coincidence.
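The adaptation degrees θ_1 and θ_2 are described only symbolically, so the following is a hypothetical weighted-deviation score built from the same inputs, weights b_k, and correction factors γ_2, γ_3. It is not the invention's exact formula; it only illustrates the stated structure (normalised deviations, per-term weights, one extra term for the non-coincident area under partial coincidence).

```python
def adaptation_full(area_ratio, offset, k_ref, x_ref, dk, dx,
                    b1=0.5, b2=0.5, gamma2=1.0):
    """Illustrative theta_1 under complete coincidence: each measurement's
    deviation from its reference is normalised by the permitted deviation,
    weighted, and mapped into [0, 1] (higher = better adapted)."""
    score = b1 * abs(area_ratio - k_ref) / dk + b2 * abs(offset - x_ref) / dx
    return gamma2 * max(0.0, 1.0 - score)

def adaptation_partial(area_ratio, offset, non_overlap_area,
                       k_ref, x_ref, s_ref, dk, dx,
                       b3=0.4, b4=0.3, b5=0.3, gamma3=1.0):
    """Illustrative theta_2 under partial coincidence: adds the non-coincident
    contour area term, weighted by b5 and normalised by the reference S_ref."""
    score = (b3 * abs(area_ratio - k_ref) / dk
             + b4 * abs(offset - x_ref) / dx
             + b5 * non_overlap_area / s_ref)
    return gamma3 * max(0.0, 1.0 - score)

# A drone exactly at the reference settings scores 1.0.
print(adaptation_full(0.3, 5.0, 0.3, 5.0, dk=0.1, dx=2.0))  # 1.0
```

Under this assumed form, the adaptation degree decreases as the measured area ratio, center point offset, or non-coincident area drifts away from its reference, which matches how the later D/E steps compare the degree against a set reference value.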
According to the embodiment of the invention, through the contour of the demand tracking target, the shooting setting adaptation degree of the target aerial photographing unmanned aerial vehicle is analyzed in the two states of complete coincidence and partial coincidence, realizing a data-based analysis of the shooting setting adaptation degree, intuitively displaying the shooting setting adaptation state, improving the accuracy and suitability of the analysis, and providing a reliable data basis for the subsequent adaptive adjustment of the shooting attitude of the aerial photographing unmanned aerial vehicle.
It should be noted that, in a specific embodiment, the moving target may be a traveling vehicle or a moving animal. When the aerial photographing unmanned aerial vehicle is not stationary relative to the moving target, the captured image may be incomplete, so the shooting attitude of the aerial photographing unmanned aerial vehicle needs to be adjusted adaptively.
In a specific embodiment of the present invention, the adaptive adjustment of the shooting gesture of the aerial photographing unmanned aerial vehicle includes performing adaptive adjustment of the shooting gesture when the type of the target to be tracked is moving and performing adaptive adjustment of the shooting gesture when the type of the target to be tracked is non-moving, where a specific adjustment process of performing adaptive adjustment of the shooting gesture when the type of the target to be tracked is moving is: referring to fig. 2 to 4, D1, when the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is smaller than the set reference value and the profile of the target to be tracked and the input profile of the target to be tracked are completely overlapped, the area occupation ratio of the profile image 2 of the target to be tracked and the offset distance 5 and the relative azimuth of the center point of the profile of the target to be tracked are extracted, and the current GPS position coordinates are extracted from the aerial setting parameters.
D2, when the occupation ratio of the area 2 of the profile image of the target required to be tracked is smaller than a set value, sending a descending instruction to the target aerial unmanned aerial vehicle, and enabling the descending instruction to be carried outAs a decrease value, wherein H Ginseng radix The reference movement height value corresponding to the set unit area duty deviation is shown.
And D3, when the area 2 ratio of the required tracking target outline image is larger than a set value, sending a rising instruction to the target aerial unmanned aerial vehicle, and taking H as a rising value.
The area ratio of the profile image of the target to be tracked determines the integrity and definition of the captured image, and thus the area ratio of the profile image of the target to be tracked needs to be analyzed.
D4, when the center point position 4 of the target profile needing to be tracked is positioned at the left side of the center point position 3 of the shot image, a left shift instruction is sent to the target aerial unmanned aerial vehicle, and x is set 1 As a left shift distance value, when the position 4 of the center point of the profile of the target to be tracked is positioned on the right side of the position 3 of the center point of the photographed image, a right shift instruction is sent to the target aerial unmanned aerial vehicle, and x is calculated 1 As a right shift distance value.
And D5, when the contour of the target to be tracked and the input contour of the target to be tracked are partially overlapped, extracting the relative azimuth of the current target contour image 2 to be tracked, the offset distance 5 of the center point of the target contour to be tracked and the relative azimuth, and extracting the current GPS position coordinate and the aerial photographing speed from the aerial photographing setting parameters.
D6, when the contour image 2 of the target to be tracked is above the captured image 1, the offset distance 5 of the contour center point of the target to be tracked is extracted and marked as x_up; an acceleration instruction is sent to the target aerial unmanned aerial vehicle, and v = x_up · v_ref is taken as the acceleration value, where v_ref is the reference shooting-speed adjustment corresponding to the set unit distance deviation.
D7, when the contour image 2 of the target to be tracked is below the captured image 1, the offset distance 5 of the contour center point of the target to be tracked is extracted and marked as x_down; a deceleration instruction is sent to the target aerial unmanned aerial vehicle, and v = x_down · v_ref is taken as the deceleration value.
In the navigation of the unmanned aerial vehicle, the head direction is taken as up, the tail direction as down, the left wing as left, and the right wing as right.
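The speed correction of steps D6 and D7 can be sketched as follows. This is an illustrative sketch under assumed names: the region labels, function name, and parameters are hypothetical, while the v = offset · v_ref relation follows the formulas above.

```python
def speed_correction(region, offset, v_ref):
    """D6/D7: map the vertical position of the tracked contour relative to the
    captured image to a speed command. 'above' is toward the drone's head
    (the target is pulling ahead), 'below' is toward the tail (falling behind).

    region -- 'above' or 'below', per the head/tail convention
    offset -- center-point offset distance (x_up or x_down)
    v_ref  -- reference speed adjustment per unit of distance deviation
    """
    if region == "above":
        return ("accelerate", offset * v_ref)  # v = x_up * v_ref
    if region == "below":
        return ("decelerate", offset * v_ref)  # v = x_down * v_ref
    return ("hold", 0.0)                       # contour centered: keep speed
```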
D8, when the center point position 4 of the contour of the target to be tracked is located on the left or right side of the center point position 3 of the captured image, the shooting gesture is adaptively adjusted in the same manner as in the complete-coincidence case of step D4.
D9, when the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is greater than the set reference value, the current GPS position coordinates and aerial photographing speed are maintained.
According to this embodiment of the invention, shooting-gesture adaptive adjustment analysis for a moving target to be tracked is performed through the azimuth of the contour of the target to be tracked within the captured image, the offset distance and azimuth of its center point, the aerial photographing speed of the aerial unmanned aerial vehicle, and the GPS position coordinates, thereby realizing flexible adjustment of the shooting gesture for a moving target to be tracked.
In a specific embodiment of the present invention, the specific adjustment process for adaptively adjusting the shooting gesture when the type of the target to be tracked is non-moving is as follows: E1, when the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is smaller than the set reference value and the contour of the target to be tracked completely coincides with the input contour of the target to be tracked, the adjustment is obtained in the same manner as the shooting-gesture adaptive adjustment under complete coincidence when the type of the target to be tracked is moving.
E2, when the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is smaller than the set reference value and the contour of the target to be tracked partially coincides with the input contour of the target to be tracked, the relative azimuth of the current contour image 2 of the target to be tracked, the offset distance 5 of the contour center point of the target to be tracked, and its relative azimuth are extracted, and the current GPS position coordinates are extracted from the aerial photographing setting parameters.
E3, when the contour image 2 of the target to be tracked is above the captured image 1, the offset distance 5 of the contour center point of the target to be tracked is extracted and marked as x_front; a forward-movement instruction is sent to the target aerial unmanned aerial vehicle, and x_front is taken as the forward-movement distance value.
E4, when the contour image 2 of the target to be tracked is below the captured image 1, the offset distance 5 of the contour center point of the target to be tracked is extracted and marked as x_back; a backward-movement instruction is sent to the target aerial unmanned aerial vehicle, and x_back is taken as the backward-movement distance value.
E5, when the center point position 4 of the contour of the target to be tracked is located on the left or right side of the center point position 3 of the captured image, the shooting gesture is adaptively adjusted in the same manner as in the complete-coincidence case.
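The key difference between the moving case (D6/D7) and the non-moving case (E3/E4) is how the vertical offset of the tracked contour is consumed: for a moving target it changes the aerial speed, while for a static target it displaces the drone. A hypothetical dispatcher illustrating this contrast (all names and the target-type labels are assumptions for illustration):

```python
def vertical_correction(target_type, region, offset, v_ref=1.0):
    """Map the contour's vertical offset to a command, per target type.

    Moving target (D6/D7): scale the offset by v_ref into a speed change.
    Non-moving target (E3/E4): use the offset directly as a move distance.
    """
    if target_type == "moving":
        cmd = "accelerate" if region == "above" else "decelerate"
        return (cmd, offset * v_ref)  # v = x_up * v_ref or x_down * v_ref
    cmd = "move_forward" if region == "above" else "move_backward"
    return (cmd, offset)              # x_front or x_back as the distance
```

A static target drifting toward the head of the frame thus produces a one-time forward displacement rather than a sustained speed increase.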
According to this embodiment of the invention, shooting-gesture adaptive adjustment for both moving and non-moving targets to be tracked is performed by combining the contour of the target to be tracked with its different positions relative to the captured image, so that the integrity and high definition of the captured image are improved, and the coverage and reliability of the shooting-gesture adaptive adjustment of the aerial unmanned aerial vehicle are improved.
The foregoing is merely illustrative and explanatory of the principles of this invention; various modifications and additions may be made to the specific embodiments described, or similar arrangements may be substituted, by those skilled in the art, without departing from the principles of this invention or exceeding the scope of this invention as defined in the claims.