CN107564054A - A low-noise micro unmanned aerial vehicle (UAV) reconnaissance-equipment monitoring method - Google Patents
Publication number: CN107564054A (application CN201710800415.0A)
Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
To reduce the data-processing power-consumption load imposed on the processor and the communication unit during three-dimensional video acquisition, the invention provides a low-noise micro UAV reconnaissance-equipment monitoring method comprising the following steps: (1) acquire two images at different heights and different angles; (2) preprocess the images; (3) send the images to the monitoring client.
Description
Technical field
The present invention relates to the field of three-dimensional video acquisition and, more particularly, to a low-noise micro UAV reconnaissance-equipment monitoring method.
Background art
Reconnaissance unmanned aerial vehicles, especially quadrotors, have developed considerably along with aerial-photography technology. Professional aerial reconnaissance involves long shooting sessions, demanding photographic requirements, and skilled operators, so the usual workflow is to shoot repeatedly and then edit and splice the footage afterwards to obtain the best presentation the user can enjoy; in such professional settings, much effort also goes into refinements of shooting quality such as image stabilization.
Most existing reconnaissance video links return analog video signals, which yields blurry images. Meanwhile, a UAV can continuously acquire high-precision sequential images with large overlap, but the acquired images lose depth information. Image-based three-dimensional reconstruction refers to methods and techniques that automatically recover the three-dimensional structure of a scene from several digital camera images. In recent years, three-dimensional reconstruction has achieved great success in video and 3D reconstruction processing; applying it to UAV imagery would enable fully automatic reconstruction from UAV images, expand related applications, and raise the application level of UAVs. However, research on three-dimensional reconstruction from UAV image sequences is still at an early stage, with the following main problems: (1) compared with ground imagery, reconstruction from UAV image sequences usually involves large data volumes and large scenes; (2) mature computer-vision algorithms are mostly applied directly to UAV sequence reconstruction without adaptation; (3) low-precision auxiliary information is not fully exploited.
In the prior art, Chinese invention patent application CN201610987031.X discloses a batch three-dimensional reconstruction method for UAV image sequences, comprising: step 1, image matching fused with low-precision GPS/INS information; step 2, building an epipolar graph; step 3, computing a globally consistent rotation set; step 4, initializing the camera centers; step 5, generating corresponding feature-point tracks; step 6, initializing the 3D structure; step 7, bundle adjustment; step 8, dense point-cloud reconstruction; step 9, texture mapping. That scheme achieves batch large-scene three-dimensional reconstruction of large volumes of UAV sequential imagery, and improves reconstruction precision and efficiency by technical means such as matching images with low-precision GPS/IMU priors, building an epipolar graph to derive multi-view point tracks, and using a new bundle-adjustment objective function.
However, these prior-art methods are computationally heavy; the computation required for three-dimensional image processing in particular often makes the aircraft consume too much power in image processing and data transmission.
Summary of the invention
To reduce the data-processing power-consumption load imposed on the processor and the communication unit during three-dimensional video acquisition, the present invention provides a low-noise micro UAV reconnaissance-equipment monitoring method comprising the following steps:
(1) acquire two images at different heights and different angles;
(2) preprocess the images;
(3) send the images to the monitoring client.
Further, step (2) includes:
(21) training the image compression coefficient;
(22) acquiring image data in multiple directions at different altitudes and performing image compression.
Further, step (21) includes:
A. acquire an image/video signal I1(t) from a first time t1 to a second time t2 in a first horizontal direction at angle α to the heading direction θ, and acquire an image/video signal I2(t) from a third time t1 to a fourth time t2 in a second horizontal direction at angle β to the heading direction θ, with α different from β;
B. acquire the altitude h1 corresponding to the first horizontal direction and the altitude h2 corresponding to the second horizontal direction;
C. let H1, H2 and H3 be given by the defining formula (omitted in the source) and apply the following transform to the acquired signals I1(t) and I2(t):

J1(t) = -(1/(6π³)) ∫_{-Y}^{+Y} ∫_{-X}^{+X} (I1(t)/H1) × H3 × e^(-H1 + iH2) dx dy
J2(t) = -(1/(6π³)) ∫_{-Y}^{+Y} ∫_{-X}^{+X} (I2(t)/H1) × H2 × e^(-H1 + iH3) dx dy

obtaining J1(t) and J2(t);
D. apply a Fourier transform to J1(t) and J2(t) respectively and determine the spectral components in which the two differ;
E. apply an inverse Fourier transform to the differing spectral components, perform a binomial expansion, and obtain the constant-term coefficient C and the phase angle ψ after the inverse transform;
F. compute the compression coefficient for I1(t) and I2(t):

En = (1/√C) × (α/ψ) × (β/ψ) × √[(H2 × Σ_{i=0}^{255} Σ_{j=0}^{255} P_ij log2 P_ij)² + (H1 × Σ_{i=0}^{255} Σ_{j=0}^{255} P′_ij log2 P′_ij)²]

where P_ij denotes the pixels of image/video signal I1(t) and P′_ij denotes the pixels of image/video signal I2(t).
Further, step (22) includes:
A. from the fifth time t3 to the sixth time t4, after the fourth time t2, acquire image/video signals I3(t) and I4(t) in a third horizontal direction at angle γ to the heading direction θ and in a fourth horizontal direction at angle ξ to the heading direction θ, with γ different from ξ; acquire the altitude h3 corresponding to the third horizontal direction and the altitude h4 corresponding to the fourth horizontal direction;
B. compute the wavelet-transform basis functions of I3(t) and I4(t):

w1 = (1/En) × (γ/π) × Σ_{i=0}^{255} Q_ij log2 Q_ij
w2 = (1/En) × (ξ/π) × Σ_{i=0}^{255} Q′_ij log2 Q′_ij

where Q_ij and Q′_ij correspond to the pixels of I3(t) and I4(t) respectively;
C. using w1 and w2 as basis functions, apply a wavelet transform to I3(t) and I4(t) respectively, obtaining V3 and V4;
D. let H′1, H′2 and H′3 be given by the defining formula (omitted in the source) and apply the following transform to the acquired signals I3(t) and I4(t):

J′1(t) = -(1/(6π³)) ∫_{-Y}^{+Y} ∫_{-X}^{+X} (I3(t)/H′1) × H′3 × e^(-H′1 + iH′2) dx dy
J′2(t) = -(1/(6π³)) ∫_{-Y}^{+Y} ∫_{-X}^{+X} (I4(t)/H′1) × H′2 × e^(-H′1 + iH′3) dx dy

obtaining J′1(t) and J′2(t); apply a binomial expansion to J′1(t) and J′2(t) respectively, obtaining the constant terms C′1 and C′2;
E. normalize V3 by C′1 and normalize V4 by C′2;
F. apply an inverse wavelet transform to the normalized results and send the result of the inverse wavelet transform to the communication unit of the equipment.
Further, step (3) includes:
(31) encrypting the images to be sent;
(32) sending the encrypted data to the monitoring client.
Further, step (31) includes:
A. perform analog-to-digital conversion on the image content to be sent;
B. encrypt the digital information obtained after analog-to-digital conversion using a chaos encryption algorithm.
Further, the angles α, β, γ and ξ are determined according to the thermal-induction tracking direction.
Further, the angles α, β, γ and ξ should satisfy the constraint given by the formula (omitted in the source).
The beneficial effects of the invention are as follows:
(1) By obtaining images at different angles and different heights, the invention reduces reliance on costly multi-camera rigs for acquiring three-dimensional video, significantly lowering the purchase and operating costs of the video-capture equipment.
(2) The invention creatively obtains a compression coefficient of acceptable definition by training on data, and then uses this coefficient to reduce the volume of video data to be transmitted, avoiding the heavy computation of routine operations, such as angle conversion, performed on video data in the prior art.
(3) By reducing the amount of data processing, the invention improves the power-supply stability of the monitoring process, which helps extend the monitoring duration and hence the endurance of micro-UAV reconnaissance equipment.
(4) The video-acquisition direction of the invention follows the thermal-sensing tracking direction, which greatly improves the definition and practicality of the collected video.
Brief description of the drawings
Fig. 1 shows the flow chart of the method according to the invention.
Embodiment
As shown in figure 1, according to a preferred embodiment of the invention, the invention provides a kind of low-noise micro-size unmanned plane-detect
Apparatus monitoring method is examined, is comprised the following steps:
(1) two are gathered positioned at different height and the image of different angle;
(2) image is pre-processed;
(3) monitoring client is sent the images to.
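The three steps above can be sketched as a minimal pipeline. Every function body below is an illustrative stand-in with stub data and hypothetical names, not the patented processing:

```python
def capture_pair():
    """Step (1): two frames taken at different heights and viewing angles (stub data)."""
    frame_a = [[10, 20], [30, 40]]  # hypothetical frame at altitude h1, angle alpha
    frame_b = [[12, 18], [28, 44]]  # hypothetical frame at altitude h2, angle beta
    return [frame_a, frame_b]

def preprocess(frames):
    """Step (2): stand-in compression -- simply halve every pixel value."""
    return [[[p // 2 for p in row] for row in f] for f in frames]

def send_to_monitor(frames):
    """Step (3): stand-in transmission -- serialize pixels and report the byte count."""
    payload = bytes(p for f in frames for row in f for p in row)
    return len(payload)

frames = capture_pair()
compressed = preprocess(frames)
sent_bytes = send_to_monitor(compressed)
```

The point of the structure is only that acquisition, preprocessing, and transmission are separable stages; the actual compression and encryption steps are detailed below.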
Preferably, step (2) includes:
(21) training the image compression coefficient;
(22) acquiring image data in multiple directions at different altitudes and performing image compression.
Preferably, step (21) includes:
A. acquire an image/video signal I1(t) from a first time t1 to a second time t2 in a first horizontal direction at angle α to the heading direction θ, and acquire an image/video signal I2(t) from a third time t1 to a fourth time t2 in a second horizontal direction at angle β to the heading direction θ, with α different from β;
B. acquire the altitude h1 corresponding to the first horizontal direction and the altitude h2 corresponding to the second horizontal direction;
C. let H1, H2 and H3 be given by the defining formula (omitted in the source) and apply the following transform to the acquired signals I1(t) and I2(t):

J1(t) = -(1/(6π³)) ∫_{-Y}^{+Y} ∫_{-X}^{+X} (I1(t)/H1) × H3 × e^(-H1 + iH2) dx dy
J2(t) = -(1/(6π³)) ∫_{-Y}^{+Y} ∫_{-X}^{+X} (I2(t)/H1) × H2 × e^(-H1 + iH3) dx dy

obtaining J1(t) and J2(t);
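A numeric reading of the transform in step C, under the assumptions that H1, H2 and H3 are constants (their defining formula is not reproduced in the source) and that the double integral is approximated by a Riemann sum over the pixel grid with unit spacing:

```python
import cmath
import math

def transform_j(image, h1, h2, h3):
    """Riemann-sum approximation of
    J(t) = -1/(6*pi^3) * iint (I(t)/H1) * H3 * exp(-H1 + i*H2) dx dy
    with dx = dy = 1 per pixel; h1..h3 are placeholder constants here."""
    scale = -1.0 / (6.0 * math.pi ** 3)
    phase = cmath.exp(complex(-h1, h2))   # e^(-H1 + i*H2), constant over x and y
    pixel_sum = sum(p / h1 for row in image for p in row)
    return scale * pixel_sum * h3 * phase

img = [[1.0, 2.0], [3.0, 4.0]]                 # tiny stand-in image
j1 = transform_j(img, h1=1.0, h2=0.5, h3=2.0)  # placeholder H values
```

The transform yields a small complex-valued quantity per frame; the sign and phase carry over into the Fourier comparison of the next steps.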
D. apply a Fourier transform to J1(t) and J2(t) respectively and determine the spectral components in which the two differ;
E. apply an inverse Fourier transform to the differing spectral components, perform a binomial expansion, and obtain the constant-term coefficient C and the phase angle ψ after the inverse transform;
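Steps D and E compare the two transformed signals in the frequency domain. A sketch of step D with a naive discrete Fourier transform over sampled stand-in signals (a real implementation would use an FFT):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * m * k / n) for k in range(n))
            for m in range(n)]

def differing_bins(a, b, tol=1e-9):
    """Indices of the spectral components in which the two signals differ (step D)."""
    return [m for m, (u, v) in enumerate(zip(dft(a), dft(b))) if abs(u - v) > tol]

j1 = [1.0, 2.0, 3.0, 4.0]   # stand-ins for sampled J1(t) and J2(t)
j2 = [2.0, 3.0, 4.0, 5.0]   # differs from j1 by a constant offset
bins = differing_bins(j1, j2)
```

Because the DFT is linear, a constant offset between the two signals shows up only in the DC bin, so `bins` contains only index 0 here; step E would then inverse-transform exactly those differing components.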
F. compute the compression coefficient for I1(t) and I2(t):

En = (1/√C) × (α/ψ) × (β/ψ) × √[(H2 × Σ_{i=0}^{255} Σ_{j=0}^{255} P_ij log2 P_ij)² + (H1 × Σ_{i=0}^{255} Σ_{j=0}^{255} P′_ij log2 P′_ij)²]

where P_ij denotes the pixels of image/video signal I1(t) and P′_ij denotes the pixels of image/video signal I2(t).
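Step F combines entropy-like sums over the two images. A sketch of the En formula, with tiny normalized histograms standing in for the 256×256 pixel sums, and placeholder values for C, ψ, α, β, H1 and H2 (in the method those come from the Fourier and expansion steps above):

```python
import math

def entropy_sum(probs):
    """Sum of p*log2(p) over a normalized pixel distribution (zero entries skipped)."""
    return sum(p * math.log2(p) for p in probs if p > 0)

def compression_coefficient(P, P2, C, psi, alpha, beta, H1, H2):
    """En = (1/sqrt(C)) * (alpha/psi) * (beta/psi)
         * sqrt((H2*sum P log2 P)^2 + (H1*sum P' log2 P')^2)."""
    s1 = H2 * entropy_sum(P)    # term for the first image I1
    s2 = H1 * entropy_sum(P2)   # term for the second image I2
    return (1 / math.sqrt(C)) * (alpha / psi) * (beta / psi) * math.hypot(s1, s2)

P = [0.25, 0.25, 0.25, 0.25]   # uniform distribution: entropy_sum = -2
P2 = [0.5, 0.5]                # entropy_sum = -1
En = compression_coefficient(P, P2, C=4.0, psi=1.0, alpha=1.0, beta=1.0, H1=1.0, H2=1.0)
```

The p·log2 p sums are Shannon-entropy terms, so En effectively scales with how much information each captured image carries.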
Preferably, step (22) includes:
A. from the fifth time t3 to the sixth time t4, after the fourth time t2, acquire image/video signals I3(t) and I4(t) in a third horizontal direction at angle γ to the heading direction θ and in a fourth horizontal direction at angle ξ to the heading direction θ, with γ different from ξ; acquire the altitude h3 corresponding to the third horizontal direction and the altitude h4 corresponding to the fourth horizontal direction;
B. compute the wavelet-transform basis functions of I3(t) and I4(t):

w1 = (1/En) × (γ/π) × Σ_{i=0}^{255} Q_ij log2 Q_ij
w2 = (1/En) × (ξ/π) × Σ_{i=0}^{255} Q′_ij log2 Q′_ij

where Q_ij and Q′_ij correspond to the pixels of I3(t) and I4(t) respectively;
C. using w1 and w2 as basis functions, apply a wavelet transform to I3(t) and I4(t) respectively, obtaining V3 and V4;
D. let H′1, H′2 and H′3 be given by the defining formula (omitted in the source) and apply the following transform to the acquired signals I3(t) and I4(t):

J′1(t) = -(1/(6π³)) ∫_{-Y}^{+Y} ∫_{-X}^{+X} (I3(t)/H′1) × H′3 × e^(-H′1 + iH′2) dx dy
J′2(t) = -(1/(6π³)) ∫_{-Y}^{+Y} ∫_{-X}^{+X} (I4(t)/H′1) × H′2 × e^(-H′1 + iH′3) dx dy

obtaining J′1(t) and J′2(t); apply a binomial expansion to J′1(t) and J′2(t) respectively, obtaining the constant terms C′1 and C′2;
E. normalize V3 by C′1 and normalize V4 by C′2;
F. apply an inverse wavelet transform to the normalized results and send the result of the inverse wavelet transform to the communication unit of the equipment.
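The wavelet stage of step (22) can be illustrated with a one-level Haar transform standing in for the patent's custom entropy-derived bases w1/w2 (which are given only symbolically). Normalization by a placeholder constant term C′, followed by the inverse transform and rescaling, round-trips the signal exactly before it is handed to the communication unit:

```python
def haar_forward(signal):
    """One-level Haar DWT: pairwise averages (approximation) and differences (detail)."""
    avg = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    diff = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return avg, diff

def haar_inverse(avg, diff):
    """Exact inverse of haar_forward."""
    out = []
    for a, d in zip(avg, diff):
        out.extend([a + d, a - d])
    return out

signal = [4.0, 2.0, 6.0, 8.0]       # stand-in scan line of I3(t)
avg, diff = haar_forward(signal)
c1 = 2.0                            # placeholder for the constant term C'1
norm_avg = [a / c1 for a in avg]    # normalization as in step E
norm_diff = [d / c1 for d in diff]
restored = [v * c1 for v in haar_inverse(norm_avg, norm_diff)]
```

Compression would come from quantizing or discarding small `diff` coefficients before transmission; the round trip here only demonstrates that the forward/normalize/inverse chain is lossless by itself.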
Preferably, step (3) includes:
(31) encrypting the images to be sent;
(32) sending the encrypted data to the monitoring client.
Preferably, step (31) includes:
A. perform analog-to-digital conversion on the image content to be sent;
B. encrypt the digital information obtained after analog-to-digital conversion using a chaos encryption algorithm.
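The source names only "a chaos encryption algorithm". A common instance is XOR-masking the digitized bytes with a keystream drawn from the logistic map x → r·x·(1−x); the map, the parameter r and the seed below are assumptions, not the patent's specification:

```python
def logistic_keystream(seed, r, n):
    """Generate n keystream bytes from logistic-map iterates x -> r*x*(1-x)."""
    x, out = seed, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return out

def chaos_xor(data, seed=0.654321, r=3.99):
    """Encrypt or decrypt bytes by XOR with the chaotic keystream (an involution)."""
    ks = logistic_keystream(seed, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

plain = bytes([10, 20, 30, 40])   # stand-in digitized image bytes
cipher = chaos_xor(plain)
```

Because XOR with the same keystream is its own inverse, the monitoring client recovers the plaintext by calling `chaos_xor` again with the shared seed and r.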
Preferably, the angles α, β, γ and ξ are determined according to the thermal-induction tracking direction.
Preferably, the angles α, β, γ and ξ should satisfy the constraint given by the formula (omitted in the source).
The foregoing description of preferred embodiments of the invention is for the purpose of illustration and is not intended to limit the invention to the precise forms disclosed. Modifications and variations based on the above teaching, or learned from practicing embodiments of the invention, are possible. The embodiments were chosen and described to explain the principles of the invention and to enable those skilled in the art to select and apply the invention in various embodiments in practical applications. The technical scope of the invention is intended to be defined by the claims and their equivalents.
Claims (8)
1. A low-noise micro UAV reconnaissance-equipment monitoring method, characterized by comprising the following steps:
(1) acquiring two images at different heights and different angles;
(2) preprocessing the images;
(3) sending the images to the monitoring client.
2. The method according to claim 1, characterized in that step (2) comprises:
(21) training the image compression coefficient;
(22) acquiring image data in multiple directions at different altitudes and performing image compression.
3. The method according to claim 2, characterized in that step (21) comprises:
A. acquiring an image/video signal I1(t) from a first time t1 to a second time t2 in a first horizontal direction at angle α to the heading direction θ, and acquiring an image/video signal I2(t) from a third time t1 to a fourth time t2 in a second horizontal direction at angle β to the heading direction θ, α being different from β;
B. acquiring the altitude h1 corresponding to the first horizontal direction and the altitude h2 corresponding to the second horizontal direction;
C. letting H1, H2 and H3 be given by the defining formula (omitted in the source) and applying the following transform to the acquired signals I1(t) and I2(t):
J1(t) = -(1/(6π³)) ∫_{-Y}^{+Y} ∫_{-X}^{+X} (I1(t)/H1) × H3 × e^(-H1 + iH2) dx dy
J2(t) = -(1/(6π³)) ∫_{-Y}^{+Y} ∫_{-X}^{+X} (I2(t)/H1) × H2 × e^(-H1 + iH3) dx dy
obtaining J1(t) and J2(t);
D. applying a Fourier transform to J1(t) and J2(t) respectively and determining the spectral components in which the two differ;
E. applying an inverse Fourier transform to the differing spectral components and performing a binomial expansion, obtaining the constant-term coefficient C and the phase angle ψ after the inverse transform;
F. computing the compression coefficient for I1(t) and I2(t):
En = (1/√C) × (α/ψ) × (β/ψ) × √[(H2 × Σ_{i=0}^{255} Σ_{j=0}^{255} P_ij log2 P_ij)² + (H1 × Σ_{i=0}^{255} Σ_{j=0}^{255} P′_ij log2 P′_ij)²]
where P_ij denotes the pixels of image/video signal I1(t) and P′_ij denotes the pixels of image/video signal I2(t).
4. The method according to claim 3, characterized in that step (22) comprises:
A. from the fifth time t3 to the sixth time t4, after the fourth time t2, acquiring image/video signals I3(t) and I4(t) in a third horizontal direction at angle γ to the heading direction θ and in a fourth horizontal direction at angle ξ to the heading direction θ, γ being different from ξ, and acquiring the altitude h3 corresponding to the third horizontal direction and the altitude h4 corresponding to the fourth horizontal direction;
B. computing the wavelet-transform basis functions of I3(t) and I4(t):
w1 = (1/En) × (γ/π) × Σ_{i=0}^{255} Q_ij log2 Q_ij
w2 = (1/En) × (ξ/π) × Σ_{i=0}^{255} Q′_ij log2 Q′_ij
where Q_ij and Q′_ij correspond to the pixels of I3(t) and I4(t) respectively;
C. using w1 and w2 as basis functions, applying a wavelet transform to I3(t) and I4(t) respectively, obtaining V3 and V4;
D. letting H′1, H′2 and H′3 be given by the defining formula (omitted in the source) and applying the following transform to the acquired signals I3(t) and I4(t):
J′1(t) = -(1/(6π³)) ∫_{-Y}^{+Y} ∫_{-X}^{+X} (I3(t)/H′1) × H′3 × e^(-H′1 + iH′2) dx dy
J′2(t) = -(1/(6π³)) ∫_{-Y}^{+Y} ∫_{-X}^{+X} (I4(t)/H′1) × H′2 × e^(-H′1 + iH′3) dx dy
obtaining J′1(t) and J′2(t); applying a binomial expansion to J′1(t) and J′2(t) respectively, obtaining the constant terms C′1 and C′2;
E. normalizing V3 by C′1 and normalizing V4 by C′2;
F. applying an inverse wavelet transform to the normalized results and sending the result of the inverse wavelet transform to the communication unit of the equipment.
5. The method according to claim 1, characterized in that step (3) comprises:
(31) encrypting the images to be sent;
(32) sending the encrypted data to the monitoring client.
6. The method according to claim 5, characterized in that step (31) comprises:
A. performing analog-to-digital conversion on the image content to be sent;
B. encrypting the digital information obtained after analog-to-digital conversion using a chaos encryption algorithm.
7. The method according to claim 4, characterized in that the angles α, β, γ and ξ are determined according to the thermal-induction tracking direction.
8. The method according to claim 7, characterized in that the angles α, β, γ and ξ should satisfy the constraint given by the formula (omitted in the source).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201710800415.0A | 2017-09-07 | 2017-09-07 | A low-noise micro UAV reconnaissance-equipment monitoring method

Publications (1)
Publication Number | Publication Date
---|---
CN107564054A | 2018-01-09

Family ID: 60979403
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN106687883A | 2015-01-28 | 2017-05-17 | Kyocera Document Solutions | Power supply device and image processing device
CN104967783A | 2015-07-01 | 2015-10-07 | Northwestern Polytechnical University | Multi-channel micro image acquisition system for a micro/nano satellite
CN106295682A | 2016-08-02 | 2017-01-04 | Xiamen Meitu Technology | A method, device and computing device for judging the picture-quality factor
CN106210145A | 2016-09-12 | 2016-12-07 | Beihai Hesi Technology | An agricultural environment monitoring system and method based on the Internet of Things

2017-09-07: application CN201710800415.0A filed (status: not active, Withdrawn)

Non-Patent Citations (1)
Zhang Miao et al., "Application of the Fourier Transform and Inverse Transform in Dot-Matrix Image Information Compression", China After-School Education
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20180109