CN112906475B - Artificial intelligence-based rolling shutter imaging method and system for urban surveying and mapping unmanned aerial vehicle

Artificial intelligence-based rolling shutter imaging method and system for urban surveying and mapping unmanned aerial vehicle

Info

Publication number
CN112906475B
CN112906475B (application CN202110070839.2A)
Authority
CN
China
Prior art keywords
image
macro block
compensation
motion vector
frame
Prior art date
Legal status
Active
Application number
CN202110070839.2A
Other languages
Chinese (zh)
Other versions
CN112906475A (en)
Inventor
杨慧
Current Assignee
Zhengzhou Gaosun Information Technology Co ltd
Original Assignee
Zhengzhou Kaiwen Electronic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhengzhou Kaiwen Electronic Technology Co ltd filed Critical Zhengzhou Kaiwen Electronic Technology Co ltd
Priority to CN202110070839.2A priority Critical patent/CN112906475B/en
Publication of CN112906475A publication Critical patent/CN112906475A/en
Application granted granted Critical
Publication of CN112906475B publication Critical patent/CN112906475B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/689Motion occurring during a rolling shutter mode
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of artificial intelligence, in particular to an artificial intelligence-based rolling shutter imaging method and system for an urban surveying and mapping unmanned aerial vehicle. A model frame of the urban building image is matched in the CIM according to the key points of the urban building image acquired in real time; the urban building image and the model frame are correspondingly divided into N macro block images, offset compensation is performed by comparing each corresponding macro block image to obtain an initial macro block compensation image, and compensation optimization is further performed on the initial macro block compensation image according to the motion vector of each macro block image. By matching the model frame in the CIM model, the matching accuracy can be improved, and the jelly effect can be well eliminated by performing offset compensation and motion vector compensation on each macro block image divided from the urban building image, so that the resulting image quality is better.

Description

Urban surveying and mapping unmanned aerial vehicle rolling shutter imaging method and system based on artificial intelligence
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an artificial intelligence-based rolling shutter imaging method and system for an unmanned aerial vehicle for urban surveying and mapping.
Background
The phenomena of shaking, twisting, tilting, partial exposure, and the like that occur when a camera using a rolling shutter photographs an object in high-speed motion, or when the camera itself is moving at high speed while photographing, are called the jelly effect.
A global shutter can effectively avoid the jelly effect, but the rolling shutter has the advantages of simple readout, high cost efficiency, less transistor heat, low electronic noise, and the like, so the rolling shutter is currently the most widely used shutter technology.
In urban surveying and mapping, an unmanned aerial vehicle equipped with a rolling shutter camera is often used for high-altitude operation. During high-altitude operation, however, vibration of the airframe and the sensor is inevitable due to propeller balance and other causes, which in turn causes the jelly effect in the images captured by the unmanned aerial vehicle. At present, the shake of the unmanned aerial vehicle is reduced by equipping it with a gimbal, which increases the stability of the picture. Although equipping the unmanned aerial vehicle with a gimbal can avoid shake, when problems such as screw loosening occur in the gimbal, the gimbal itself inevitably becomes a vibration source, and gimbal vibration will in turn produce the jelly effect.
In the prior art, in order to eliminate the jelly effect, an image acquired in real time is generally divided into blocks, and image compensation optimization is then performed according to the offset vectors or motion vectors between corresponding blocks in the previous and subsequent frame images.
In practice, the inventors found that the above prior art has the following disadvantage: when a large jelly effect occurs, offset prediction or motion vector estimation uses the previous frame image as the reference image for the next frame image, and when the selected reference image itself exhibits the jelly effect, the predicted result has a large error.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide an artificial intelligence-based rolling shutter imaging method and system for an unmanned aerial vehicle for urban surveying and mapping, and the adopted technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides an artificial intelligence-based rolling shutter imaging method for an unmanned aerial vehicle for urban mapping, where the method includes:
acquiring a city building image and a depth image of the city building image by using image acquisition equipment, and acquiring key points of a building rigid body from the city building image by using a key point detection network, wherein the key points are each angular point and an anchor frame of the building rigid body;
combining the key points and the depth image to obtain a key point three-dimensional point cloud, and matching the key point three-dimensional point cloud to a corresponding model frame of the urban building image in a CIM (city information model);
correspondingly dividing the city building image and the model frame into N macro block images, and calculating the area intersection ratio of key point Gaussian hot spots between each macro block image in the urban building image and the macro block image at the corresponding position in the model frame, wherein the key point Gaussian hot spots are obtained by processing the key points with a Gaussian kernel; when the area intersection ratio is smaller than an area threshold, performing offset compensation on the macro block image of the urban building image to obtain an initial macro block compensation image; otherwise, storing the macro block image of the city building image in an image buffer area;
when the initial macro block compensation image can be matched with a corresponding reference macro block image within the nearest M frames, acquiring a motion vector of the initial macro block compensation image, wherein the reference macro block image is the macro block image in the reference frame that is most similar to the current initial macro block compensation image, and the reference frame is composed of macro block images in the urban building image whose area intersection ratio is greater than or equal to the area threshold; otherwise, when the initial macro block compensation image cannot be matched with the corresponding reference macro block image within the nearest M frames, searching the image buffer area for an adjacent macro block image adjacent to the initial macro block compensation image, and calculating the motion vector of the adjacent macro block image to obtain the motion vector of the initial macro block compensation image;
and correspondingly performing compensation optimization on the initial macro block compensation image by using the motion vector.
Further, when the model frame of a new building image cannot be matched in the CIM, the compensation method for the new building image comprises the following steps:
predicting a forward motion vector of the current frame by using the macro block images of the previous A frames of the current frame in which the new building image appears;
predicting a backward motion vector of the current frame by using the macro block images of the subsequent A frames of the current frame;
and performing image compensation on the current frame by combining the forward motion vector and the backward motion vector.
Further, the offset compensation method comprises:
and obtaining an offset vector of the macro block image according to the three-dimensional coordinates of the key points in the macro block image in the city building image and the three-dimensional coordinates of the key points in the corresponding macro block image in the model frame, and performing offset compensation on the macro block image by using the offset vector.
Further, when the initial macro block compensation image can be matched with the corresponding reference macro block image within the nearest M frames, the motion vector of the initial macro block compensation image is acquired through a search algorithm of motion estimation.
Further, when the initial macro block compensation image cannot be matched with the corresponding reference macro block image within the nearest M frames, the method for obtaining the motion vector of the initial macro block compensation image comprises the following steps:
obtaining the motion vector of each adjacent macro block image through a search algorithm of motion estimation;
and calculating the average motion vector of the adjacent macro block image, and taking the average motion vector as the motion vector of the initial macro block compensation image.
In a second aspect, another embodiment of the present invention provides an artificial intelligence-based rolling shutter imaging system for urban mapping unmanned aerial vehicles, the system including:
the system comprises a key point detection unit, a data processing unit and a data processing unit, wherein the key point detection unit is used for acquiring a city building image and a depth image of the city building image by using image acquisition equipment, and acquiring key points of a building rigid body from the city building image by using a key point detection network, wherein the key points are each angular point and an anchor frame of the building rigid body;
the image matching unit is used for obtaining a key point three-dimensional point cloud by combining the key points and the depth image, and matching the key point three-dimensional point cloud in a CIM (city information model) to a corresponding model frame of the urban building image;
an offset compensation unit for correspondingly dividing the city building image and the model frame into N macro block images and calculating the area intersection ratio of key point Gaussian hot spots between each macro block image in the urban building image and the macro block image at the corresponding position in the model frame, wherein the key point Gaussian hot spots are obtained by processing the key points with a Gaussian kernel; when the area intersection ratio is smaller than an area threshold, offset compensation is performed on the macro block image of the urban building image to obtain an initial macro block compensation image; otherwise, the macro block image of the city building image is stored in an image buffer area;
a motion vector prediction unit for acquiring the motion vector of the initial macro block compensation image when the initial macro block compensation image can be matched with a corresponding reference macro block image within the nearest M frames, wherein the reference macro block image is the macro block image in the reference frame that is most similar to the current initial macro block compensation image, and the reference frame is composed of macro block images in the urban building image whose area intersection ratio is greater than or equal to the area threshold; otherwise, when the initial macro block compensation image cannot be matched with the corresponding reference macro block image within the nearest M frames, searching the image buffer area for an adjacent macro block image adjacent to the initial macro block compensation image, and calculating the motion vector of the adjacent macro block image to obtain the motion vector of the initial macro block compensation image;
and the compensation optimization unit is used for correspondingly performing compensation optimization on the initial macro block compensation image by using the motion vector.
Further, when the model frame of a new building image cannot be matched in the CIM model in the image matching unit, the method for compensating the new building image comprises the following steps:
a forward vector obtaining unit for predicting a forward motion vector of the current frame by using the macro block images of the previous A frames of the current frame in which the new building image appears;
a backward vector obtaining unit for predicting a backward motion vector of the current frame by using the macro block images of the subsequent A frames of the current frame;
and an image compensation unit for performing image compensation on the current frame by combining the forward motion vector and the backward motion vector.
Further, the method of offset compensation in the offset compensation unit includes:
and obtaining an offset vector of the macro block image according to the three-dimensional coordinates of the key points in the macro block image in the city building image and the three-dimensional coordinates of the key points in the corresponding macro block image in the model frame, and performing offset compensation on the macro block image by using the offset vector.
Further, the motion vector prediction unit includes a first motion vector detection unit, which is used for acquiring the motion vector of the initial macro block compensation image through a search algorithm of motion estimation when the initial macro block compensation image can be matched with the corresponding reference macro block image within the nearest M frames.
Further, the motion vector prediction unit further includes a second motion vector detection unit for obtaining the motion vector of the initial macro block compensation image when the initial macro block compensation image cannot be matched with the corresponding reference macro block image within the nearest M frames, and the second motion vector detection unit further includes:
the vector analysis unit is used for obtaining the motion vector of each adjacent macro block image through a search algorithm of motion estimation;
and the vector processing unit is used for calculating the average motion vector of the adjacent macro block image and taking the average motion vector as the motion vector of the initial macro block compensation image.
The embodiment of the invention has at least the following beneficial effects: (1) The model frame is matched in the CIM model; because the model frame does not exhibit the jelly effect, taking the model frame as the reference image improves the matching accuracy, and performing offset compensation and motion vector compensation on each macro block image divided from the urban building image can well eliminate the jelly effect, so that the resulting image quality is better.
(2) When the motion vector of the current macro block image cannot be predicted from the relationship between adjacent frames, according to the principle that adjacent macro block images belonging to the same moving object are highly correlated, the adjacent macro block images of the current macro block image in the image buffer area, whose offsets are small, are used, and their motion vector is taken as the motion vector of the current macro block image; this improves the accuracy of the motion vector compensation optimization and reduces errors.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a rolling shutter imaging method of an unmanned aerial vehicle for urban mapping based on artificial intelligence according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating steps of a method for rolling shutter imaging of an unmanned aerial vehicle for urban mapping based on artificial intelligence according to an embodiment of the present invention;
fig. 3 is a block diagram of a rolling shutter imaging system of an artificial intelligence-based urban surveying and mapping unmanned aerial vehicle according to another embodiment of the present invention;
FIG. 4 is a block diagram of an image matching unit according to an embodiment of the present invention;
FIG. 5 is a block diagram of a motion vector prediction unit according to an embodiment of the present invention;
fig. 6 is a block diagram of a second motion vector detection unit according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means and effects adopted by the present invention to achieve the intended objects, the artificial intelligence-based rolling shutter imaging method and system for an urban surveying and mapping unmanned aerial vehicle according to the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the artificial intelligence-based rolling shutter imaging method and system for the urban surveying and mapping unmanned aerial vehicle in detail by combining with the accompanying drawings.
Referring to the attached drawings 1 and 2, the embodiment of the invention provides an artificial intelligence-based rolling shutter imaging method for an unmanned aerial vehicle for urban surveying and mapping, which comprises the following specific steps:
and S001, acquiring a city building image and a depth image of the city building image by using image acquisition equipment, and acquiring key points of the building rigid body of the acquired city building image by using a key point detection network, wherein the key points are each corner point and each anchor frame of the building rigid body.
And S002, obtaining a key point three-dimensional point cloud by combining the key points and the depth image, and matching the key point three-dimensional point cloud in the CIM to a corresponding model frame of the urban building image.
Step S003, correspondingly dividing the city building image and the model frame into N macro block images, calculating the area intersection ratio of the key point Gaussian hotspots in each corresponding macro block image, and performing offset compensation on the macro block image to obtain an initial macro block compensation image when the area intersection ratio is smaller than an area threshold; otherwise, storing the macro block image in the image buffer.
Step S004, when the initial macro block compensation image can be matched with a corresponding reference macro block within the nearest M frames, obtaining the motion vector of the initial macro block compensation image; otherwise, when the initial macro block compensation image cannot be matched with a corresponding reference macro block within the nearest M frames, searching the image buffer area for an adjacent macro block image adjacent to the initial macro block compensation image, and calculating the motion vector of the adjacent macro block image to obtain the motion vector of the initial macro block compensation image.
In step S005, the compensation optimization is performed on the initial macroblock compensation image using the motion vector.
Further, in step S001, in the embodiment of the present invention, an RGBD camera carried by an unmanned aerial vehicle is used to capture an urban building, and two-dimensional image data of an acquired urban building image is sent to a key point detection network.
Preferably, in the embodiment of the present invention, a key point detection network of an encoder-decoder structure is adopted to perform key point detection on a building rigid body with obvious features in an acquired urban building image, wherein a specific training process of the key point detection network is as follows:
1) The data set consists of a large number of urban building images collected by the unmanned aerial vehicle that contain typical building features. An urban building image may contain only a city building, or may also include partial background such as city roads. 60% of the data set is randomly selected as the training set, 20% as the validation set, and 20% as the test set.
2) The labels used in the data set are key point labels, i.e., the positions of objects are marked with key points in the urban building image. In the embodiment of the invention, two types of key points are detected, namely the corner points and the anchor frames of the building rigid body. The labeling process is as follows: the positions of the key points are marked on a single channel with the same size as the data image, where a corner point is marked as 1 and an anchor frame is marked as 2; a Gaussian kernel is then applied so that each key point forms a key point Gaussian hotspot.
3) The loss function in the key point detection network adopts a mean square error loss function.
It should be noted that the embodiment of the present invention selects the key point for collecting the building rigid body because the building rigid body is a main feature group in the urban building image in the process of urban surveying and mapping by the unmanned aerial vehicle, and the key point for detecting the building rigid body is not only easy to collect but also has strong representativeness.
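As a concrete illustration of the label construction described above, the following sketch (a minimal example, not the patent's implementation) builds the single-channel Gaussian hotspot map from annotated key points; the kernel width sigma and the max-merge of overlapping hotspots are assumptions, since the embodiment only states that a Gaussian kernel turns each marked key point into a hotspot.

```python
import numpy as np

def gaussian_keypoint_heatmap(shape, keypoints, sigma=3.0):
    """Single-channel Gaussian hotspot label map for building rigid-body key points.

    shape     : (H, W) of the training image.
    keypoints : iterable of (row, col, cls) with cls 1 = corner point, 2 = anchor frame.
    sigma     : spread of the Gaussian kernel (an assumed value; not fixed by the patent).
    """
    heatmap = np.zeros(shape, dtype=np.float32)
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    for r, c, cls in keypoints:
        # Gaussian hotspot centred on the key point, scaled by its class mark (1 or 2)
        hotspot = cls * np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2.0 * sigma ** 2))
        heatmap = np.maximum(heatmap, hotspot)
    return heatmap
```

The key point detection network is then trained to regress such maps under the mean square error loss mentioned in step 3).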
Further, in step S002, the embodiment of the present invention performs three-dimensional point cloud conversion by combining the detected key points of the building rigid body and the depth image of the city building image collected by the RGBD camera to obtain a key point three-dimensional point cloud.
Furthermore, the position of the unmanned aerial vehicle is determined in real time by combining the route planning of the unmanned aerial vehicle with IMU and GPS information; the obtained position information of the unmanned aerial vehicle is used to perform region division in the CIM, and the model frame corresponding to the urban building image acquired by the unmanned aerial vehicle is quickly matched in the CIM according to the region in which the unmanned aerial vehicle is located.
Further, in the embodiment of the invention, point cloud registration is performed in the real-time activity area of the unmanned aerial vehicle located in the CIM model, using the key point three-dimensional point cloud, so as to obtain the model frame corresponding to the urban building image. The specific registration process is as follows:
1) From the three-dimensional point cloud data of the CIM model and of the urban building image, the key points of the CIM model and the key points of the urban building image are extracted according to the same key point selection standard.
2) The feature descriptors of the selected CIM model key points and of the urban building image key points are calculated respectively.
3) Combining the coordinate positions of the feature descriptors in the two data sets, and taking the similarity of features and positions between the CIM model and the urban building image as the basis, the correspondence between the CIM model and the urban building image is estimated, and corresponding point pairs are preliminarily obtained.
4) If the data set is noisy, the erroneous correspondence pairs that would adversely affect the registration are removed.
5) The rigid-body transformation of the building is estimated from the remaining correct correspondences, the corresponding rotation matrix and translation matrix are solved, and the registration process is completed.
It should be noted that (1) the point cloud registration process is divided into two stages, namely, coarse registration and fine registration.
Preferably, in the embodiment of the present invention, the coarse registration uses a registration algorithm based on feature matching, for example, an AO algorithm based on point SHOT features.
Preferably, in the embodiment of the present invention, the ICP algorithm is used for the fine registration.
(2) By using the key point matching, the matching accuracy can be improved, and the calculated amount in the point cloud registration process can be effectively reduced.
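For the final stage of the registration, namely solving the rotation matrix and translation matrix from the remaining correct correspondences, a minimal sketch of the standard SVD-based closed-form solution is given below; it assumes the correspondence pairs have already been established by the coarse (SHOT-feature-based) and fine (ICP) stages, and it stands in only for the pose estimation step, not for the full registration pipeline of the embodiment.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Rotation R and translation t such that dst ≈ R @ src + t for matched 3-D key points.

    src : (K, 3) key-point coordinates from the urban building image point cloud.
    dst : (K, 3) corresponding key-point coordinates from the CIM model frame.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```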
Further, in step S003, the embodiment of the present invention performs offset detection and offset compensation on the model frame and the city building image obtained in the CIM through point cloud registration, where the specific processes of the offset detection and the offset compensation are as follows:
1) dividing the city building image and the model frame which are collected in real time into N macro block images respectively, and comparing the city building image with each corresponding macro block image in the model frame.
2) Calculating the area intersection ratio of the key point Gaussian hot spots between each macro block image in the urban building image and the macro block image at the corresponding position in the model frame; when the area intersection ratio is smaller than an area threshold, the key points are considered to have a large offset, and offset compensation needs to be performed on the macro block image of the urban building image; otherwise, when the area intersection ratio is greater than or equal to the area threshold, the macro block image of the urban building image is considered to have no offset or only a small offset, and the macro block image of the urban building image is stored in the image buffer area.
3) And for the macro block image in the urban building image with the area intersection ratio smaller than the area threshold, calculating an offset vector according to the three-dimensional coordinates of the key points in the macro block image and the three-dimensional coordinates of the key points in the corresponding macro block image in the model frame, and performing offset compensation on the macro block image in the urban building image by using the offset vector to obtain an initial macro block compensation image.
Preferably, the embodiment of the invention divides the urban building image and the model frame into N standard macro block images.
Preferably, the area threshold value in the embodiment of the present invention is 70%.
It should be noted that by using the area intersection ratio of the key point Gaussian hot spots, whether a key point has a large offset can be judged qualitatively, without having to compute the offset vectors of all key points explicitly, so that the amount of calculation can be effectively reduced.
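A compact sketch of this per-macro-block decision is given below. The 70% area threshold is the preferred value stated above, while the cut-off level used to turn a Gaussian hotspot into a binary area, and the function names, are assumptions for illustration only.

```python
import numpy as np

AREA_THRESHOLD = 0.7   # the 70 % area threshold preferred in this embodiment

def hotspot_area_iou(hotspot_img, hotspot_model, level=0.5):
    """Area intersection-over-union of the key point Gaussian hotspots of one macro block
    in the urban building image and of the corresponding macro block in the model frame.
    `level` is an assumed cut-off that turns a Gaussian hotspot into a binary area."""
    a = hotspot_img >= level
    b = hotspot_model >= level
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def route_macro_block(hotspot_img, hotspot_model):
    """Per macro block decision of step S003: offset-compensate it, or keep it
    in the image buffer area as reference material."""
    if hotspot_area_iou(hotspot_img, hotspot_model) < AREA_THRESHOLD:
        return "offset_compensate"          # key points shifted noticeably
    return "store_in_image_buffer"          # no offset or only a small offset
```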
Further, since the offset compensation is only a relatively rough compensation method and continuous change of the city building image in the direction of the continuous motion vector is not considered in the offset compensation process, in step S004, the embodiment of the present invention performs compensation optimization by obtaining the motion vector of the initial macroblock compensation image. The specific process of obtaining the motion vector of the initial macro block compensation image is as follows:
1) It is assumed that all pixels within a non-overlapping macro block image divided from the urban building image have the same displacement. For each initial macro block compensation image, i.e., each macro block image after offset compensation, the macro block image in the reference frame that is most similar to the current initial macro block compensation image is found according to a certain matching criterion; this is the reference macro block image, and the relative displacement between the reference macro block image and the current initial macro block compensation image is the motion vector.
2) The reference frame is composed of the macro block images in the urban building image whose area intersection ratio is greater than or equal to the area threshold during offset detection, and these are stored in the image buffer area; that is, of one frame of urban building image divided into N macro block images, only the macro block images whose area intersection ratio is greater than or equal to the area threshold are stored in the image buffer area as the reference macro block images of the reference frame.
3) The matching criterion between the macro block images in the embodiment of the invention adopts the absolute error sum criterion, and the absolute error sum criterion has the advantages of no need of multiplication and simple and convenient realization. In addition, in the embodiment of the present invention, a Diamond Search method (DS) is adopted as a Search algorithm for finding an optimal reference macro block image, and the Search algorithm has the characteristics of simplicity, robustness and high efficiency, wherein a main Search process of the DS algorithm is as follows:
a. In general, the motion vector is highly concentrated near the center of the search window; this phenomenon is particularly obvious for video sequences in which objects move slowly, mainly because stationary blocks and slowly moving blocks are dominant, which makes the method very suitable for the scene of aerial surveying and mapping by an unmanned aerial vehicle. This center-biased property of the motion vector suggests that the best reference macro block image can be found quickly by searching only the points near the center of the window, without searching all points in the window.
b. In the initial stage, the large diamond search template is used repeatedly until the best reference macro block image falls on the center of the large diamond. Because the step length of the large diamond search template is large, the search range is wide, coarse positioning can be achieved, and the search is not trapped in a local minimum.
c. After the coarse positioning is finished, the optimal reference macro block image is considered to lie in the diamond region surrounded by the 8 points around the large diamond template, and the small diamond search template is then used to achieve accurate positioning of the optimal reference macro block image without large fluctuation, thereby improving the motion estimation precision.
4) Through step 3), the two most similar macro block images can be matched, that is, the reference macro block image of the current initial macro block compensation image can be found. In the embodiment of the invention, the reference macro block image is searched according to a near-to-far criterion, and the search window is set to M frames. When a reference macro block image can be matched within the M reference frames nearest to the current initial macro block compensation image, the motion vector between the macro blocks is estimated by the EPZS enhanced prediction region search algorithm to obtain the motion vector of the current initial macro block compensation image; when no reference macro block image can be matched in the nearest M reference frames, according to the principle that the motions of two adjacent macro blocks belonging to the same moving object are highly correlated, the adjacent macro block images adjacent to the current initial macro block compensation image are found, their reference macro blocks are matched in the nearest M reference frames, the motion vector estimates of the adjacent macro block images are obtained by the EPZS enhanced prediction region search algorithm, and the motion vector of the current initial macro block compensation image is then obtained through the motion vector estimation formula. The motion vector estimation formula is:
$V = \sum_{i=1}^{n} w_i V_i$

wherein $V$ is the motion vector of the current initial macro block compensation image, $w_i$ is the weight of the motion vector of the $i$-th adjacent macro block image, $V_i$ is the motion vector of the $i$-th adjacent macro block image, and $n$ is the number of adjacent macro block images of the current initial macro block compensation image.
It should be noted that the weights are obtained according to the number of adjacent macroblock images, and as an example, when a macroblock image has three adjacent macroblock images adjacent to it, then the weight of the motion vector of each adjacent macroblock image is 1/3 respectively.
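The estimation formula above therefore reduces to a simple weighted average of the neighbours' motion vectors; a minimal sketch with the equal weights of the 1/3 example is given below.

```python
import numpy as np

def motion_vector_from_neighbours(neighbour_vectors):
    """Motion vector of the current initial macro block compensation image, obtained as
    the weighted average of the motion vectors of its adjacent macro block images.

    neighbour_vectors : list of (dx, dy) motion vectors of the adjacent macro block
                        images found in the image buffer area.
    """
    n = len(neighbour_vectors)
    weights = np.full(n, 1.0 / n)      # equal weights, e.g. 1/3 for three neighbours
    return weights @ np.asarray(neighbour_vectors, dtype=np.float32)
```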
The EPZS enhanced prediction region search algorithm is a search algorithm for integer-pixel motion estimation that adopts a prediction method with higher correlation, that is, it uses as much existing information as possible to predict the motion vector. The EPZS enhanced prediction region search algorithm mainly comprises the following steps:
a. Motion vector prediction selection, that is, selecting the search starting point according to the correlation between the time domain and the space domain.
b. Adaptive early termination. Since the matching errors of adjacent macro blocks are correlated, termination conditions are introduced to speed up the estimation of the motion vector.
c. Motion vector correction. If the condition for adaptive early termination is not satisfied, a further search is needed at the location where the matching error is smallest.
Preferably, in the embodiment of the present invention, a value of M in the nearest M frame reference frame is 10.
Preferably, the EPZS enhanced prediction region search algorithm in the embodiment of the present invention is a minimum diamond search algorithm.
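To make the matching criterion and search strategy of this step concrete, the sketch below pairs the sum-of-absolute-differences criterion with the two-stage (large diamond, then small diamond) search described above. The block size, the iteration cap, and the boundary handling are assumptions, and a production implementation would add the EPZS-style predictors and adaptive early termination discussed in this step.

```python
import numpy as np

LARGE_DIAMOND = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]
SMALL_DIAMOND = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]

def sad(block, ref, y, x):
    """Sum of absolute differences between the macro block and the reference patch at (y, x)."""
    h, w = block.shape
    if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
        return np.inf                                  # candidate lies outside the reference frame
    patch = ref[y:y + h, x:x + w].astype(np.int32)
    return np.abs(block.astype(np.int32) - patch).sum()

def diamond_search(block, ref, y0, x0, max_iter=32):
    """Two-stage diamond search for the best-matching reference macro block around (y0, x0)."""
    cy, cx = y0, x0
    for _ in range(max_iter):                          # coarse stage: large diamond template
        costs = [(sad(block, ref, cy + dy, cx + dx), cy + dy, cx + dx) for dy, dx in LARGE_DIAMOND]
        best = min(costs)
        if (best[1], best[2]) == (cy, cx):             # best point sits at the template centre
            break
        cy, cx = best[1], best[2]
    # fine stage: small diamond template around the coarse optimum
    best = min((sad(block, ref, cy + dy, cx + dx), cy + dy, cx + dx) for dy, dx in SMALL_DIAMOND)
    return best[1] - y0, best[2] - x0                  # motion vector (dy, dx)
```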
Further, in step S005, compensation optimization is performed on the initial macroblock compensation image obtained after the offset compensation according to the obtained motion vector of each macroblock image in the real-time acquired urban building image.
Further, when the unmanned aerial vehicle performs surveying and mapping over a long period, images of new buildings may be acquired. When the CIM model does not contain information about the new building, motion offset compensation cannot be performed according to a model frame, and the information of the new building image cannot be predicted from previously captured images. Therefore, the embodiment of the invention performs reverse prediction compensation on the new building image appearing in the collected urban building images by setting an image buffer area. The specific compensation process is as follows:
1) An image buffer area is set. Assume that the image buffer area holds a total of 2A+1 frames of city building images as an image compensation window (the previous A frames, the current frame, and the subsequent A frames), where the current frame, i.e., the middle frame of the window, is the frame in which a new building image first appears. After being processed, the frame is sent directly to the display picture for playing; the window then slides backward by one frame, the next frame is taken as the current frame for processing and compensation, and so on.
2) In the image compensation window, the previous A frames of the current frame are taken as reference frames, and compensation optimization is performed by combining the offset compensation and motion vector prediction of step S003; the subsequent A frames of the current frame, which contain city building images of the new building, are taken as reference frames for reverse compensation, so as to perform reverse prediction compensation on the current frame. The underlying principle is as follows: taking the frame rate of hundreds of frames per second of current mainstream cameras as an example, the acquired image frames are not transmitted to the display screen immediately and in real time, but are displayed with a delay of A frames. Because the time corresponding to A frames is very short and the real-time requirement of urban surveying and mapping is not very high, the delay of A frames does not bring a noticeable lag to the human eye during actual playback, i.e., the delay of A frames is negligible.
3) When the current frame is processed, the current frame is compensated according to the forward motion vector obtained from the previous A frames of the current frame and the backward motion vector obtained from the subsequent A frames of the current frame.
4) Because the CIM model contains no model frame for the new building, it cannot be judged from the model frame whether the jelly effect occurs in the city building images containing the new building in the subsequent A frames; the detection is therefore carried out according to the correlation of the motion of adjacent macro blocks, and the specific process is as follows:
a. and detecting adjacent macro block images of the macro block image where the new building image is located, and if one of the adjacent macro block images in the last A +1 frame from the current frame is smaller than the area threshold, considering that the macro block image where the new building image is located has a jelly effect, otherwise, indicating that the macro block image is a stable macro block image.
b. If a stable macro block image exists in the last A +1 frame, the stable macro block image is used as a reference macro block image, the motion vector of the current frame is reversely obtained by utilizing an EPZS enhanced prediction region search algorithm, and compensation is carried out according to the motion vector so as to better eliminate the jelly effect; if no stable macro block exists in the last A +1 frame, finding an adjacent macro block image adjacent to the macro block image of the new building image in the current frame, obtaining a motion vector of the adjacent macro block image of the new building image in the last A frame by using an EPZS enhanced prediction region search algorithm, compensating the last A frame according to the obtained motion vector, reversely calculating the motion vector of the current frame through the compensated last A frame, and further compensating the current frame.
Preferably, in the embodiment of the present invention, the value of the frame number A is 10.
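A minimal sketch of the delayed-display compensation window follows. It assumes the window holds the previous A frames, the current frame, and the subsequent A frames (2A+1 frames in total), and it averages the forward and backward motion vectors, which is one simple way of combining them; the embodiment only states that the two vectors are combined without prescribing the exact rule.

```python
from collections import deque

A = 10  # preferred number of frames before and after the current frame

class CompensationWindow:
    """Sliding window of 2A+1 frames used to compensate macro blocks of a newly
    appeared building that has no model frame in the CIM."""

    def __init__(self):
        self.frames = deque(maxlen=2 * A + 1)

    def push(self, frame):
        """Append a newly acquired frame; return the frame to process once the
        window is full (i.e. the middle frame, displayed with a delay of A frames)."""
        self.frames.append(frame)
        return self.frames[A] if len(self.frames) == self.frames.maxlen else None

def combine_motion_vectors(forward_mv, backward_mv):
    """Combine the forward and backward motion vectors of the current frame.
    A plain average is assumed; the patent only states that the two are combined."""
    return tuple(0.5 * (f + b) for f, b in zip(forward_mv, backward_mv))
```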
In summary, the embodiment of the present invention provides an artificial intelligence-based rolling shutter imaging method for an urban surveying and mapping unmanned aerial vehicle, which matches a model frame of the urban building image in the CIM model according to the key points of the urban building image acquired in real time; the urban building image and the model frame are correspondingly divided into N standard macro block images, offset compensation is performed by comparing each corresponding macro block image to obtain an initial macro block compensation image, and compensation optimization is further performed on the initial macro block compensation image according to the motion vector of each macro block image; meanwhile, image compensation is performed on a newly appeared building image by using the motion vectors of the first 10 frames and the last 10 frames of that frame image. By matching the model frame in the CIM, the matching accuracy can be improved, and the jelly effect can be well eliminated by performing offset compensation and motion vector compensation on each macro block image divided from the urban building image, so that the resulting image quality is better.
Based on the same inventive concept as the method, the embodiment of the invention provides an artificial intelligence-based rolling shutter imaging system of an unmanned aerial vehicle for urban surveying and mapping.
Referring to fig. 3, an embodiment of the present invention provides an artificial intelligence-based rolling shutter imaging system for an urban surveying and mapping unmanned aerial vehicle, where the system includes: a keypoint detection unit 10, an image matching unit 20, an offset compensation unit 30, a motion vector prediction unit 40, and a compensation optimization unit 50.
The key point detection unit 10 is configured to acquire a city building image and a depth image of the city building image by using an image acquisition device, and to acquire the key points of a building rigid body from the city building image by using a key point detection network, where the key points are the corner points and anchor frames of the building rigid body; the image matching unit 20 is configured to obtain a key point three-dimensional point cloud by combining the key points and the depth image, and to match the key point three-dimensional point cloud with the corresponding model frame of the city building image in the CIM model; the offset compensation unit 30 is used to correspondingly divide the city building image and the model frame into N macro block images and to calculate the area intersection ratio of the key point Gaussian hot spots between each macro block image in the urban building image and the macro block image at the corresponding position in the model frame, wherein the key point Gaussian hot spots are obtained by processing the key points with a Gaussian kernel; when the area intersection ratio is smaller than the area threshold, offset compensation is performed on the macro block image of the urban building image to obtain an initial macro block compensation image; otherwise, the macro block image of the city building image is stored in the image buffer area; the motion vector prediction unit 40 is used to obtain the motion vector of the initial macro block compensation image when the initial macro block compensation image can be matched with a corresponding reference macro block image within the nearest M frames; otherwise, when the initial macro block compensation image cannot be matched with a corresponding reference macro block image within the nearest M frames, an adjacent macro block image adjacent to the initial macro block compensation image is searched for in the image buffer area, and the motion vector of the adjacent macro block image is calculated to obtain the motion vector of the initial macro block compensation image; the compensation optimization unit 50 is configured to correspondingly perform compensation optimization on the initial macro block compensation image by using the motion vector.
Further, referring to fig. 4, when the model frame of a new building image cannot be matched in the CIM model by the image matching unit 20, the compensation of the new building image is performed by a forward vector acquisition unit 21, a reverse vector acquisition unit 22, and an image compensation unit 23:
The forward vector acquisition unit 21 is used for predicting a forward motion vector of the current frame by using the macro block images of the previous A frames of the current frame in which the new building image appears; the reverse vector acquisition unit 22 is used for predicting a backward motion vector of the current frame by using the macro block images of the subsequent A frames of the current frame; the image compensation unit 23 is used for performing image compensation on the current frame by combining the forward motion vector and the backward motion vector.
Further, the method of offset compensation in the offset compensation unit 30 includes:
and obtaining an offset vector of the macro block image according to the three-dimensional coordinates of the key points in the macro block image in the city building image and the three-dimensional coordinates of the key points in the corresponding macro block image in the model frame, and performing offset compensation on the macro block image by using the offset vector.
Further, referring to fig. 5 and 6, the motion vector prediction unit 40 includes a first motion vector detection unit 41, and the first motion vector detection unit 41 is used for acquiring the motion vector of the initial macro block compensation image through a search algorithm of motion estimation when the initial macro block compensation image can be matched with the corresponding reference macro block image within the nearest M frames.
The motion vector prediction unit 40 further includes a second motion vector detection unit 42 for obtaining the motion vector of the initial macro block compensation image when the initial macro block compensation image cannot be matched with the corresponding reference macro block image within the nearest M frames, and the second motion vector detection unit 42 further includes a vector analysis unit 421 and a vector processing unit 422:
the vector analysis unit 421 is configured to obtain a motion vector of each adjacent macroblock image through a search algorithm of motion estimation; the vector processing unit 422 is configured to calculate an average motion vector of the neighboring macroblock image, and use the average motion vector as the motion vector of the initial macroblock compensation image.
In summary, the embodiment of the present invention provides an artificial intelligence-based rolling shutter imaging system for an unmanned aerial vehicle for urban surveying and mapping, where the system inputs the acquired images of the urban buildings into a key point detection unit 10 to obtain key points of the urban buildings; matching the model frame of the city building image in the image matching unit 20 according to the key points of the city building; offset compensation is carried out on the urban building image and the model frame by comparing each corresponding macro block image through an offset compensation unit 30 so as to obtain an initial macro block compensation image; further performing compensation optimization on the initial macro block compensation image through a motion vector prediction unit 40 and a compensation optimization unit 50; and at the same time, the newly appeared building image is image-compensated by the forward vector acquisition unit 21, the reverse vector acquisition unit 22 and the image compensation unit 23. By matching the model frame in the CIM model, the matching accuracy can be improved, and the jelly effect can be well eliminated by performing offset compensation and motion vector compensation on each macro block image divided by the urban building image, so that the image quality of the image is better.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. An artificial intelligence-based rolling shutter imaging method for an unmanned aerial vehicle for urban surveying and mapping is characterized by comprising the following steps:
acquiring a city building image and a depth image of the city building image by using image acquisition equipment, and acquiring key points of a building rigid body from the city building image by using a key point detection network, wherein the key points are each angular point and an anchor frame of the building rigid body;
combining the key points and the depth image to obtain a key point three-dimensional point cloud, and matching the key point three-dimensional point cloud to a corresponding model frame of the urban building image in a CIM (city information model);
correspondingly dividing the city building image and the model frame into N macro block images, and calculating the area intersection ratio of key point Gaussian hot spots between each macro block image in the urban building image and the macro block image at the corresponding position in the model frame, wherein the key point Gaussian hot spots are obtained by processing the key points with a Gaussian kernel; when the area intersection ratio is smaller than an area threshold, performing offset compensation on the macro block image of the urban building image to obtain an initial macro block compensation image; otherwise, storing the macro block image of the city building image in an image buffer area;
when the initial macro block compensation image can be matched with a corresponding reference macro block image within the nearest M frames, acquiring a motion vector of the initial macro block compensation image, wherein the reference macro block image is the macro block image in the reference frame that is most similar to the current initial macro block compensation image, and the reference frame is composed of macro block images in the urban building image whose area intersection ratio is greater than or equal to the area threshold; otherwise, when the initial macro block compensation image cannot be matched with the corresponding reference macro block image within the nearest M frames, searching the image buffer area for an adjacent macro block image adjacent to the initial macro block compensation image, and calculating the motion vector of the adjacent macro block image to obtain the motion vector of the initial macro block compensation image;
and correspondingly performing compensation optimization on the initial macro block compensation image by using the motion vector.
2. The method of claim 1, wherein when the model frame of a new building image cannot be matched in the CIM model, the compensating method for the new building image comprises:
predicting a forward motion vector of the current frame by using the macro block images of the previous A frames of the current frame in which the new building image appears;
predicting a backward motion vector of the current frame by using the macro block images of the subsequent A frames of the current frame;
and performing image compensation on the current frame by combining the forward motion vector and the backward motion vector.
3. The method of claim 1, wherein the method of offset compensation comprises:
and obtaining an offset vector of the macro block image according to the three-dimensional coordinates of the key points in the macro block image in the urban building image and the three-dimensional coordinates of the key points in the corresponding macro block image in the model frame, and performing offset compensation on the macro block image by using the offset vector.
4. The method of claim 1, wherein, when the initial macro block compensation image of the nearest preset number of frames can be matched with the corresponding reference macro block image, the motion vector of the initial macro block compensation image is acquired through a search algorithm of motion estimation.
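Claim 4 only specifies "a search algorithm of motion estimation" without naming one. A common choice is exhaustive block matching over a small search window with a sum-of-absolute-differences cost; the sketch below is one such algorithm under that assumption, not necessarily the search the patent uses.

```python
import numpy as np

def block_match_mv(block, reference_frame, top_left, search_range=8):
    """Exhaustive SAD block matching: return the (dx, dy) inside the search
    window that best aligns `block` (located at `top_left` = (row, col))
    with the reference frame."""
    h, w = block.shape[:2]
    y0, x0 = top_left
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = y0 + dy, x0 + dx
            if (y < 0 or x < 0 or
                    y + h > reference_frame.shape[0] or
                    x + w > reference_frame.shape[1]):
                continue
            cand = reference_frame[y:y + h, x:x + w]
            cost = np.abs(cand.astype(np.float32) - block.astype(np.float32)).sum()
            if cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv
```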
5. The method of claim 1, wherein, when none of the initial macro block compensation images of the nearest preset number of frames can be matched with the corresponding reference macro block image, the method for obtaining the motion vector of the initial macro block compensation image comprises:
obtaining the motion vector of each adjacent macro block image through a search algorithm of motion estimation;
and calculating the average motion vector of the adjacent macro block images, and taking the average motion vector as the motion vector of the initial macro block compensation image.
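The fallback of claim 5 reduces to averaging the neighbours' motion vectors. A short sketch, assuming the neighbouring blocks' vectors have already been estimated (for example with the block-matching helper above) and are fetched from the image buffer area:

```python
import numpy as np

def fallback_mv_from_neighbours(neighbour_mvs):
    """Average the (dx, dy) motion vectors of the adjacent macro blocks and
    reuse the result for the unmatched initial macro block compensation image."""
    mvs = np.asarray(neighbour_mvs, dtype=np.float32)   # shape (N, 2)
    return tuple(mvs.mean(axis=0))

# e.g. neighbours to the left, above and right of the unmatched block:
# fallback_mv_from_neighbours([(2, -1), (3, 0), (2, 1)])  ->  (2.333..., 0.0)
```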
6. An artificial intelligence-based rolling shutter imaging system for an urban surveying and mapping unmanned aerial vehicle, characterized in that the system comprises:
a key point detection unit, configured to acquire a city building image and a depth image of the city building image by using image acquisition equipment, and acquire key points of a building rigid body from the city building image by using a key point detection network, wherein the key points are the corner points and the anchor frame of the building rigid body;
an image matching unit, configured to combine the key points and the depth image to obtain a key point three-dimensional point cloud, and match the key point three-dimensional point cloud to the corresponding model frame of the city building image in a CIM (city information model);
an offset compensation unit, configured to correspondingly divide the city building image and the model frame into a preset number of macro block images, and calculate the area intersection ratio of the key point Gaussian hot spots between each macro block image in the city building image and the macro block image at the corresponding position in the model frame, wherein the key point Gaussian hot spots are obtained by processing the key points with a Gaussian kernel; when the area intersection ratio is smaller than an area threshold, offset compensation is performed on the macro block image of the city building image to obtain an initial macro block compensation image; otherwise, the macro block image of the city building image is stored in an image buffer area;
a motion vector prediction unit, configured to, when the initial macro block compensation image of the nearest preset number of frames can be matched with a corresponding reference macro block image, acquire a motion vector of the initial macro block compensation image, wherein the reference macro block image is the macro block image in a reference frame that is most similar to the current initial macro block compensation image, and the reference frame is composed of the macro block images in the city building image whose area intersection ratio is greater than or equal to the area threshold; otherwise, when none of the initial macro block compensation images of the nearest preset number of frames can be matched with a corresponding reference macro block image, search the image buffer area for adjacent macro block images adjacent to the initial macro block compensation image, and calculate the motion vectors of the adjacent macro block images to obtain the motion vector of the initial macro block compensation image;
and a compensation optimization unit, configured to correspondingly perform compensation optimization on the initial macro block compensation image by using the motion vector.
7. The system of claim 6, wherein, when the model frame of a new building image cannot be matched in the CIM model by the image matching unit, the new building image is compensated by:
a forward vector obtaining unit, configured to predict a forward motion vector of the current frame by using the macro block images of a preset number of frames preceding the current frame of the new building image;
a backward vector obtaining unit, configured to predict a backward motion vector of the current frame by using the macro block images of a preset number of frames following the current frame;
and an image compensation unit, configured to perform image compensation on the current frame by combining the forward motion vector and the backward motion vector.
8. The system of claim 6, wherein the method of offset compensation in the offset compensation unit comprises:
obtaining an offset vector of the macro block image according to the three-dimensional coordinates of the key points in the macro block image of the city building image and the three-dimensional coordinates of the key points in the corresponding macro block image of the model frame, and performing offset compensation on the macro block image by using the offset vector.
9. The system of claim 6, wherein the motion vector prediction unit comprises a first motion vector detection unit, configured to, when the initial macro block compensation image of the nearest preset number of frames can be matched with the corresponding reference macro block image, acquire the motion vector of the initial macro block compensation image through a search algorithm of motion estimation.
10. The system of claim 6, wherein the motion vector prediction unit further comprises a second motion vector detection unit, configured to, when the initial macro block compensation image of the nearest preset number of frames cannot be matched with the corresponding reference macro block image, acquire the motion vector of the initial macro block compensation image, and the second motion vector detection unit further comprises:
a vector analysis unit, configured to obtain the motion vector of each adjacent macro block image through a search algorithm of motion estimation;
and a vector processing unit, configured to calculate the average motion vector of the adjacent macro block images and take the average motion vector as the motion vector of the initial macro block compensation image.
CN202110070839.2A 2021-01-19 2021-01-19 Artificial intelligence-based rolling shutter imaging method and system for urban surveying and mapping unmanned aerial vehicle Active CN112906475B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110070839.2A CN112906475B (en) 2021-01-19 2021-01-19 Artificial intelligence-based rolling shutter imaging method and system for urban surveying and mapping unmanned aerial vehicle


Publications (2)

Publication Number Publication Date
CN112906475A CN112906475A (en) 2021-06-04
CN112906475B true CN112906475B (en) 2022-08-02

Family

ID=76116006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110070839.2A Active CN112906475B (en) 2021-01-19 2021-01-19 Artificial intelligence-based rolling shutter imaging method and system for urban surveying and mapping unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN112906475B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470093B (en) * 2021-09-01 2021-11-26 启东市德立神起重运输机械有限公司 Video jelly effect detection method, device and equipment based on aerial image processing
CN115379123B (en) * 2022-10-26 2023-01-31 山东华尚电气有限公司 Transformer fault detection method for unmanned aerial vehicle inspection
CN115876785B (en) * 2023-02-02 2023-05-26 苏州誉阵自动化科技有限公司 Visual identification system for product defect detection

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102098440A (en) * 2010-12-16 2011-06-15 北京交通大学 Electronic image stabilizing method and electronic image stabilizing system aiming at moving object detection under camera shake
CN103096083A (en) * 2013-01-23 2013-05-08 北京京东方光电科技有限公司 Method and device of moving image compensation

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5661524A (en) * 1996-03-08 1997-08-26 International Business Machines Corporation Method and apparatus for motion estimation using trajectory in a digital video encoder
US8130277B2 (en) * 2008-02-20 2012-03-06 Aricent Group Method and system for intelligent and efficient camera motion estimation for video stabilization
CN101483713A (en) * 2009-01-16 2009-07-15 西安电子科技大学 Deinterleaving method based on moving target
CN101945284B (en) * 2010-09-29 2013-01-02 无锡中星微电子有限公司 Motion estimation device and method
CN102523419A (en) * 2011-12-31 2012-06-27 上海大学 Digital video signal conversion method based on motion compensation
CN102801995B (en) * 2012-06-25 2016-12-21 北京大学深圳研究生院 A kind of multi-view video motion based on template matching and disparity vector prediction method
CN102917217B (en) * 2012-10-18 2015-01-28 北京航空航天大学 Movable background video object extraction method based on pentagonal search and three-frame background alignment
CN103581647B (en) * 2013-09-29 2017-01-04 北京航空航天大学 A kind of depth map sequence fractal coding based on color video motion vector


Also Published As

Publication number Publication date
CN112906475A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN112906475B (en) Artificial intelligence-based rolling shutter imaging method and system for urban surveying and mapping unmanned aerial vehicle
WO2017096949A1 (en) Method, control device, and system for tracking and photographing target
US10984583B2 (en) Reconstructing views of real world 3D scenes
CN102665041B (en) Process method, image processing circuit and the photographing unit of video data
US20090290809A1 (en) Image processing device, image processing method, and program
CN106210449B (en) Multi-information fusion frame rate up-conversion motion estimation method and system
CN111382613B (en) Image processing method, device, equipment and medium
WO2019221013A4 (en) Video stabilization method and apparatus and non-transitory computer-readable medium
CN106412441B (en) A kind of video stabilization control method and terminal
CN109376641B (en) Moving vehicle detection method based on unmanned aerial vehicle aerial video
CN110243390B (en) Pose determination method and device and odometer
CN101923717A (en) Method for accurately tracking characteristic points of quick movement target
CN114399539A (en) Method, apparatus and storage medium for detecting moving object
CN111598775B (en) Light field video time domain super-resolution reconstruction method based on LSTM network
CN113610865B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN114973399A (en) Human body continuous attitude estimation method based on key point motion estimation
CN112884803B (en) Real-time intelligent monitoring target detection method and device based on DSP
CN113763544A (en) Image determination method, image determination device, electronic equipment and computer-readable storage medium
CN113034398A (en) Method and system for eliminating jelly effect in urban surveying and mapping based on artificial intelligence
US20230290061A1 (en) Efficient texture mapping of a 3-d mesh
JP2017011397A (en) Image processing apparatus, image processing method, and program
CN106934818B (en) Hand motion tracking method and system
CN112378409B (en) Robot RGB-D SLAM method based on geometric and motion constraint in dynamic environment
WO2021049281A1 (en) Image processing device, head-mounted display, and spatial information acquisition method
JP2023553914A (en) Apparatus and method for processing depth maps

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230803

Address after: 450000, Floor 20, Unit 1, Building 7, Lin8, Lvyin Road, High tech Industrial Development Zone, Zhengzhou City, Henan Province

Patentee after: Zhengzhou Gaosun Information Technology Co.,Ltd.

Address before: Room 195, 18 / F, unit 2, building 6, 221 Jinsuo Road, high tech Industrial Development Zone, Zhengzhou City, Henan Province, 450000

Patentee before: Zhengzhou Kaiwen Electronic Technology Co.,Ltd.