CN116091998A - Image processing method, device, computer equipment and storage medium


Info

Publication number
CN116091998A
CN116091998A
Authority
CN
China
Prior art keywords
historical
current
matching
point
feature
Prior art date
Legal status
Pending
Application number
CN202211661662.4A
Other languages
Chinese (zh)
Inventor
黄炜昭
陈远
黄林超
吴新桥
吉丽娅
王晓蕾
张焕彬
辛拓
陈龙
余广译
Current Assignee
Shenzhen Power Supply Bureau Co Ltd
Original Assignee
Shenzhen Power Supply Bureau Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Power Supply Bureau Co Ltd filed Critical Shenzhen Power Supply Bureau Co Ltd
Priority to CN202211661662.4A
Publication of CN116091998A

Classifications

    • G06V 20/52 — Scenes; scene-specific elements; context or environment of the image; surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 10/75 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning; image or video pattern matching; organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; using context analysis; selection of dictionaries
    • G06V 20/64 — Scenes; scene-specific elements; type of objects; three-dimensional objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an image processing method, an image processing apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring a historical object image set and a current object image set corresponding to a target object; performing object point matching based on the historical object image set and the current object image set respectively, to obtain historical object matching points and current object matching points; when a historical object matching point and a current object matching point are the same object point, performing space pose transformation parameter calculation using the historical object matching points to obtain historical space pose transformation parameters, and performing three-dimensional space matching point calculation to obtain historical feature point three-dimensional coordinates; performing space pose transformation parameter calculation using the current object matching points to obtain current space pose transformation parameters, and performing three-dimensional space matching point calculation to obtain current feature point three-dimensional coordinates; and calculating the difference between the historical feature point three-dimensional coordinates and the current feature point three-dimensional coordinates to obtain three-dimensional offset information corresponding to the target object. The method improves the accuracy of deformation monitoring.

Description

Image processing method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of image processing technology, and in particular, to an image processing method, an image processing apparatus, a computer device, a storage medium, and a computer program product.
Background
With the advance of urbanization, deformation monitoring of a building or its surrounding environment is needed during engineering construction to ensure construction safety; for example, abnormal deformation of a target object, such as cracks or landslides, can be predicted by monitoring the target object for deformation. The conventional deformation monitoring method uses sensors to monitor the deformation of the target object.
However, the conventional deformation monitoring method requires direct contact with the target object and can only monitor local deformation of it. When a large building is monitored at a high sampling frequency, sensor-based monitoring is inaccurate, resulting in low deformation monitoring accuracy.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image processing method, an apparatus, a computer device, a computer-readable storage medium, and a computer program product that can improve the accuracy of deformation monitoring.
In a first aspect, the present application provides an image processing method. The method comprises the following steps:
acquiring a historical object image set and a current object image set corresponding to a target object, wherein the historical object image set is obtained by an image acquisition device capturing images of the target object from different acquisition positions at a historical moment, and the current object image set is obtained by the image acquisition device capturing images of the target object from different acquisition positions at the current moment;
performing object point matching based on the historical object image set and the current object image set respectively, to obtain historical object matching points corresponding to each historical object image in the historical object image set and current object matching points corresponding to each current object image in the current object image set;
performing the same object point matching using the historical object matching points corresponding to each historical object image and the current object matching points corresponding to each current object image;
when a historical object matching point and a current object matching point are the same object point, performing space pose transformation parameter calculation using the historical object matching points corresponding to each historical object image to obtain historical space pose transformation parameters, and performing three-dimensional space matching point calculation using the historical space pose transformation parameters and the historical object matching points corresponding to each historical object image to obtain historical feature point three-dimensional coordinates;
performing space pose transformation parameter calculation using the current object matching points corresponding to each current object image to obtain current space pose transformation parameters, and performing three-dimensional space matching point calculation using the current space pose transformation parameters and the current object matching points corresponding to each current object image to obtain current feature point three-dimensional coordinates; and
calculating the difference between the historical feature point three-dimensional coordinates and the current feature point three-dimensional coordinates to obtain three-dimensional offset information corresponding to the target object.
In a second aspect, the present application further provides an image processing apparatus. The apparatus comprises:
an acquisition module, used for acquiring a historical object image set and a current object image set corresponding to a target object, wherein the historical object image set is obtained by an image acquisition device capturing images of the target object from different acquisition positions at a historical moment, and the current object image set is obtained by the image acquisition device capturing images of the target object from different acquisition positions at the current moment;
a matching module, used for performing object point matching based on the historical object image set and the current object image set respectively, to obtain historical object matching points corresponding to each historical object image in the historical object image set and current object matching points corresponding to each current object image in the current object image set;
a matching point identification module, used for performing the same object point matching using the historical object matching points corresponding to each historical object image and the current object matching points corresponding to each current object image;
a historical coordinate transformation module, used for, when a historical object matching point and a current object matching point are the same object point, performing space pose transformation parameter calculation using the historical object matching points corresponding to each historical object image to obtain historical space pose transformation parameters, and performing three-dimensional space matching point calculation using the historical space pose transformation parameters and the historical object matching points corresponding to each historical object image to obtain historical feature point three-dimensional coordinates;
a current coordinate transformation module, used for performing space pose transformation parameter calculation using the current object matching points corresponding to each current object image to obtain current space pose transformation parameters, and performing three-dimensional space matching point calculation using the current space pose transformation parameters and the current object matching points corresponding to each current object image to obtain current feature point three-dimensional coordinates; and
an offset calculation module, used for calculating the difference between the historical feature point three-dimensional coordinates and the current feature point three-dimensional coordinates to obtain three-dimensional offset information corresponding to the target object.
In a third aspect, the present application further provides a computer device. The computer device comprises a memory storing a computer program and a processor that, when executing the computer program, implements the following steps:
acquiring a historical object image set and a current object image set corresponding to a target object, wherein the historical object image set is obtained by an image acquisition device capturing images of the target object from different acquisition positions at a historical moment, and the current object image set is obtained by the image acquisition device capturing images of the target object from different acquisition positions at the current moment;
performing object point matching based on the historical object image set and the current object image set respectively, to obtain historical object matching points corresponding to each historical object image in the historical object image set and current object matching points corresponding to each current object image in the current object image set;
performing the same object point matching using the historical object matching points corresponding to each historical object image and the current object matching points corresponding to each current object image;
when a historical object matching point and a current object matching point are the same object point, performing space pose transformation parameter calculation using the historical object matching points corresponding to each historical object image to obtain historical space pose transformation parameters, and performing three-dimensional space matching point calculation using the historical space pose transformation parameters and the historical object matching points corresponding to each historical object image to obtain historical feature point three-dimensional coordinates;
performing space pose transformation parameter calculation using the current object matching points corresponding to each current object image to obtain current space pose transformation parameters, and performing three-dimensional space matching point calculation using the current space pose transformation parameters and the current object matching points corresponding to each current object image to obtain current feature point three-dimensional coordinates; and
calculating the difference between the historical feature point three-dimensional coordinates and the current feature point three-dimensional coordinates to obtain three-dimensional offset information corresponding to the target object.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program that, when executed by a processor, implements the following steps:
acquiring a historical object image set and a current object image set corresponding to a target object, wherein the historical object image set is obtained by an image acquisition device capturing images of the target object from different acquisition positions at a historical moment, and the current object image set is obtained by the image acquisition device capturing images of the target object from different acquisition positions at the current moment;
performing object point matching based on the historical object image set and the current object image set respectively, to obtain historical object matching points corresponding to each historical object image in the historical object image set and current object matching points corresponding to each current object image in the current object image set;
performing the same object point matching using the historical object matching points corresponding to each historical object image and the current object matching points corresponding to each current object image;
when a historical object matching point and a current object matching point are the same object point, performing space pose transformation parameter calculation using the historical object matching points corresponding to each historical object image to obtain historical space pose transformation parameters, and performing three-dimensional space matching point calculation using the historical space pose transformation parameters and the historical object matching points corresponding to each historical object image to obtain historical feature point three-dimensional coordinates;
performing space pose transformation parameter calculation using the current object matching points corresponding to each current object image to obtain current space pose transformation parameters, and performing three-dimensional space matching point calculation using the current space pose transformation parameters and the current object matching points corresponding to each current object image to obtain current feature point three-dimensional coordinates; and
calculating the difference between the historical feature point three-dimensional coordinates and the current feature point three-dimensional coordinates to obtain three-dimensional offset information corresponding to the target object.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprises a computer program that, when executed by a processor, implements the following steps:
acquiring a historical object image set and a current object image set corresponding to a target object, wherein the historical object image set is obtained by an image acquisition device capturing images of the target object from different acquisition positions at a historical moment, and the current object image set is obtained by the image acquisition device capturing images of the target object from different acquisition positions at the current moment;
performing object point matching based on the historical object image set and the current object image set respectively, to obtain historical object matching points corresponding to each historical object image in the historical object image set and current object matching points corresponding to each current object image in the current object image set;
performing the same object point matching using the historical object matching points corresponding to each historical object image and the current object matching points corresponding to each current object image;
when a historical object matching point and a current object matching point are the same object point, performing space pose transformation parameter calculation using the historical object matching points corresponding to each historical object image to obtain historical space pose transformation parameters, and performing three-dimensional space matching point calculation using the historical space pose transformation parameters and the historical object matching points corresponding to each historical object image to obtain historical feature point three-dimensional coordinates;
performing space pose transformation parameter calculation using the current object matching points corresponding to each current object image to obtain current space pose transformation parameters, and performing three-dimensional space matching point calculation using the current space pose transformation parameters and the current object matching points corresponding to each current object image to obtain current feature point three-dimensional coordinates; and
calculating the difference between the historical feature point three-dimensional coordinates and the current feature point three-dimensional coordinates to obtain three-dimensional offset information corresponding to the target object.
With the image processing method, apparatus, computer device, storage medium, and computer program product described above, object point matching is performed on the historical object image set and the current object image set respectively, yielding historical object matching points corresponding to each historical object image in the historical object image set and current object matching points corresponding to each current object image in the current object image set. When a historical object matching point and a current object matching point are detected to be the same object point, the corresponding space pose transformation parameters are calculated from the historical object matching points and the current object matching points respectively, and the historical feature point three-dimensional coordinates and the current feature point three-dimensional coordinates are then calculated from those parameters. The historical and current object matching points corresponding to the same object point are thereby converted from image coordinates into three-dimensional coordinates, which improves the accuracy of the historical and current feature point three-dimensional coordinates. The three-dimensional offset information of the target object obtained from these coordinates is therefore more accurate, which in turn improves the accuracy of deformation monitoring of the target object.
Drawings
FIG. 1 is a diagram of an application environment for an image processing method in one embodiment;
FIG. 2 is a flow chart of an image processing method in one embodiment;
FIG. 3 is a flow chart of acquiring three-dimensional offset information in one embodiment;
FIG. 4 is a flowchart of a specific process for obtaining three-dimensional offset information according to one embodiment;
FIG. 5 is a schematic representation of triangulation in one embodiment;
FIG. 6 is a block diagram showing the structure of an image processing apparatus in one embodiment;
FIG. 7 is an internal block diagram of a computer device in one embodiment;
FIG. 8 is an internal structure diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The image processing method provided by the embodiments of the present application can be applied to the application environment shown in FIG. 1, in which the terminal 102 communicates with the server 104 via a network. A data storage system may store the data that the server 104 needs to process; it may be integrated on the server 104 or located on a cloud or other network server. The image acquisition devices 106 are placed on the left and right sides of the target object A and capture images of the target object A, which they transmit to the terminal 102 for processing. The terminal 102 acquires a historical object image set and a current object image set corresponding to the target object, wherein the historical object image set is obtained by the image acquisition devices 106 capturing images of the target object from different acquisition positions at a historical moment, and the current object image set is obtained by the image acquisition devices 106 capturing images of the target object from different acquisition positions at the current moment. The terminal 102 performs object point matching based on the historical object image set and the current object image set respectively, obtaining historical object matching points corresponding to each historical object image in the historical object image set and current object matching points corresponding to each current object image in the current object image set. The terminal 102 performs the same object point matching using the historical object matching points corresponding to each historical object image and the current object matching points corresponding to each current object image. When a historical object matching point and a current object matching point are the same object point, the terminal 102 performs space pose transformation parameter calculation using the historical object matching points corresponding to each historical object image to obtain historical space pose transformation parameters, and performs three-dimensional space matching point calculation using the historical space pose transformation parameters and the historical object matching points corresponding to each historical object image to obtain historical feature point three-dimensional coordinates. The terminal 102 performs space pose transformation parameter calculation using the current object matching points corresponding to each current object image to obtain current space pose transformation parameters, and performs three-dimensional space matching point calculation using the current space pose transformation parameters and the current object matching points corresponding to each current object image to obtain current feature point three-dimensional coordinates. The terminal 102 then calculates the difference between the historical feature point three-dimensional coordinates and the current feature point three-dimensional coordinates, obtaining the three-dimensional offset information corresponding to the target object. The terminal 102 may be, but is not limited to, a personal computer, a notebook computer, a smart phone, or a tablet computer. The server 104 may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
In an embodiment, as shown in FIG. 2, an image processing method is provided. The method is described here as applied to a terminal; it is understood that it may also be applied to a server, or to a system comprising the terminal and the server and implemented through interaction between them. In this embodiment, the method includes the following steps:
step 202, acquiring a historical object image set and a current object image set corresponding to a target object, wherein the historical object image set is obtained by image acquisition of the target object from different acquisition positions at a historical moment through an image acquisition device, and the current object image set is obtained by image acquisition of the target object from different acquisition positions at a current moment through the image acquisition device.
Step 204, performing object point matching based on the historical object image set and the current object image set respectively, to obtain historical object matching points corresponding to each historical object image in the historical object image set and current object matching points corresponding to each current object image in the current object image set.
The target object is an object to be monitored for deformation such as cracks or landslides, and may be, for example, a building or a mountain. The historical object matching points are object matching points between the historical object images that represent the same object point on the target object. The current object matching points are object matching points between the current object images that represent the same object point on the target object.
Specifically, the terminal acquires the historical object image set and the current object image set corresponding to the target object sent by the image acquisition device. The terminal performs feature extraction on each historical object image in the historical object image set and each current object image in the current object image set, obtaining historical feature object points corresponding to each historical object image and current feature object points corresponding to each current object image. The terminal then performs feature matching on the historical feature object points corresponding to each historical object image to obtain the historical object matching points corresponding to each historical object image, and performs feature matching on the current feature object points corresponding to each current object image to obtain the current object matching points corresponding to each current object image.
Step 206, performing the same object point matching using the historical object matching points corresponding to each historical object image and the current object matching points corresponding to each current object image.
Here, "the same object point" means that a historical object matching point and a current object matching point correspond to the same physical point on the target object.
Specifically, the terminal randomly selects a target historical object matching point from the historical object matching points corresponding to each historical object image, calculates the similarity between this point and the current object matching points corresponding to each current object image, obtains a similarity result for each current object matching point, and determines from these results the target current object matching point corresponding to the target historical object matching point; the target historical object matching point and the target current object matching point are the same object point. The terminal traverses all historical object matching points and current object matching points in this way, obtaining the historical and current object matching points corresponding to each same object point.
Step 208, when a historical object matching point and a current object matching point are the same object point, performing space pose transformation parameter calculation using the historical object matching points corresponding to each historical object image to obtain historical space pose transformation parameters, and performing three-dimensional space matching point calculation using the historical space pose transformation parameters and the historical object matching points corresponding to each historical object image to obtain historical feature point three-dimensional coordinates.
The space pose transformation parameters characterize the pose transformation relationship between the image acquisition devices that capture images of the target object from different positions. The historical space pose transformation parameters describe the mapping relationship between the historical object matching points corresponding to the historical object images. The historical feature point three-dimensional coordinates are the coordinates in three-dimensional space of the object points represented by the historical object matching points.
Specifically, when the terminal detects a historical object matching point and a current object matching point corresponding to the same object point, it obtains the image coordinates of the historical object matching points corresponding to each same object point; the historical object matching points may be a pair of matching points. The terminal performs space pose transformation parameter calculation from these image coordinates to obtain the historical space pose transformation parameters, and then performs three-dimensional space matching point calculation using the historical space pose transformation parameters and the historical object matching points corresponding to each same object point, obtaining the historical feature point three-dimensional coordinates corresponding to each same object point.
Step 210, performing space pose transformation parameter calculation using the current object matching points corresponding to each current object image to obtain current space pose transformation parameters, and performing three-dimensional space matching point calculation using the current space pose transformation parameters and the current object matching points corresponding to each current object image to obtain current feature point three-dimensional coordinates.
The current space pose transformation parameters describe the mapping relationship between the current object matching points corresponding to the current object images. The current feature point three-dimensional coordinates are the coordinates in three-dimensional space of the object points represented by the current object matching points.
Specifically, the terminal performs space pose transformation parameter calculation from the image coordinates of the current object matching points corresponding to each same object point to obtain the current space pose transformation parameters, and then performs three-dimensional space matching point calculation using the current space pose transformation parameters and the current object matching points corresponding to each same object point, obtaining the current feature point three-dimensional coordinates corresponding to each same object point.
Step 212, calculating the difference between the historical feature point three-dimensional coordinates and the current feature point three-dimensional coordinates to obtain the three-dimensional offset information corresponding to the target object.
The three-dimensional offset information describes the physical deformation of the target object.
Specifically, the terminal obtains the historical feature point three-dimensional coordinates and the current feature point three-dimensional coordinates corresponding to each same object point, calculates the coordinate difference for each same object point, and then averages the differences over all same object points to obtain the three-dimensional offset information corresponding to the target object.
In this image processing method, object point matching is performed on the historical object image set and the current object image set respectively, yielding historical object matching points corresponding to each historical object image in the historical object image set and current object matching points corresponding to each current object image in the current object image set. When a historical object matching point and a current object matching point are detected to be the same object point, the corresponding space pose transformation parameters are calculated from the historical object matching points and the current object matching points respectively, and the historical feature point three-dimensional coordinates and the current feature point three-dimensional coordinates are then calculated from those parameters. The historical and current object matching points corresponding to the same object point are thereby converted from image coordinates into three-dimensional coordinates, which improves the accuracy of the historical and current feature point three-dimensional coordinates. The three-dimensional offset information of the target object obtained from these coordinates is therefore more accurate, which in turn improves the accuracy of deformation monitoring of the target object.
In one embodiment, step 204, performing object point matching based on the historical object image set and the current object image set, to obtain a historical object matching point corresponding to each historical object image in the historical object image set and a current object matching point corresponding to each current object image in the current object image set, includes:
blurring each historical object image to obtain a blurred image corresponding to each historical object image, and determining extreme points based on the pixel values in the blurred image;
obtaining the object feature points corresponding to each historical object image based on the extreme points;
extracting the feature vectors corresponding to the object feature points, and calculating the vector distances between the object feature points corresponding to each historical object image using the feature vectors; and
performing feature matching on the object feature points corresponding to each historical object image based on the vector distances, to obtain the historical object matching points corresponding to each historical object image in the historical object image set.
A blurred image is a historical object image after blurring. The object feature points are feature points on the target object in a historical object image.
Specifically, the terminal blurs each historical object image to obtain a blurred image corresponding to each historical object image, detects the variation of the pixel values in the blurred image, and determines the extreme points as the points where the pixel values change most strongly; an extreme point may lie on an edge of the target object. The terminal takes the extreme points as object feature points, obtaining the object feature points corresponding to each historical object image; the number of object feature points may be at least three. The historical object images may be one image acquired from the left side of the target object and one image acquired from the right side of the target object.
The terminal then extracts the feature vectors of the object feature points and uses them to calculate the vector distances between the object feature points corresponding to the historical object images. The object feature points with the minimum vector distance in each pair of historical object images are taken as historical object matching points, yielding the historical object matching points corresponding to each historical object image.
In a specific embodiment, the server may extract the historical object matching points corresponding to each historical object image and the current object matching points corresponding to each current object image using the SIFT (Scale-Invariant Feature Transform) algorithm. The historical object images here are two images acquired from the left and right sides of the target object; the server takes the image acquired on the left as the reference image and the image acquired on the right as the target image.
The specific implementation steps of the SIFT algorithm comprise:
(1) Feature point detection. A Gaussian pyramid is constructed by convolving a Gaussian function with the image, and a difference-of-Gaussians (DOG) pyramid is obtained by differencing adjacent scales. In addition, keypoints are accurately located by fitting a three-dimensional quadratic function to refine their positions, achieving sub-pixel accuracy.
(2) Orientation assignment. After the feature points of each image have been detected in the previous step, a direction is calculated for each feature point, and subsequent operations are performed relative to it; all later steps operate on the position, scale, and angle of the feature points. The direction angle of a feature point is obtained from a statistical histogram of the gradient directions of the pixels in a neighborhood window around the feature point, as shown in formula (1):
$m(x,y) = \sqrt{\left(L(x+1,y)-L(x-1,y)\right)^{2} + \left(L(x,y+1)-L(x,y-1)\right)^{2}}$,
$\theta(x,y) = \arctan\dfrac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)}$  (1)
where L denotes the pixel gray value at the corresponding point, and m(x, y) and θ(x, y) are respectively the gradient magnitude and direction at pixel (x, y). After the gradients of the neighborhood pixels (more than 64 points) have been calculated, they are accumulated in a histogram whose horizontal axis is the gradient direction angle (0-360 degrees, one column per 10 degrees) and whose vertical axis is the Gaussian-weighted accumulated value of the corresponding gradient magnitudes.
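As an illustration of formula (1) and the histogram statistics above, the following is a minimal NumPy sketch, not part of the patent; the function name is invented for the example, and the Gaussian weighting of the votes is omitted for brevity:

```python
import numpy as np

def orientation_histogram(L: np.ndarray) -> np.ndarray:
    """L: 2D grayscale (Gaussian-smoothed) image patch around a keypoint."""
    # Central differences of formula (1): m(x,y) and theta(x,y).
    dx = L[1:-1, 2:] - L[1:-1, :-2]              # L(x+1,y) - L(x-1,y)
    dy = L[2:, 1:-1] - L[:-2, 1:-1]              # L(x,y+1) - L(x,y-1)
    m = np.sqrt(dx ** 2 + dy ** 2)               # gradient magnitude
    theta = np.degrees(np.arctan2(dy, dx)) % 360.0  # gradient direction in [0, 360)

    # 36 columns of 10 degrees each; every pixel votes with its magnitude
    # (a Gaussian weight centred on the keypoint would multiply m here).
    hist, _ = np.histogram(theta, bins=36, range=(0.0, 360.0), weights=m)
    return hist
```

The peak column of the returned histogram then gives the main direction assigned to the feature point.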
(3) Descriptor generation. After the main direction and magnitude of a feature point are obtained, the feature point is described in preparation for matching between points. The coordinate axes are rotated to the main direction of the feature point, and the feature point is described with the main direction as the zero direction, which gives the descriptor rotation invariance. To enhance the robustness of matching, each feature point can be described with 4×4 = 16 seed points; since each seed point carries 8 direction vectors, each feature point yields a 128-dimensional vector. These 128 values are the SIFT feature point descriptor, and the descriptor is unaffected by geometric factors such as scale change and rotation. Finally, the SIFT feature vector is normalized in length to remove the influence of illumination variation.
(4) Feature point matching. After the feature point descriptors of the two images have been obtained, the Euclidean distance between the feature vectors of two feature points is used as the similarity measure between feature points in the two images. The Euclidean distance is calculated as shown in formula (2):
$d = \sqrt{\sum_{i}\left(x_{i}-y_{i}\right)^{2}}$  (2)

where $x_{i}$ and $y_{i}$ are the components of the feature vectors of the feature points to be matched in the two images.
The server selects a feature point from the reference image and finds the two feature points in the target image with the shortest Euclidean distances. If the ratio of the nearest distance to the next-nearest distance is smaller than a preset distance threshold, the nearest point is regarded as a matching point, and the point in the reference image together with the matching point in the target image forms a matching point pair. The server traverses the feature points in the reference image and the target image to determine all matching point pairs, sorts the pairs by Euclidean distance, and removes the pairs with larger distances, obtaining the historical object matching points corresponding to each historical object image.
Repeating the same logic, the server obtains the current object matching points corresponding to each current object image.
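A sketch of steps (1)-(4) using OpenCV's SIFT implementation is given below. It is an illustration under assumptions, not the patent's own implementation: the 0.75 ratio and the 80% retention rate stand in for the unspecified thresholds, and the function and variable names are invented for the example.

```python
import cv2

def match_object_points(reference_path: str, target_path: str):
    # Load the left-side (reference) and right-side (target) acquisitions.
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    tgt = cv2.imread(target_path, cv2.IMREAD_GRAYSCALE)

    # Steps (1)-(3): DOG keypoint detection, orientation assignment and
    # 128-dimensional descriptor generation, all handled inside SIFT.
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref, None)
    kp_tgt, des_tgt = sift.detectAndCompute(tgt, None)

    # Step (4): for each reference descriptor take the two nearest target
    # descriptors by Euclidean distance and apply the ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des_ref, des_tgt, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    # Sort by distance and drop the matches with larger distances,
    # as described in the embodiment.
    good.sort(key=lambda m: m.distance)
    return kp_ref, kp_tgt, good[: int(0.8 * len(good))]
```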
In this embodiment, feature matching is performed on the object feature points corresponding to each historical object image through the vector distances, yielding the historical object matching points corresponding to each historical object image in the historical object image set. This improves the accuracy of the historical object matching points, so that the historical feature point three-dimensional coordinates calculated from them are more accurate, which in turn improves the accuracy of the three-dimensional offset information of the target object.
In one embodiment, step 206, performing the same object point matching using the historical object matching point corresponding to each historical object image and the current object matching point corresponding to each current object image, includes:
extracting feature vectors corresponding to the historical object matching points and feature vectors corresponding to the current object matching points;
calculating the vector distance between the feature vector corresponding to the historical object matching point and the feature vector corresponding to the current object matching point;
when a vector distance smaller than the preset distance threshold is detected, determining that the historical object matching point and the current object matching point correspond to the same object point.
Specifically, the server extracts the feature vectors corresponding to the historical object matching points and the feature vectors corresponding to the current object matching points, calculates the vector distances between them, and determines the historical and current object matching points whose vector distance is smaller than the preset distance threshold to be the same object point, obtaining the same object points between the historical object images and the current object images.
In this embodiment, determining from the vector distance whether a historical object matching point and a current object matching point are the same object point reduces mismatches between them, which improves the accuracy of the coordinate differences corresponding to the same object points.
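A minimal sketch of this same-object-point test follows; the threshold value and function name are assumptions made for illustration, since the patent only refers to a preset distance threshold.

```python
import numpy as np

DIST_THRESHOLD = 200.0  # assumed value; the patent leaves the threshold unspecified

def same_object_points(hist_des: np.ndarray, cur_des: np.ndarray):
    """hist_des: (Nh, 128) descriptors of the historical object matching points;
    cur_des: (Nc, 128) descriptors of the current object matching points."""
    pairs = []
    for i, h in enumerate(hist_des):
        d = np.linalg.norm(cur_des - h, axis=1)  # vector (Euclidean) distances
        j = int(np.argmin(d))
        if d[j] < DIST_THRESHOLD:                # same object point
            pairs.append((i, j))
    return pairs
```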
In one embodiment, step 208, performing space pose transformation parameter calculation using the historical object matching points corresponding to each historical object image to obtain the historical space pose transformation parameters, includes:
acquiring the image coordinates of the historical object matching points corresponding to each historical object image, and calculating the coordinate mapping relation parameters between the historical object images based on the image coordinates; and
performing parameter decomposition based on the coordinate mapping relation parameters to obtain the historical space pose transformation parameters.
The coordinate mapping relation parameters characterize the coordinate mapping relation between the historical object images.
Specifically, the server acquires the image coordinates of the historical object matching points corresponding to each historical object image and calculates from them the coordinate mapping relation parameters between the historical object images; these parameters may be the fundamental matrix (basic matrix) corresponding to the historical object images. The server performs parameter decomposition on the fundamental matrix to obtain the historical space pose transformation parameters, which may be the pose transformation parameters of the image acquisition devices capturing images on the left and right sides of the target object and are used to calculate the historical feature point three-dimensional coordinates corresponding to the historical object matching points.
Repeating the same logic, the server obtains the current space pose transformation parameters corresponding to each current object image.
In a specific embodiment, the fundamental matrix corresponding to the historical object images is calculated by a fundamental matrix estimation method. The fundamental matrix is defined by formula (3):

$X'^{T} F X = 0$  (3)

where $X$ and $X'$ are any pair of matching points of the two images.
Since each pair of matching points provides one linear equation in the coefficients of F, the unknown F, i.e. the coordinate mapping relation parameters, can be calculated given at least 7 point pairs (a 3×3 homogeneous matrix loses one degree of freedom to scale, and the rank-2 constraint removes another). Let the point coordinates be $X = (x, y, 1)^{T}$ and $X' = (x', y', 1)^{T}$; the corresponding equation is shown in formula (4):

$(x'\ \ y'\ \ 1)\begin{pmatrix} f_{11} & f_{12} & f_{13} \\ f_{21} & f_{22} & f_{23} \\ f_{31} & f_{32} & f_{33} \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = 0$  (4)

which expands to formula (5):

$x'x f_{11} + x'y f_{12} + x' f_{13} + y'x f_{21} + y'y f_{22} + y' f_{23} + x f_{31} + y f_{32} + f_{33} = 0$  (5)

Writing the matrix F as a column vector $f$ gives formula (6):

$\left[\,x'x \ \ x'y \ \ x' \ \ y'x \ \ y'y \ \ y' \ \ x \ \ y \ \ 1\,\right] f = 0$  (6)

Given a set of n point pairs, the stacked linear system is shown in formula (7):

$A f = \begin{bmatrix} x'_{1}x_{1} & x'_{1}y_{1} & x'_{1} & y'_{1}x_{1} & y'_{1}y_{1} & y'_{1} & x_{1} & y_{1} & 1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ x'_{n}x_{n} & x'_{n}y_{n} & x'_{n} & y'_{n}x_{n} & y'_{n}y_{n} & y'_{n} & x_{n} & y_{n} & 1 \end{bmatrix} f = 0$  (7)
The algorithm flow for determining the fundamental matrix comprises the following steps:

(1) Normalization: the image coordinates are transformed as $\hat{x}_{i} = T x_{i}$ and $\hat{x}'_{i} = T' x'_{i}$, where $T$ and $T'$ are normalizing transforms implemented by translation and scaling.

(2) Solving the fundamental matrix $\hat{F}'$ for the normalized correspondences $\hat{x}_{i} \leftrightarrow \hat{x}'_{i}$, comprising:

Linear solution: $\hat{F}$ is determined from the singular vector of the smallest singular value of the coefficient matrix $\hat{A}$ built from the correspondences $\hat{x}_{i} \leftrightarrow \hat{x}'_{i}$.

Singularity constraint: $\hat{F}$ is decomposed by SVD (Singular Value Decomposition) and the smallest singular value is set to zero, yielding $\hat{F}'$ such that $\det \hat{F}' = 0$.

(3) Denormalization: let $F = T'^{T} \hat{F}' T$; the matrix $F$ is the fundamental matrix corresponding to the original point pairs $x_{i} \leftrightarrow x'_{i}$.
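A minimal NumPy sketch of this normalized eight-point flow follows, under the assumption of eight or more matches (the seven-point minimum mentioned above requires a different solver); the function names are illustrative.

```python
import numpy as np

def normalize(pts: np.ndarray):
    """pts: (N, 2). Returns homogeneous points and the 3x3 transform T."""
    c = pts.mean(axis=0)
    s = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    hom = np.column_stack([pts, np.ones(len(pts))])
    return (T @ hom.T).T, T

def fundamental_matrix(x: np.ndarray, xp: np.ndarray) -> np.ndarray:
    """x, xp: (N, 2) matched points, N >= 8, satisfying x'^T F x = 0."""
    xn, T = normalize(x)                 # step (1): normalization
    xpn, Tp = normalize(xp)
    # One row per match, formula (6): [x'x x'y x' y'x y'y y' x y 1]
    A = np.column_stack([
        xpn[:, 0] * xn[:, 0], xpn[:, 0] * xn[:, 1], xpn[:, 0],
        xpn[:, 1] * xn[:, 0], xpn[:, 1] * xn[:, 1], xpn[:, 1],
        xn[:, 0], xn[:, 1], np.ones(len(xn))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)             # singular vector of smallest singular value
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                           # singularity constraint: enforce rank 2
    F = U @ np.diag(S) @ Vt
    F = Tp.T @ F @ T                     # step (3): remove normalization
    return F / F[2, 2]
```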
Following this flow, the server calculates the fundamental matrix corresponding to the historical object images, performs singular value decomposition on it, and obtains the historical space pose transformation parameters in the form $[R \mid t]$, where $R$ denotes the rotation matrix and $t$ denotes the translation vector.
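A hedged sketch of this decomposition step: assuming the calibration parameters K are known, one standard realization first upgrades the fundamental matrix to the essential matrix $E = K^{T} F K$ and then decomposes it into R and t, here via OpenCV's recoverPose, which performs the SVD decomposition and the cheirality check; this illustrates the step but is not necessarily the patent's exact procedure.

```python
import cv2
import numpy as np

def pose_from_fundamental(F: np.ndarray, K: np.ndarray,
                          x: np.ndarray, xp: np.ndarray):
    """F: 3x3 fundamental matrix; K: 3x3 calibration (intrinsic) matrix;
    x, xp: (N, 2) matched pixel coordinates in the two images."""
    E = K.T @ F @ K                          # essential matrix
    _, R, t, _ = cv2.recoverPose(E, x.astype(np.float64),
                                 xp.astype(np.float64), K)
    return R, t                              # rotation matrix, translation vector
```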
In this embodiment, the coordinate mapping relation parameters between the historical object images are calculated from the image coordinates of the historical object matching points and decomposed to obtain the historical space pose transformation parameters, so that the historical feature point three-dimensional coordinates subsequently calculated with these parameters are more accurate.
In one embodiment of step 208, the image acquisition device is a camera device, and performing three-dimensional space matching point calculation using the historical space pose transformation parameters and the historical object matching points corresponding to each historical object image to obtain the historical feature point three-dimensional coordinates includes:
acquiring the calibration parameters corresponding to the camera device, and performing three-dimensional space matching point calculation based on the calibration parameters, the historical space pose transformation parameters, and the historical object matching points to obtain the historical feature point three-dimensional coordinates.
The calibration parameters are intrinsic parameters of the camera device and are used in the three-dimensional space matching point calculation.
Specifically, the server obtains the calibration parameters corresponding to the camera device, which may be the camera intrinsics, and performs three-dimensional space matching point calculation based on the calibration parameters, the historical space pose transformation parameters, and the historical object matching points, obtaining the historical feature point three-dimensional coordinates. The server may use a triangulation method for this calculation.
Triangulation expresses the projection of a three-dimensional space point onto the image, as shown in formula (8):

$\lambda a = K P X$  (8)

where $\lambda$ denotes the camera depth; $K$ denotes the calibration parameters, i.e. the intrinsics; $a = [u\ v\ 1]^{T}$ denotes the normalized plane coordinates obtained from the pixel coordinates; $P$ denotes the pose transformation parameters of the camera device; and $X$ denotes the homogeneous coordinates $[x\ y\ z\ 1]^{T}$ of the three-dimensional space point. $P_{1}$, $P_{2}$, and $P_{3}$ denote, respectively, the pose transformation parameters from the left camera device to the right camera, from the right camera device to the left camera, and between the left and right cameras.
Cross-multiplying both sides of formula (8) by $a$ eliminates $\lambda$, as shown in formula (9):

$[a]_{\times} K P X = 0$  (9)

where $[a]_{\times}$ denotes the antisymmetric matrix corresponding to the vector $a$. The server computes the singular vector of the smallest singular value of the coefficient matrix corresponding to formula (9) using the SVD algorithm, obtaining the homogeneous coordinates $X$, i.e. the historical feature point three-dimensional coordinates.
Repeating the same logic, the server obtains the current feature point three-dimensional coordinates corresponding to the current object matching points in each current object image.
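The following sketch illustrates the triangulation of formulas (8)-(9), assuming projection matrices $K[I \mid 0]$ and $K[R \mid t]$ for the two acquisition positions; the cross product with each pixel's homogeneous coordinates contributes two independent rows, and the stacked system is solved by SVD. All names are illustrative.

```python
import numpy as np

def skew(a: np.ndarray) -> np.ndarray:
    """Antisymmetric matrix [a]x of a 3-vector a."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def triangulate(K, R, t, u_left, u_right):
    """u_left, u_right: (2,) pixel coordinates of the same object point
    in the left and right images; returns the 3D coordinates."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # left view
    P2 = K @ np.hstack([R, t.reshape(3, 1)])            # right view
    a1 = np.array([u_left[0], u_left[1], 1.0])
    a2 = np.array([u_right[0], u_right[1], 1.0])
    # Formula (9): [a]x P X = 0, two independent rows per view.
    A = np.vstack([(skew(a1) @ P1)[:2],
                   (skew(a2) @ P2)[:2]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                                          # smallest singular value
    return X[:3] / X[3]                                 # inhomogeneous coordinates
```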
In this embodiment, three-dimensional space matching point calculation is performed using the calibration parameters, the historical space pose transformation parameters, and the historical object matching points, which improves the accuracy of the historical feature point three-dimensional coordinates; the three-dimensional offset information of the target object calculated from these coordinates is therefore also more accurate.
In one embodiment, step 212, calculating the difference value corresponding to the three-dimensional coordinates of the historical feature point and the three-dimensional coordinates of the current feature point to obtain the three-dimensional offset information corresponding to the target object includes:
acquiring the historical feature point three-dimensional coordinates and the current feature point three-dimensional coordinates corresponding to each same object point, and calculating the difference between them for each same object point to obtain the coordinate difference corresponding to each same object point; and
performing mean value calculation based on the coordinate differences corresponding to the same object points to obtain the three-dimensional offset information corresponding to the target object.
Specifically, the server acquires the historical feature point three-dimensional coordinates and the current feature point three-dimensional coordinates corresponding to each same object point and calculates the difference between them for each same object point, obtaining the coordinate difference corresponding to each same object point. The difference calculation is shown in formula (10):

$d = \sqrt{\Delta x^{2} + \Delta y^{2} + \Delta z^{2}}$  (10)

where $\Delta x$, $\Delta y$, $\Delta z$ denote the component-wise differences between the historical feature point three-dimensional coordinates and the current feature point three-dimensional coordinates, and $d$ denotes the coordinate difference corresponding to the same object point.
The server screens the coordinate differences corresponding to the same object points and removes gross errors, obtaining the target coordinate differences. The server then averages the target coordinate differences to obtain the three-dimensional offset information corresponding to the target object.
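A minimal sketch of formula (10) and the averaging step follows; the 3-sigma gross-error screening is an assumption, since the patent does not specify the screening rule.

```python
import numpy as np

def three_d_offset(hist_pts: np.ndarray, cur_pts: np.ndarray) -> float:
    """hist_pts, cur_pts: (N, 3) three-dimensional coordinates of the same
    object points at the historical and current moments."""
    d = np.linalg.norm(cur_pts - hist_pts, axis=1)    # formula (10)
    keep = np.abs(d - d.mean()) <= 3.0 * d.std()      # assumed 3-sigma screening
    return float(d[keep].mean())                      # three-dimensional offset
```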
In one embodiment, as shown in FIG. 3, a flow chart for acquiring the three-dimensional offset information is provided. The camera device is calibrated in advance to obtain its calibration parameters. A historical object image set and a current object image set are acquired, and feature extraction and feature matching are performed on each of them, yielding the historical object matching points corresponding to the historical object image set and the current object matching points corresponding to the current object image set. The server estimates the fundamental matrix between the historical object images from the historical object matching points and calculates the historical space pose transformation parameters from it, and likewise estimates the fundamental matrix between the current object images from the current object matching points and calculates the current space pose transformation parameters from it.
And the server calculates and obtains three-dimensional coordinates of the historical feature points corresponding to the historical object matching points according to a triangulation calculation method by using pixel coordinates, calibration parameters and historical space pose transformation parameters corresponding to the historical object matching points. And calculating the three-dimensional coordinates of the current feature point corresponding to the current object matching point according to a triangulation calculation method by using the pixel coordinates, the calibration parameters and the current space pose transformation parameters corresponding to the current object matching point.
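A minimal triangulation sketch for this step, under the common convention that the first (e.g. historical) view sits at the origin and (R, t) is the recovered relative pose; pts1 and pts2 are the N x 2 pixel coordinates of the matched points. All names are illustrative.

```python
# Sketch: build the two projection matrices from the calibration parameters
# and the pose, then triangulate the three-dimensional space matching points.
import cv2
import numpy as np

def triangulate(pts1, pts2, K, R, t):
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first view: K [I | 0]
    P2 = K @ np.hstack([R, t.reshape(3, 1)])            # second view: K [R | t]
    pts4d = cv2.triangulatePoints(P1, P2,
                                  pts1.T.astype(np.float64),
                                  pts2.T.astype(np.float64))
    return (pts4d[:3] / pts4d[3]).T                     # homogeneous -> N x 3 coordinates
```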
And the server calculates three-dimensional offset information corresponding to the target object according to the three-dimensional coordinates of the historical feature points and the three-dimensional coordinates of the current feature points.
In one embodiment, as shown in fig. 4, a specific flowchart for acquiring three-dimensional offset information is provided. The server acquires a historical object image set and a current object image set and, on each set, performs Gaussian blurring, builds a Gaussian pyramid and a difference-of-Gaussian pyramid, searches for extreme points, accurately locates key points, determines the main direction of each key point, and generates feature point descriptors, namely the object feature points corresponding to each historical object image in the historical object image set and the object feature points corresponding to each current object image in the current object image set.
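The steps listed here match the SIFT pipeline, so one hedged way to realize the extraction is OpenCV's SIFT implementation, which performs the blurring, pyramid construction, extremum search, key-point refinement, orientation assignment and descriptor generation internally (the patent does not name SIFT explicitly):

```python
# Sketch: detect key points and compute 128-D descriptors on one object image.
import cv2

def extract_features(image_path: str):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    return keypoints, descriptors
```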
The server traverses the Euclidean distances between the image feature points corresponding to each historical object image, and likewise between the image feature points corresponding to each current object image. It sorts by Euclidean distance to obtain the historical object matching points and the current object matching points with the shortest Euclidean distances. The server then extracts the pixel coordinates corresponding to the historical object matching points and the pixel coordinates of the current object matching points, and calculates the three-dimensional offset information corresponding to the target object from these pixel coordinates.
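A sketch of this distance-based matching, assuming des1 and des2 are descriptor arrays from two images: brute-force L2 matching, sorted so that the shortest Euclidean distances come first. The keep parameter is an illustrative assumption.

```python
# Sketch: match descriptors by Euclidean distance and keep the closest pairs.
import cv2

def match_features(des1, des2, keep: int = 200):
    bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)   # cross-check filters asymmetric matches
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
    return matches[:keep]                              # matching points with shortest distances
```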
In one embodiment, as shown in FIG. 5, a triangulation schematic is provided. In the figure, pj denotes a feature point on an object image acquired by the image acquisition device, Pj denotes the three-dimensional space matching point corresponding to that feature point, and K denotes the calibration parameters corresponding to the image acquisition device.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in these flowcharts may include several sub-steps or stages, which are not necessarily executed at the same time but may be executed at different moments; their execution order is not necessarily sequential, and they may be executed in turn or alternately with at least part of the other steps, or with the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiments of the present application also provide an image processing apparatus for implementing the above-mentioned image processing method. The implementation of the solution provided by the apparatus is similar to the implementation described in the above method, so the specific limitation of one or more embodiments of the image processing apparatus provided below may refer to the limitation of the image processing method hereinabove, and will not be repeated herein.
In one embodiment, as shown in fig. 6, there is provided an image processing apparatus 600 including: an acquisition module 602, a matching module 604, a matching point identification module 606, a historical coordinate transformation module 608, a current coordinate transformation module 610, and an offset calculation module 612, wherein:
the acquisition module 602 is configured to acquire a historical object image set and a current object image set corresponding to a target object, where the historical object image set is obtained by image acquisition of the target object from different acquisition positions at a historical moment by using an image acquisition device, and the current object image set is obtained by image acquisition of the target object from different acquisition positions at a current moment by using the image acquisition device;
the matching module 604 is configured to perform object point matching based on the historical object image set and the current object image set, so as to obtain a historical object matching point corresponding to each historical object image in the historical object image set and a current object matching point corresponding to each current object image in the current object image set;
A matching point identifying module 606, configured to perform the same object point matching using the historical object matching points corresponding to each historical object image and the current object matching points corresponding to each current object image;
the historical coordinate transformation module 608 is configured to perform space pose transformation parameter calculation using the historical object matching points corresponding to each historical object image when the historical object matching points and the current feature matching points are the same object point, obtain a historical space pose transformation parameter, and perform three-dimensional space matching point calculation using the historical pose transformation parameter and the historical object matching points corresponding to each historical object image, so as to obtain a three-dimensional coordinate of the historical feature point;
the current coordinate transformation module 610 is configured to perform space pose transformation parameter calculation using the current feature matching points corresponding to each current object image to obtain current space pose transformation parameters, and perform three-dimensional space matching point calculation using the current pose transformation parameters and the current feature matching points corresponding to each current object image to obtain three-dimensional coordinates of the current feature points;
the offset calculation module 612 is configured to calculate a difference value corresponding to the three-dimensional coordinates of the historical feature point and the three-dimensional coordinates of the current feature point, so as to obtain three-dimensional offset information corresponding to the target object.
In one embodiment, the matching module 604 includes:
the feature matching unit is used for carrying out fuzzy processing on each historical object image to obtain fuzzy images corresponding to each historical object image, and determining extreme points based on pixel values in the fuzzy images; obtaining object feature points corresponding to each historical object image respectively based on the extreme points; extracting feature vectors corresponding to the object feature points, and calculating vector distances between the object feature points corresponding to each historical object image by using the feature vectors; and carrying out feature matching on object feature points corresponding to each historical object image based on the vector distance to obtain historical object matching points corresponding to each historical object image in the historical object image set.
In one embodiment, the matching point identification module 606 includes:
the matching point detection unit is used for extracting the characteristic vector corresponding to the matching point of the historical object and the characteristic vector corresponding to the matching point of the current object; calculating the vector distance between the feature vector corresponding to the historical object matching point and the feature vector corresponding to the current object matching point; when the vector distance is detected to be smaller than the preset distance threshold value, determining that the historical object matching point and the current object matching point are the same object matching point.
In one embodiment, the historical coordinate transformation module 608 includes:
the parameter decomposition unit is used for acquiring image coordinates of the history object matching points corresponding to the history object images and calculating coordinate mapping relation parameters among the history object images based on the image coordinates; and carrying out parameter decomposition based on the coordinate mapping relation parameters to obtain the historical space pose transformation parameters.
In one embodiment, the historical coordinate transformation module 608 includes:
the three-dimensional point calculation unit is used for obtaining calibration parameters corresponding to the camera equipment, and performing three-dimensional space matching point calculation based on the calibration parameters, the historical pose transformation parameters and the matching points of each historical object to obtain three-dimensional coordinates of the historical feature points.
In one embodiment, the offset calculation module 612 includes:
the average value calculation unit is used for obtaining three-dimensional coordinates of the historical characteristic points and three-dimensional coordinates of the current characteristic points corresponding to the same object points, and calculating differences between the three-dimensional coordinates of the historical characteristic points and the three-dimensional coordinates of the current characteristic points corresponding to the same object points respectively to obtain coordinate differences corresponding to the same object points; and carrying out mean value calculation based on the coordinate difference values corresponding to the same object points to obtain the three-dimensional offset information corresponding to the target object.
Each module in the above image processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. Each module may be embedded in, or independent of, a processor of the computer device in the form of hardware, or stored in a memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server whose internal structure may be as shown in fig. 7. The computer device includes a processor, a memory, an input/output (I/O) interface and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store the historical object image set and the current object image set. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements an image processing method.
In one embodiment, a computer device is provided, which may be a terminal whose internal structure may be as shown in fig. 8. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit and an input device. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; wireless communication may be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program, when executed by the processor, implements an image processing method. The display unit of the computer device is used to form a visual picture and may be a display screen, a projection device or a virtual reality imaging device; the display screen may be a liquid crystal display screen or an electronic ink display screen. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad or mouse.
It will be appreciated by those skilled in the art that the structures shown in fig. 7-8 are block diagrams of only some of the structures that are relevant to the present application and are not intended to limit the computer device on which the present application may be implemented, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples represent only a few embodiments of the present application; their description is relatively specific and detailed, but should not therefore be construed as limiting the scope of the application. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. An image processing method, the method comprising:
acquiring a historical object image set and a current object image set corresponding to a target object, wherein the historical object image set is obtained by image acquisition of the target object from different acquisition positions at a historical moment through image acquisition equipment, and the current object image set is obtained by image acquisition of the target object from different acquisition positions at a current moment through the image acquisition equipment;
Respectively carrying out object point matching based on the historical object image set and the current object image set to obtain historical object matching points corresponding to each historical object image in the historical object image set and current object matching points corresponding to each current object image in the current object image set;
performing identical object point matching by using the historical object matching points corresponding to the historical object images and the current object matching points corresponding to the current object images;
when the historical object matching points and the current feature matching points are the same object point, performing space pose transformation parameter calculation by using the historical object matching points corresponding to each historical object image to obtain historical space pose transformation parameters, and performing three-dimensional space matching point calculation by using the historical pose transformation parameters and the historical object matching points corresponding to each historical object image to obtain three-dimensional coordinates of the historical feature points;
performing space pose transformation parameter calculation by using the current feature matching points corresponding to each current object image to obtain current space pose transformation parameters, and performing three-dimensional space matching point calculation by using the current pose transformation parameters and the current feature matching points corresponding to each current object image to obtain three-dimensional coordinates of the current feature points;
And calculating the difference value corresponding to the three-dimensional coordinates of the historical feature points and the three-dimensional coordinates of the current feature points to obtain the three-dimensional offset information corresponding to the target object.
2. The method according to claim 1, wherein the performing object point matching based on the historical object image set and the current object image set respectively to obtain a historical object matching point corresponding to each historical object image in the historical object image set and a current object matching point corresponding to each current object image in the current object image set includes:
performing fuzzy processing on each historical object image to obtain a fuzzy image corresponding to each historical object image, and determining an extreme point based on pixel values in the fuzzy image;
obtaining object feature points corresponding to the historical object images respectively based on the extreme points;
extracting feature vectors corresponding to the object feature points, and calculating vector distances between the object feature points corresponding to each historical object image by using the feature vectors;
and carrying out feature matching on object feature points corresponding to each historical object image based on the vector distance to obtain historical object matching points corresponding to each historical object image in the historical object image set.
3. The method according to claim 1, wherein the performing the same object point matching using the history object matching points corresponding to the respective history object images and the current object matching points corresponding to the respective current object images includes:
extracting a feature vector corresponding to the historical object matching point and a feature vector corresponding to the current object matching point;
calculating the vector distance between the feature vector corresponding to the historical object matching point and the feature vector corresponding to the current object matching point;
and when the vector distance is detected to be smaller than a preset distance threshold value, determining that the historical object matching point and the current object matching point are the same object matching point.
4. The method according to claim 1, wherein the calculating the spatial pose transformation parameters using the historical object matching points corresponding to the historical object images to obtain the historical spatial pose transformation parameters includes:
acquiring image coordinates of the history object matching points corresponding to the history object images, and calculating coordinate mapping relation parameters between the history object images based on the image coordinates;
and carrying out parameter decomposition based on the coordinate mapping relation parameters to obtain the historical space pose transformation parameters.
5. The method of claim 1, wherein the image acquisition device is a camera device; the step of calculating three-dimensional space matching points by using the history pose transformation parameters and the history object matching points corresponding to the history object images to obtain three-dimensional coordinates of history feature points comprises the following steps:
and acquiring calibration parameters corresponding to the camera equipment, and calculating three-dimensional space matching points based on the calibration parameters, the historical pose transformation parameters and the historical object matching points to obtain three-dimensional coordinates of the historical feature points.
6. The method according to claim 1, wherein calculating the difference value between the three-dimensional coordinates of the historical feature point and the three-dimensional coordinates of the current feature point to obtain the three-dimensional offset information corresponding to the target object includes:
acquiring three-dimensional coordinates of a historical feature point and three-dimensional coordinates of a current feature point corresponding to each same object point, and respectively calculating differences between the three-dimensional coordinates of the historical feature point and the three-dimensional coordinates of the current feature point corresponding to each same object point to obtain coordinate differences corresponding to each same object point;
and carrying out mean value calculation based on the coordinate difference values corresponding to the same object points to obtain the three-dimensional offset information corresponding to the target object.
7. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a historical object image set and a current object image set corresponding to a target object, wherein the historical object image set is obtained by image acquisition of the target object from different acquisition positions at a historical moment through the image acquisition equipment, and the current object image set is obtained by image acquisition of the target object from different acquisition positions at the current moment through the image acquisition equipment;
the matching module is used for respectively carrying out object point matching based on the historical object image set and the current object image set to obtain historical object matching points corresponding to each historical object image in the historical object image set and current object matching points corresponding to each current object image in the current object image set;
the matching point identification module is used for carrying out the same object point matching by using the historical object matching points corresponding to the historical object images and the current object matching points corresponding to the current object images;
the historical coordinate transformation module is used for performing space pose transformation parameter calculation by using the historical object matching points corresponding to each historical object image when the historical object matching points and the current characteristic matching points are the same object point to obtain historical space pose transformation parameters, and performing three-dimensional space matching point calculation by using the historical pose transformation parameters and the historical object matching points corresponding to each historical object image to obtain three-dimensional coordinates of the historical characteristic points;
The current coordinate transformation module is used for carrying out space pose transformation parameter calculation by using the current feature matching points corresponding to each current object image to obtain current space pose transformation parameters, and carrying out three-dimensional space matching point calculation by using the current pose transformation parameters and the current feature matching points corresponding to each current object image to obtain three-dimensional coordinates of the current feature points;
and the offset calculation module is used for calculating the difference value corresponding to the three-dimensional coordinates of the historical characteristic points and the three-dimensional coordinates of the current characteristic points to obtain the three-dimensional offset information corresponding to the target object.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN202211661662.4A 2022-12-23 2022-12-23 Image processing method, device, computer equipment and storage medium Pending CN116091998A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211661662.4A CN116091998A (en) 2022-12-23 2022-12-23 Image processing method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211661662.4A CN116091998A (en) 2022-12-23 2022-12-23 Image processing method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116091998A (en) 2023-05-09

Family

ID=86207435

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211661662.4A Pending CN116091998A (en) 2022-12-23 2022-12-23 Image processing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116091998A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117116413A (en) * 2023-10-16 2023-11-24 深圳卡尔文科技有限公司 Oral planting optimization method, system and storage medium
CN117116413B (en) * 2023-10-16 2023-12-26 深圳卡尔文科技有限公司 Oral planting optimization method, system and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination