CN113409365B - Image processing method, related terminal, device and storage medium - Google Patents


Info

Publication number
CN113409365B
CN113409365B (application CN202110713177.6A)
Authority
CN
China
Prior art keywords
image
processing result
registered
local
cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110713177.6A
Other languages
Chinese (zh)
Other versions
CN113409365A (en)
Inventor
王求元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Shangtang Technology Development Co Ltd
Original Assignee
Zhejiang Shangtang Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Shangtang Technology Development Co Ltd filed Critical Zhejiang Shangtang Technology Development Co Ltd
Priority to CN202110713177.6A priority Critical patent/CN113409365B/en
Publication of CN113409365A publication Critical patent/CN113409365A/en
Application granted granted Critical
Publication of CN113409365B publication Critical patent/CN113409365B/en
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method, a related terminal, a device, and a storage medium. The image processing method includes: a terminal acquires an image to be registered; performs first image registration on the image to be registered using a local target image to obtain a local processing result; sends the image to be registered to the cloud so that the cloud performs second image registration on it using a cloud target image to obtain a cloud processing result; and obtains a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result. This method improves the flexibility and reliability of image registration.

Description

Image processing method, related terminal, device and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular to an image processing method and a related terminal, device, and storage medium.
Background
Image registration and tracking are important research topics in computer vision fields such as AR and VR. Through image registration and image tracking, transformation parameters between the current image captured by a camera and a target image can be obtained, and the position of the target image in the current image can then be derived from these parameters.
At present, a terminal either uploads captured images over the network to the cloud, which completes the image processing and feeds the result back to the local end, or the local end runs the image processing algorithm using only its own computing power. The first approach is easily affected by poor network transmission speed and slow cloud processing, so the device may fail to obtain a result in time; the second approach suffers from low image processing accuracy because the local computing capability is insufficient. Both problems greatly hinder the further development of the technology.
Therefore, improving both the speed at which a device runs image processing algorithms and the accuracy of image processing is of great significance.
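By way of illustration only (not part of the original disclosure), the following minimal Python sketch shows what the transformation parameters are used for: given a 3×3 homography H between the target image and the current image, the target's corners can be projected into the current image to locate it. All function names here are hypothetical.

```python
def apply_homography(H, pt):
    """Project a 2-D point through a 3x3 homography given as nested lists."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def locate_target(H, width, height):
    """Map the four corners of a width x height target image into the current image."""
    corners = [(0, 0), (width, 0), (width, height), (0, height)]
    return [apply_homography(H, c) for c in corners]

# With the identity homography, the corners are unchanged.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

A real system would obtain H from a registration algorithm rather than construct it by hand.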
Disclosure of Invention
The application provides an image processing method, a related terminal, a related device and a storage medium.
The first aspect of the present application provides an image processing method, including: a terminal acquires an image to be registered; performs first image registration on the image to be registered using a local target image to obtain a local processing result; sends the image to be registered to the cloud so that the cloud performs second image registration on it using a cloud target image to obtain a cloud processing result; and obtains a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result.
In this way, the local processing result is obtained via the first image registration and the cloud processing result via the second image registration, so the terminal can draw on both the computing power of the cloud and that of the local end, making its image registration more flexible. Because the final processing result is based on at least one of the local and cloud processing results, even if one of them cannot be obtained, the final result can still be derived from the other, which improves the reliability of image registration.
In some application scenarios, the processing result that is obtained first can be selected as the final result, improving the speed of image registration; in other scenarios, the result from the end with better processing resources (for example, a stronger and more accurate registration capability) can be preferred, improving the accuracy of image registration.
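The "take whichever processing result arrives first" strategy described above can be sketched with Python's standard `concurrent.futures`; this is a hypothetical illustration (the patent does not prescribe an implementation), with stand-in delays in place of real registration work.

```python
import concurrent.futures
import time

def local_registration(image):
    time.sleep(0.01)   # stand-in: fast, possibly less accurate local registration
    return {"source": "local", "params": "H_local"}

def cloud_registration(image):
    time.sleep(0.05)   # stand-in: slower but more accurate cloud registration
    return {"source": "cloud", "params": "H_cloud"}

def first_result(image):
    """Run both registrations concurrently and return whichever finishes first."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(local_registration, image),
                   pool.submit(cloud_registration, image)]
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        return next(iter(done)).result()
```

With the delays above, the local result wins the race; preferring the better-resourced end instead would simply mean waiting on the cloud future up to some deadline.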
Performing the first image registration on the image to be registered using the local target image to obtain the local processing result includes: searching the first target image set for at least one first target image to serve as the local target image; registering the image to be registered and the local target image in a local registration manner to obtain local transformation parameters between them; and obtaining the local processing result based on the local transformation parameters. Additionally or alternatively, the cloud processing result is obtained by the cloud registering the image to be registered and the cloud target image in a cloud registration manner, where the cloud target image comes from a second target image set.
In this way, by finding at least one first target image from the first target image set to serve as the local target image, local transformation parameters can be calculated based on the local target image, and the local processing result finally obtained.
At least some images in the first target image set and the second target image set are the same; and/or the number of images in the first target image set is smaller than the number of images in the second target image set; and/or the computing power or computing time required by the local registration manner is less than that required by the cloud registration manner.
Because some images exist in both the first and second target image sets, both the cloud and the local end can register those images against the image to be registered, which improves the robustness of the image processing method. In addition, because the first target image set contains fewer images than the second, the local end has fewer images to register against the image to be registered during the first image registration, which speeds up that registration; and since the cloud has greater processing capability, configuring more target images for the cloud enables more accurate cloud registration. Finally, requiring less computing power for the local registration manner than for the cloud registration manner reduces the demands on the terminal's local computing capability, and requiring less computing time speeds up local registration.
Searching the first target image set for at least one first target image to serve as the local target image includes: determining feature similarity between the image to be registered and each first target image based on feature representations of feature points in the two images; and selecting at least one first target image whose feature similarity meets a preset similarity requirement as the local target image. Additionally or alternatively, registering the image to be registered and the local target image in a local registration manner to obtain local transformation parameters between them includes: determining at least one group of local matching point pairs between the image to be registered and the local target image based on the feature representations of feature points in both images; and obtaining the local transformation parameters based on the at least one group of local matching point pairs.
In this way, the similarity calculation quickly identifies the first target image most similar to the image to be registered, which speeds up local registration.
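As a hypothetical sketch of the selection step above (the patent does not fix a similarity measure), the following Python code scores each candidate target image against the query by cosine similarity of global feature vectors and keeps those meeting a preset threshold.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def select_local_targets(query_feature, target_set, threshold=0.8):
    """Return ids of target images whose feature vector is similar enough
    to the query image's feature vector (the 'preset similarity requirement')."""
    return [tid for tid, feat in target_set.items()
            if cosine_similarity(query_feature, feat) >= threshold]
```

In practice the feature vectors would come from a feature extractor over the images' feature points; here they are plain lists for illustration.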
Obtaining the first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result includes: when a preset condition is met, taking the cloud processing result as the first processing result of the image to be registered; and when the preset condition is not met, taking the local processing result as the first processing result.
In this way, by judging whether the cloud processing result meets the preset condition, the computing power of the cloud is fully utilized when the condition is met; when it is not met, the local processing result is used as the first processing result so that image registration can continue.
The preset condition is that the cloud processing result is received within a preset time.
Setting the preset condition to receiving the cloud processing result within a preset time means the cloud result is simply not used when the condition is not met, which prevents the terminal's response time from becoming too long.
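The timeout-based preset condition can be sketched in a few lines of Python; this is an illustrative assumption about how such a check could be wired up, using a queue on which the cloud result would arrive.

```python
import queue

def pick_result(local_result, cloud_queue, preset_timeout=0.1):
    """Use the cloud processing result if it arrives within the preset time;
    otherwise fall back to the local processing result."""
    try:
        return cloud_queue.get(timeout=preset_timeout)
    except queue.Empty:
        return local_result
```

The terminal thus never blocks longer than `preset_timeout` waiting for the cloud.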
Acquiring the image to be registered includes: acquiring image frames captured by a photographing device, the image frames including first image frames and second image frames; and taking a first image frame as the image to be registered. The method further includes: taking the second image frames in sequence as images to be tracked, and obtaining a second processing result for each image to be tracked based on a reference processing result of a reference image frame, the image to be tracked, and image information in the reference image frame, where the reference image frame is an image frame preceding the image to be tracked and the reference processing result is determined based on the first processing result.
In this way, taking the first image frames as images to be registered enables continuous registration of the frames captured by the photographing device, and taking the second image frames in sequence as images to be tracked enables subsequent image tracking of the second image frames.
The step of performing the first image registration on the image to be registered using the local target image to obtain the local processing result is executed by a first thread. At least one of the following steps is executed by a second thread: obtaining the second processing result of the image to be tracked based on the reference processing result of the reference image frame, the image to be tracked, and the image information in the reference image frame; sending the image to be registered to the cloud; and obtaining the first processing result of the image to be registered based on at least one of the local and cloud processing results. The first thread and the second thread are processed asynchronously.
Because the two threads are asynchronous, image tracking can proceed while image registration is being performed, without waiting for the registration result (the first processing result); the terminal can therefore obtain the tracking result (the second processing result) in time, which improves its response speed and reduces delay.
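A toy Python sketch of this two-thread arrangement (illustrative only; delays and data are stand-ins): the registration thread performs slow work and hands results over through a queue, while the tracking thread processes every frame immediately, without waiting for registration.

```python
import queue
import threading
import time

registration_out = queue.Queue()

def registration_worker(frames):
    """First thread: (slow) image registration; results are handed over
    via a queue instead of blocking the tracking loop."""
    for f in frames:
        time.sleep(0.005)                 # stand-in for registration work
        registration_out.put(("registered", f))

def tracking_worker(frames, results):
    """Second thread: per-frame tracking relative to the previous frame,
    running without waiting for the registration result."""
    ref = None
    for f in frames:
        results.append(("tracked", f, ref))
        ref = f

results = []
t1 = threading.Thread(target=registration_worker, args=([0],))
t2 = threading.Thread(target=tracking_worker, args=([1, 2, 3], results))
t1.start(); t2.start()
t1.join(); t2.join()
```

The queue decouples the two threads, which is one common way to realize the asynchronous processing the text describes.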
The first processing result is a first transformation parameter between the image to be registered and a final target image, where the final target image is the local target image or the cloud target image, and the second processing result is a second transformation parameter between the image to be tracked and the reference image frame. Alternatively, the first processing result is the pose of the image to be registered and the second processing result is the pose of the image to be tracked. Alternatively, the first processing result is the first transformation parameter, the second processing result is the pose of the image to be tracked, and the method further includes executing, by the second thread, the step of obtaining the pose of the image to be registered from the first transformation parameter.
In this way, because the second processing result may be of different types (the second transformation parameter or the pose of the image to be tracked), the appropriate type can be selected later as needed.
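The step "obtaining the pose from the transformation parameter" can be illustrated by the standard planar-homography decomposition. The sketch below assumes the camera intrinsics have already been removed, so that the homography is proportional to [r1 r2 t]; this simplification and the function name are the author's own, not the patent's.

```python
import math

def pose_from_homography(H):
    """Recover a rough camera pose from a plane-induced homography H,
    assuming intrinsics have been removed (H = [r1 r2 t] up to scale)."""
    h1 = [H[i][0] for i in range(3)]
    h2 = [H[i][1] for i in range(3)]
    h3 = [H[i][2] for i in range(3)]
    scale = math.sqrt(sum(v * v for v in h1))   # normalize by |h1|
    r1 = [v / scale for v in h1]
    r2 = [v / scale for v in h2]
    t = [v / scale for v in h3]
    # The third rotation column completes the basis: r3 = r1 x r2.
    r3 = [r1[1] * r2[2] - r1[2] * r2[1],
          r1[2] * r2[0] - r1[0] * r2[2],
          r1[0] * r2[1] - r1[1] * r2[0]]
    return [r1, r2, r3], t
```

A production system would also re-orthogonalize the rotation (e.g. via SVD) and fold in the intrinsic matrix; the sketch shows only the core relationship.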
Before the first image registration is performed on the image to be registered using the local target image to obtain the local processing result, the method further includes executing, by the second thread, the step of initializing the first thread; and/or the method further includes executing, by the second thread, at least one of the following: after the second thread obtains the second processing result of an image to be tracked, rendering and displaying the image to be tracked based on that result; and, when the first thread has obtained the first processing result of the image to be registered and the image to be registered has not yet been displayed, rendering and displaying the image to be registered based on the first processing result.
In this way, using the second thread to render and display the images to be tracked, or the image to be registered, completes the processing of the image frames, enabling interaction with the real environment.
A second aspect of the present application provides an image processing terminal, including an image acquisition module, a local registration module, a cloud registration module, and a determination module. The image acquisition module is used to acquire an image to be registered; the local registration module is used to perform first image registration on the image to be registered using a local target image to obtain a local processing result; the cloud registration module is used to send the image to be registered to the cloud so that the cloud performs second image registration on it using a cloud target image to obtain a cloud processing result; and the determination module is used to obtain a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result.
A third aspect of the present application provides an electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the image processing method of the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions which, when executed by a processor, implement the image processing method of the first aspect described above.
According to the above scheme, the local processing result is obtained through the first image registration and the cloud processing result through the second image registration, so the terminal can draw on both the computing power of the cloud and that of the local end, making its image registration more flexible. Because the final processing result is based on at least one of the local and cloud processing results, even if one of them cannot be obtained, the final result can still be derived from the other, which improves the reliability of image registration.
In some application scenarios, the processing result that is obtained first can be selected as the final result, improving the speed of image registration; in other scenarios, the result from the end with better processing resources (for example, a stronger and more accurate registration capability) can be preferred, improving the accuracy of image registration. Thus, when an image registration algorithm is run, a processing result can be obtained faster, or a more accurate registration result can be obtained.
Drawings
FIG. 1 is a first flow chart of a first embodiment of the image processing method of the present application;
FIG. 2 is a schematic diagram of a second flow chart of a first embodiment of the image processing method of the present application;
FIG. 3 is a third flow chart of a first embodiment of the image processing method of the present application;
FIG. 4 is a fourth flowchart of a first embodiment of the image processing method of the present application;
FIG. 5 is a flow chart of a second embodiment of the image processing method of the present application;
FIG. 6 is a schematic diagram of a frame of an embodiment of an image processing terminal of the present application;
FIG. 7 is a schematic diagram of a frame of an embodiment of an electronic device of the present application;
FIG. 8 is a schematic diagram of a frame of one embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects. Further, "a plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a schematic flow chart of a first embodiment of an image processing method according to the present application. Specifically, the method may include the steps of:
step S11: and acquiring an image to be registered.
The image processing method of the present application may be executed by a mobile terminal, such as a mobile phone, a tablet computer, or smart glasses. The image to be registered may be an image captured by a photographing device, for example a camera of the mobile phone or tablet, or a monitoring camera; the specific manner of acquiring the image to be registered is not limited.
In one embodiment, the image processing method of the present application may be executed in a web browser, that is, in a web side.
Step S12: and carrying out first image registration on the image to be registered by utilizing the local target image so as to obtain a local processing result.
In one implementation scenario, the local target image belongs to a first target image set having a first predetermined number of local target images. When the first image registration is performed, the first image registration is performed on all local target images in the first target image set and the image to be registered.
A general image registration method may be used to perform the first image registration on the image to be registered with the local target image, for example a gray-scale and template based algorithm or a feature-based matching method. With a feature-based matching method, a certain number of matching point pairs between the image to be registered and the local target image can be obtained, and the random sample consensus (RANSAC) algorithm can then be used to calculate the transformation parameters between the two images, yielding the local processing result. In one implementation scenario, the local processing result can be directly determined as the transformation parameters between the image to be registered and the target image; in another, the local processing result may be the pose of the terminal, i.e., the pose of the terminal in the world coordinate system established based on the local target image, obtained using the transformation parameters (e.g., a homography matrix H) between the image to be registered and the local target image.
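To make the RANSAC idea concrete, here is a deliberately simplified Python sketch (not the patent's algorithm) that estimates a 2-D translation — a degenerate transformation parameter — from matched point pairs while rejecting outliers. A real registration pipeline would estimate a full homography from four-point samples instead.

```python
import random

def ransac_translation(matches, iters=100, tol=1.0, seed=0):
    """Toy RANSAC over matched point pairs ((x1, y1), (x2, y2)):
    hypothesize a translation from one random pair, count inliers,
    and keep the model with the most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) < tol
                   and abs(m[1][1] - m[0][1] - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers
```

With three consistent matches and one gross outlier, the consensus model is the shared translation and the outlier is excluded.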
In one implementation scenario, the image to be registered and the local target image may be registered by using a local registration manner, so as to obtain a local processing result. The local registration is, for example, the first image registration described above.
Step S13: and sending the image to be registered to the cloud end so that the cloud end performs second image registration on the image to be registered by utilizing the cloud end target image to obtain a cloud end processing result.
In one implementation scenario, the cloud target image belongs to a second target image set. The second target image set has a second preset number of cloud target images. And when the second image registration is carried out, carrying out second image registration on all cloud target images in the second target image set and the image to be registered.
It can be understood that, because both the cloud and the local end can perform image registration, the execution order of steps S12 and S13 is not limited: step S12 may be performed first, step S13 may be performed first, or both may be performed simultaneously.
In one implementation scenario, because the computing power of the cloud is greater than that of the local end, the second image registration that the cloud performs on the image to be registered using the cloud target image may use an image registration algorithm requiring greater computing power than the one used locally.
In one implementation scenario, the cloud processing result is obtained by registering the image to be registered with the cloud target image in a cloud registration manner. By relying on the cloud's greater computing power, the cloud registration manner can produce a processing result more accurate than the local one.
Step S14: and obtaining a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result.
After the local processing result and the cloud processing result are obtained, at least one of them can be selected as needed to obtain the first processing result of the image to be registered. For example, to meet the terminal's response-speed requirement, the local processing result may be preferred; to meet the terminal's registration-accuracy requirement, the cloud processing result may be preferred.
In one implementation scenario, the first processing result is a first transformation parameter between the image to be registered and the final target image. The final target image is a local target image or a cloud target image.
In this way, the local processing result is obtained via the first image registration and the cloud processing result via the second image registration, so the terminal can draw on both the computing power of the cloud and that of the local end, making its image registration more flexible. Because the final processing result is based on at least one of the local and cloud processing results, even if one of them cannot be obtained, the final result can still be derived from the other, which improves the reliability of image registration.
In some application scenarios, the processing result that is obtained first can be selected as the final result, improving the speed of image registration; in other scenarios, the result from the end with better processing resources (for example, a stronger and more accurate registration capability) can be preferred, improving the accuracy of image registration.
In one implementation, at least some of the images in the first target image set and the second target image set are identical; for example, the second set may include all of the images in the first set. Because some images exist in both sets, both the cloud and the local end can register those images against the image to be registered, which improves the robustness of the image processing method.
In one implementation, the number of images in the first target image set is smaller than the number in the second. The local end then has fewer images to register against the image to be registered during the first image registration, which speeds up that registration; and since the cloud has greater processing capability, configuring more target images for the cloud enables more accurate cloud registration.
In one implementation scenario, the computing power or computing time required by the local registration manner is less than that required by the cloud registration manner. Requiring less computing power reduces the demands on the terminal's local computing capability, and requiring less computing time speeds up local registration.
Referring to fig. 2, fig. 2 is a schematic diagram illustrating a second process of the first embodiment of the image processing method of the present application. The present embodiment is a further extension of the step of acquiring an image to be registered mentioned in the above step S11, and may specifically include the following steps:
step S111: and acquiring an image frame obtained by shooting by the shooting device, wherein the image frame comprises a first image frame and a second image frame.
The photographing device is, for example, a camera module of the terminal or another image acquisition device (such as a monitoring camera). The image frames captured by the photographing device can be divided into first image frames and second image frames: the first image frames are used for image registration, and the second image frames are used for image tracking after registration. A first image frame and a second image frame may be the same frame or different frames; when they are the same, the second image frame is the first image frame.
Step S112: the first image frame is taken as an image to be registered.
The terminal may acquire the first image frame as an image to be registered for image registration.
In one embodiment, the first image frames may be used as images to be registered one by one, in the order in which the first image frames are acquired.
Thus, by taking the first image frame as the image to be registered, continuous image registration of the image frames captured by the capturing device can be achieved.
Referring to fig. 3, fig. 3 is a third flow chart of the image processing method according to the first embodiment of the application. The present embodiment is a specific extension of the above-mentioned "performing first image registration on an image to be registered with a local target image to obtain a local processing result" in step S12 of the foregoing embodiment, and includes the following steps:
step S121: at least one first target image is searched from the first target image set to serve as a local target image.
When the first image registration is performed, at least one first target image may first be retrieved from the first target image set to serve as the local target image used later to obtain the local processing result. The retrieval may be based, for example, on the degree to which the feature information of a first target image matches, or is similar to, the feature information of the image to be registered.
In one implementation scenario, this step may specifically include step S1211 and step S1212.
Step S1211: and determining the feature similarity between the image to be registered and the first target image based on the feature representations of the feature points in the image to be registered and the first target image.
In one implementation scenario, a feature extraction algorithm may be used to perform feature extraction on the image to be registered and the first target image, so as to obtain feature points in each image; the number of feature points is not specifically limited. In the present application, the feature points extracted from an image frame may include feature points obtained by feature extraction on the series of image frames in an image pyramid built from that image frame. In the present embodiment, the feature points extracted from the image frame can be regarded as lying in the same plane as the final target image.
The feature extraction algorithm is, for example, the FAST (Features from Accelerated Segment Test) algorithm, the SIFT (Scale-Invariant Feature Transform) algorithm, or the ORB (Oriented FAST and Rotated BRIEF) algorithm. In one implementation scenario, the ORB algorithm is used. After the feature points are obtained, a feature representation (for example, a feature vector) corresponding to each feature point is also obtained, so each feature point has a feature representation corresponding to it.
In one implementation scenario, the feature representation obtained by extracting features of all the first target images may be input into a "Bag of words" model as a local feature set of the first target images, for constructing a database for quickly retrieving the first target images.
Thereafter, the degree of similarity of the feature representations of the feature points in the image to be registered and each of the first target images may be calculated, for example, by calculating the distance of the feature representations of the feature points in the image to be registered and each of the first target images. In a specific implementation scenario, a feature representation obtained by extracting features of an image to be registered may be input into the word bag model, so as to quickly determine feature similarity between the image to be registered and the first target image.
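As an illustration of this retrieval step, the sketch below scores each candidate first target image by the Hamming distance between binary (ORB-style) descriptors. The function names and the simple mean-similarity score are illustrative stand-ins for a real bag-of-words index, not the patent's implementation:

```python
import numpy as np

def hamming_distance(d1, d2):
    """Hamming distance between two binary descriptors stored as uint8 arrays."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def image_similarity(query_desc, target_desc):
    """Mean normalized similarity of each query descriptor to its nearest
    target descriptor (1.0 = identical, 0.0 = all bits differ)."""
    n_bits = query_desc.shape[1] * 8
    sims = []
    for q in query_desc:
        best = min(hamming_distance(q, t) for t in target_desc)
        sims.append(1.0 - best / n_bits)
    return float(np.mean(sims))

def retrieve_local_target(query_desc, target_descs):
    """Return the index of the first target image most similar to the query,
    plus all similarity scores."""
    scores = [image_similarity(query_desc, t) for t in target_descs]
    return int(np.argmax(scores)), scores
```

In practice the descriptors would come from ORB feature extraction and the linear scan would be replaced by the bag-of-words database mentioned above.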
Step S1212: and selecting at least one first target image with the feature similarity meeting the preset similarity requirement as a local target image.
The preset similarity requirement may be, for example, that the selected image is the first target image most similar to the image to be registered among all the first target images.
In one implementation scenario, the preset similarity requirement further includes that the distance between the feature representations of the feature points of the image to be registered and those of the first target image meets a preset threshold requirement. For example, the first target image most similar to the image to be registered may be selected first, and it is then checked whether the distance between the feature representations of its feature points and those of the image to be registered meets the preset threshold requirement; if so, that first target image is used as the local target image. If not, the first target image with the second-highest similarity to the image to be registered is selected and checked in the same way, and so on.
By carrying out this similarity calculation, the first target image with the highest similarity to the image to be registered can be determined rapidly, which helps speed up the local registration at the local end.
Step S122: and registering the image to be registered and the local target image by using a local registration mode to obtain local transformation parameters between the image to be registered and the local target image.
After the local target image is determined, the image to be registered and the local target image can be registered in a local registration mode, so that the computing capacity of the terminal is fully utilized, and local transformation parameters between the image to be registered and the local target image are obtained.
In one implementation scenario, this step includes the following steps S1221 and S1222.
Step S1221: at least one set of local matching point pairs between the image to be registered and the local target image is determined based on the feature representations of the feature points in the image to be registered and the local target image.
First, feature extraction can be performed on the image to be registered and the local target image to obtain feature representations of the feature points in each. The feature points of the local target image are referred to as first feature points, and the feature points of the image to be registered as second feature points. In one implementation scenario, since feature extraction has already been performed on the image to be registered and the local target image in step S1211 above, the feature representations can be reused directly here, reducing the processing steps.
The degree of matching of feature points between the image to be registered and the local target image may then be calculated to obtain at least one set of local matching point pairs. The degree of matching of feature points may specifically be the degree of matching of the feature representations of two feature points. In one implementation scenario, the degree of matching of each feature point in the image to be registered with each feature point in the local target image may be calculated. In one implementation, the degree of matching between two feature points is based on the distance between the feature representations of the first feature point and the second feature point: the smaller the distance, the better the match, and the pair at the smallest distance may be considered the best match. In one implementation scenario, the feature representations are feature vectors, and the distance between feature representations is the distance between feature vectors. When specifically determining the local matching point pairs, at least one group of local matching point pairs can be selected in order of matching degree from high to low. The matched first feature points are called first matching points, and the matched second feature points are called second matching points.
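A minimal sketch of forming local matching point pairs from feature representations might look as follows. It assumes real-valued descriptor vectors and adds Lowe's ratio test to discard ambiguous matches; the 0.8 threshold is an illustrative choice, not taken from the source:

```python
import numpy as np

def match_points(desc_reg, desc_target, ratio=0.8):
    """Nearest-neighbour matching between the descriptors of the image to be
    registered (desc_reg) and the local target image (desc_target), using
    Euclidean distance.  Returns (index_in_reg, index_in_target) pairs."""
    pairs = []
    for i, d in enumerate(desc_reg):
        dists = np.linalg.norm(desc_target - d, axis=1)
        order = np.argsort(dists)
        # ratio test: keep a match only when the nearest descriptor is
        # clearly closer than the second nearest
        if len(order) == 1 or dists[order[0]] < ratio * dists[order[1]]:
            pairs.append((i, int(order[0])))
    return pairs
```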
Step S1222: based on at least one set of local matching point pairs, local transformation parameters are obtained.
In one implementation, a random sample consensus (RANSAC) algorithm may be used to calculate the transformation parameters between the image to be registered and the local target image based on at least one set of local matching point pairs.
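The RANSAC idea can be sketched on a deliberately simplified model — estimating a pure 2-D translation from point pairs containing outliers. A real implementation for this method would sample four pairs per iteration and fit a full homography instead:

```python
import random
import numpy as np

def ransac_translation(pairs, n_iter=200, inlier_tol=2.0, seed=0):
    """Toy RANSAC: estimate a 2-D translation from noisy point pairs.
    Each pair is ((x_target, y_target), (x_reg, y_reg))."""
    rng = random.Random(seed)
    src = np.array([p[1] for p in pairs], dtype=float)  # image to be registered
    dst = np.array([p[0] for p in pairs], dtype=float)  # local target image
    best_t, best_inliers = None, -1
    for _ in range(n_iter):
        i = rng.randrange(len(pairs))
        t = dst[i] - src[i]                       # model from a minimal sample
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = int((err < inlier_tol).sum())   # consensus set size
        if inliers > best_inliers:
            best_inliers, best_t = inliers, t
    # refit on the final consensus set for a less noisy estimate
    err = np.linalg.norm(src + best_t - dst, axis=1)
    mask = err < inlier_tol
    best_t = (dst[mask] - src[mask]).mean(axis=0)
    return best_t, mask
```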
In another implementation scenario, after at least one set of local matching point pairs is obtained, direction information for each set of local matching point pairs may be calculated. The direction information of the local matching point pair may be obtained from the directions of the first matching point and the second matching point of the local matching point pair.
In one implementation scenario, the direction information of the local matching point pair may be a difference in directions of the first matching point and the second matching point. For example, when the feature points are extracted by the ORB algorithm, the direction of the first matching point is a corner direction angle, and the direction of the second matching point is also a corner direction angle, and the direction information of the local matching point pair may be a difference between the corner direction angle of the first matching point and the corner direction angle of the second matching point. Therefore, the rotation angle of the image to be registered relative to the local target image can be obtained by calculating the direction information of a group of local matching point pairs. After the direction information of a group of local matching point pairs is obtained, the rotation angle of the image to be registered, represented by the direction information of the group of local matching point pairs, relative to the local target image can be utilized to perform image registration, and finally, the local transformation parameters between the local target image and the image to be registered are obtained.
In one implementation scenario, a first image region centered at a first matching point may be extracted from a local target image, and a second image region centered at a second matching point may be extracted from an image to be registered. Then, a first deflection angle of the first image region and a second deflection angle of the second image region are determined. Finally, the transformation parameters are obtained based on the first deflection angle and the second deflection angle, and specifically, the transformation parameters can be obtained based on the direction information of the local matching point pair and the pixel coordinate information of the first matching point and the second matching point in the local matching point pair.
In one embodiment, the first deflection angle is a directional included angle between a line connecting a centroid of the first image region and a center of the first image region and a predetermined direction (for example, an X-axis of a world coordinate system). The second deflection angle is a directional included angle between a connecting line of the centroid of the second image area and the center of the second image area and a preset direction.
In another implementation scenario, the first deflection angle θ may be directly derived by:
θ = arctan2(∑ y·I(x, y), ∑ x·I(x, y))    (1)
in the above formula (1), (x, y) represents the offset of a certain pixel point in the first image area relative to the center of the first image area, I (x, y) represents the pixel value of the pixel point, Σ represents the summation, and the summation range is the pixel point in the first image area. Similarly, the second deflection angle may be calculated in the same manner.
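A direct numpy rendering of equation (1), computing the deflection angle of a patch from its intensity centroid with offsets measured from the patch centre, might look like this:

```python
import numpy as np

def deflection_angle(patch):
    """Deflection angle of an image region via the intensity-centroid
    moments of equation (1): theta = arctan2(sum(y*I), sum(x*I)),
    with (x, y) being offsets relative to the patch centre."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= (w - 1) / 2.0              # x offset from the centre
    ys -= (h - 1) / 2.0              # y offset from the centre
    m10 = float((xs * patch).sum())  # sum of x * I(x, y)
    m01 = float((ys * patch).sum())  # sum of y * I(x, y)
    return np.arctan2(m01, m10)
```

Note that the image y-axis points downward here, as is conventional for pixel coordinates.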
In one implementation scenario, the final transformation parameters between the local target image and the image to be registered may be obtained using the direction information of a local matching point pair together with the coordinate information (e.g. pixel coordinates) of its first matching point and second matching point, thereby enabling the local transformation parameters to be calculated from a single set of local matching point pairs.
In a specific embodiment, the transformation parameters between the image to be registered and the local target image may be obtained by the following steps a and b.
Step a: an angular difference between the first deflection angle and the second deflection angle is obtained.
The angle difference is, for example, the difference between the first deflection angle and the second deflection angle.
In one implementation scenario, formula (2) for calculating the angle difference is as follows:

θ = θ_T − θ_F    (2)

wherein θ is the angle difference, θ_T is the first deflection angle (the subscript T denoting the local target image), and θ_F is the second deflection angle (the subscript F denoting the image to be registered).
Step b: and obtaining a first candidate transformation parameter based on the angle difference and the scale corresponding to the first matching point pair.
The first candidate transformation parameter is, for example, the homography matrix between the image to be registered and the local target image, calculated by formula (3):

H = H_l · H_s · H_R · H_r    (3)

wherein H is the homography matrix between the local target image and the image to be registered, i.e. the first candidate transformation parameter; H_r represents the translation of the image to be registered relative to the local target image; H_s represents the scale corresponding to the first matching point pair, i.e. the scale information when the local target image is scaled; H_R represents the rotation of the image to be registered relative to the local target image; and H_l represents the translation that resets the position after the initial translation.
In order to make the angle difference explicit, the above formula (3) may be expanded to obtain formula (4):

H = [[s·cosθ, −s·sinθ, x_T], [s·sinθ, s·cosθ, y_T], [0, 0, 1]] · [[1, 0, −x_F], [0, 1, −y_F], [0, 0, 1]]    (4)

wherein (x_T, y_T) are the pixel coordinates of the first matching point on the local target image; (x_F, y_F) are the pixel coordinates of the second matching point on the image to be registered; s is the scale corresponding to the first matching point pair, i.e. the scale corresponding to the point (x_F, y_F); and θ is the angle difference.
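The composition in formula (3) can be sketched as follows. The function builds H from one matching point pair, the scale s and the angle difference θ; it is an illustrative reading of the formula rather than the patent's exact implementation:

```python
import numpy as np

def compose_homography(pt_target, pt_reg, s, theta):
    """Build H = H_l * H_s * H_R * H_r: move the second matching point to
    the origin (H_r), scale by s (H_s), rotate by the angle difference
    theta (H_R), then translate onto the first matching point (H_l)."""
    x_t, y_t = pt_target   # first matching point, on the local target image
    x_f, y_f = pt_reg      # second matching point, on the image to be registered
    H_r = np.array([[1, 0, -x_f], [0, 1, -y_f], [0, 0, 1]], dtype=float)
    H_s = np.diag([s, s, 1.0])
    c, si = np.cos(theta), np.sin(theta)
    H_R = np.array([[c, -si, 0], [si, c, 0], [0, 0, 1]])
    H_l = np.array([[1, 0, x_t], [0, 1, y_t], [0, 0, 1]], dtype=float)
    return H_l @ H_s @ H_R @ H_r
```

By construction, H maps the second matching point exactly onto the first matching point, and nearby points according to the scale and rotation.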
Step S123: and obtaining a local processing result based on the local transformation parameters.
If the local processing result is determined to be the local transformation parameter, the local transformation parameter obtained in step S122 may be used as the local processing result.
If the local processing result is determined to be the pose of the image to be registered, that is, the pose of the terminal when shooting the image to be registered, a conversion can be performed based on the local transformation parameters to obtain that pose (the local processing result). For example, the local transformation parameters may be processed with a PnP (Perspective-n-Point) algorithm to obtain the pose of the image to be registered.
Therefore, by finding at least one first target image from the first target image set as a local target image, it is possible to calculate a local transformation parameter based on the local target image, and finally obtain a local processing result.
Referring to fig. 4, fig. 4 is a fourth flowchart of the first embodiment of the image processing method according to the present application. The present embodiment is a specific extension of step S14 of the above embodiment, and includes the following steps:
step S141: and judging whether the cloud processing result meets a preset condition.
The preset condition is, for example, an accuracy requirement on the cloud processing result, a processing-time requirement on the cloud processing result, or the like, which is not limited here.
In one embodiment, the preset condition is that the cloud processing result is received within a preset time. For example, after the image to be registered is sent to the cloud, if the cloud processing result is not received within the preset time, the cloud processing result may be considered not to meet the preset condition. By setting the preset condition to be that the cloud processing result is received within the preset time, the cloud processing result is not used when the condition is not met, preventing the terminal's response time from becoming too long.
Step S142: and responding to the condition that the preset condition is met, and taking the cloud processing result as a first processing result of the image to be registered.
The preset condition being met means that the cloud processing result can be used. At this time, the terminal can, in response to the preset condition being met, take the cloud processing result as the first processing result of the image to be registered, thereby making use of the computing capability of the cloud.
Step S143: and responding to the condition that the preset condition is not met, and taking the local processing result as a first processing result of the image to be registered.
Under the condition that the preset condition is not met, the cloud processing result cannot be used for image registration, and the terminal can respond to the condition that the preset condition is not met, and take the local processing result as a first processing result of the image to be registered, so that image registration can be continuously executed.
Therefore, by judging whether the cloud processing result meets the preset condition or not, the computing capability of the cloud can be fully utilized when the preset condition is met; when the preset condition is not met, the local processing result is used as a first processing result of the image to be registered, so that image registration can be continuously performed.
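The "received within a preset time" condition and its fallback can be sketched with a future and a timeout. Here `local_register` and `cloud_register` are hypothetical placeholder callables; a real terminal would issue an actual network request to the cloud:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutTimeout

def first_processing_result(image, local_register, cloud_register, timeout_s=0.5):
    """Prefer the cloud result when it arrives within timeout_s seconds
    (the 'preset condition'); otherwise fall back to the local result."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        cloud_future = pool.submit(cloud_register, image)
        local_result = local_register(image)  # local registration runs regardless
        try:
            return cloud_future.result(timeout=timeout_s), "cloud"
        except FutTimeout:
            cloud_future.cancel()             # no effect if already running
            return local_result, "local"
```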
In one embodiment, after obtaining the first processing result of the image to be registered, the image processing method of the present application may further include the following step S21.
Step S21: and taking the second image frames as images to be tracked in sequence, and obtaining a second processing result of the images to be tracked based on the reference processing result of the reference image frames, the images to be tracked and the image information in the reference image frames, wherein the reference image frames are image frames before the images to be tracked.
The second image frames are used as images to be tracked one by one, and the second processing results of the images to be tracked can be obtained in the order in which the second image frames are acquired. In one implementation scenario, the second processing result may be determined directly as the transformation parameters between the image to be tracked and the final target image; in another implementation scenario, the second processing result may instead be the pose of the terminal. The transformation parameters between the image to be tracked and the final target image may be obtained with the same image registration algorithm as before, and the pose of the terminal may be obtained with a general image tracking algorithm, which will not be described in detail here. In this way, the second image frames are used as images to be tracked in sequence, and image tracking of the second image frames is subsequently realized.
In one implementation, the first image frame and the second image frame are different image frames. For example, after the 1 st image frame is set as the first image frame, the following 2 nd image frame is set as the second image frame, and image tracking is performed. After the 10 th image frame is taken as the first image frame, the following 11 th image frame is taken as the second image frame for image tracking. In another implementation, at least a portion of the first image frame may be a second image frame. For example, the 10 th image frame may be regarded as the first image frame, while the 10 th image frame may also be regarded as the second image frame. By taking the first image frame as a different image frame than the second image frame or at least part of the first image frame as the second image frame, image registration may be performed for the first image frame or image tracking may be performed for the second image frame, respectively.
The image information in the image to be tracked and the reference image frame can be understood as all information obtained after the image to be tracked and the reference image frame are processed. For example, feature extraction may be performed on the image to be tracked and the reference image frame based on a feature extraction algorithm, respectively, to obtain feature information about feature points in the image to be tracked and the reference image frame, which may be regarded as image information in the image to be tracked and the reference image frame. And obtaining corresponding transformation parameters or corresponding pose variation between the image to be tracked and the reference image frame by utilizing the image information in the image to be tracked and the reference image frame. The corresponding transformation parameters or the corresponding pose change amounts can be obtained by the same image registration method or image tracking method, and will not be described in detail here.
In one implementation, the reference image frame is an image frame preceding the image to be tracked. In one implementation scenario, the reference image frame is the previous i-th frame of the image to be tracked, i being an integer greater than or equal to 1. If there is a portion of the second image frame before the second image frame that is the image to be tracked, the reference image frame may be either the first image frame or the second image frame.
In one implementation, the reference processing result is derived based on the first processing result.
In one embodiment, when the reference image frame is the first image frame, the reference image frame is the image to be registered, and the first processing result may be directly used as the reference processing result. When the first processing result is a transformation parameter of the image to be registered and the target image, the reference processing result may be a transformation parameter of the image to be registered and the target image, and the reference processing result may also be a pose of the terminal obtained based on the transformation parameter. When the first processing result is the pose of the terminal, the reference processing result can be directly determined as the pose of the terminal.
In another implementation scenario, when the reference image frame is a second image frame, the reference processing result may be determined based on the first processing result. Specifically, the relative processing result of the reference image frame with respect to its previous n image frames (n ≥ 1) and the processing result of those previous n frames may be obtained, thereby obtaining the reference processing result, where the processing result of the previous n frames is itself derived from the first processing result. For example, when the 1st image frame is a first image frame, the 2nd image frame is a second image frame, and the 2nd image frame is the reference image frame, the reference processing result may be obtained by acquiring the processing result of the 2nd image frame relative to the 1st image frame (the homography matrix between the two frames, or the pose change of the terminal between them) together with the processing result of the 1st image frame (the first processing result), so as to obtain the processing result (reference processing result) of the 2nd image frame. Thereafter, when the 3rd image frame is a second image frame and the reference image frame, the relative processing result of the 3rd image frame with respect to the 2nd image frame and the processing result of the 2nd image frame may likewise be acquired to obtain the processing result (reference processing result) of the 3rd image frame. Because the processing result of the 2nd image frame is obtained based on the first processing result, the processing result of the 3rd image frame can also be regarded as determined based on the first processing result.
In another embodiment, the processing result (reference processing result) of the 3 rd image frame may be obtained by acquiring the relative processing result of the 3 rd image frame with respect to the 1 st image frame and acquiring the first processing result of the 1 st image frame. The specific determination method may be adjusted as needed, and is not limited herein. In one embodiment, after the first processing result is obtained, a first image frame corresponding to the first processing result and each subsequent image frame are used as reference image frames, so as to realize subsequent continuous tracking of the image frames.
In one implementation scenario, the first processing result is a first transformation parameter between the image to be registered and the final target image (local target image or cloud target image), and the second processing result is a second transformation parameter between the image to be tracked and the final target image. At this time, the reference processing result obtained based on the first processing result (first transformation parameter) may be a reference transformation parameter between the reference image frame and the final target image, and then the second transformation parameter may be obtained using the reference transformation parameter and a corresponding transformation parameter between the image to be tracked and the reference image frame.
In one implementation scenario, the first processing result is the pose of the image to be registered, and the second processing result is the pose of the image to be tracked. At this time, the reference processing result obtained based on the first processing result (pose of the image to be registered) may be the pose of the reference image frame (the pose when the terminal shoots the reference image frame), and then the pose of the image to be tracked is obtained by using the pose of the reference image frame, and the corresponding pose variation amount between the image to be tracked and the reference image frame.
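Chaining reference processing results frame by frame, as described above, can be sketched as below, where every processing result is represented as a 3×3 homogeneous transform to the final target image (an assumption for illustration; the same composition applies to pose change amounts):

```python
import numpy as np

def track_sequence(first_result, relative_transforms):
    """Propagate the first processing result through a sequence of
    frame-to-frame relative transforms, yielding each image to be
    tracked's second processing result (its transform to the target)."""
    ref = first_result          # reference processing result so far
    results = []
    for rel in relative_transforms:
        ref = ref @ rel         # compose reference result with relative motion
        results.append(ref)
    return results
```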
In one implementation scenario, the first processing result is the first transformation parameters and the second processing result is the pose of the image to be tracked. In this case, the image processing method of the present application further includes: obtaining the pose of the image to be registered using the first transformation parameters, from which the pose of the image to be tracked is finally obtained.
Therefore, by setting the second processing result to a different type (the second transformation parameter or the pose of the image to be tracked), the subsequent selection can be made as needed.
In one disclosed embodiment, the step of performing a first image registration of the image to be registered using the local target image to obtain a local processing result is performed by a first thread.
At least one of the following steps is executed by a second thread: obtaining the second processing result of the image to be tracked, sending the image to be registered to the cloud, and obtaining the first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result. In addition, the first thread and the second thread are processed asynchronously, i.e. they need not execute in step. After the first processing result is obtained, second processing results can continue to be obtained, so that the second processing result of the image to be tracked is produced continuously, realizing asynchronous processing of image registration and image tracking.
In general, the image registration steps ("performing first image registration on the image to be registered using the local target image to obtain a local processing result" and "sending the image to be registered to the cloud") need a relatively long time (algorithm running time) to produce a result, whereas image tracking takes less time than image registration. By making the first thread and the second thread asynchronous, image tracking can proceed while image registration is still running, without waiting for the registration result (first processing result). The terminal can therefore obtain the tracking result (second processing result) in time, which improves the response speed of the terminal and reduces delay.
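The asynchronous split between the two threads can be sketched with a worker thread that performs the slow registration while tracking continues frame by frame. Here `register` and `track` are hypothetical placeholders for the registration and tracking routines:

```python
import queue
import threading

def run_pipeline(frames, register, track):
    """Run registration on the first frame in a worker thread (the 'first
    thread') while the main loop (standing in for the 'second thread')
    keeps producing tracking results without waiting for registration."""
    reg_out = queue.Queue()
    worker = threading.Thread(target=lambda: reg_out.put(register(frames[0])))
    worker.start()                        # slow registration, asynchronous
    tracked = [track(f) for f in frames]  # tracking is never blocked
    worker.join()
    return reg_out.get(), tracked
```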
When the image processing method of the present application is executed in a browser, that is, on the web page end, the first thread is, for example, a worker thread. By creating and using a worker thread on the web page end, the web page end can execute multithreaded tasks, which improves the speed at which the web page end runs the image processing method.
In one implementation scenario, some or all of the execution steps of the first thread or the second thread are implemented in WebAssembly (WASM). By executing part or all of those steps in WASM on the web page end, the computing power of the terminal can be fully utilized and the usage efficiency of the device improved, increasing the running speed of the whole image processing method and reducing delay.
Referring to fig. 5, fig. 5 is a flowchart illustrating a second embodiment of an image processing method according to the present application. In this embodiment, before performing the above-described "performing the first image registration on the image to be registered with the local target image to obtain the local processing result", the image processing method further includes performing the following steps with the second thread:
step S31: the first thread is initialized.
The initialization of the first thread may be a conventional thread initialization process, which is not described herein. By initializing the first thread, the steps of local image registration (step S12) may be performed subsequently with the first thread.
Step S32: and after the second thread obtains a second processing result of the image to be tracked, rendering and displaying the image to be tracked based on the second processing result of the image to be tracked.
Based on the second processing result of the image to be tracked, rendering and displaying the image to be tracked, specifically, rendering and displaying the image to be tracked according to the pose of the image to be tracked, namely, the pose when the terminal shoots the image to be tracked. It can be understood that if the second processing result is a transformation parameter between the image to be tracked and the final target image, the pose of the terminal can be obtained according to the transformation parameter; and if the second processing result is the pose of the image to be tracked, rendering and displaying the image to be tracked directly according to the pose of the image to be tracked.
Step S33: when the first thread has obtained the first processing result of the image to be registered and the image to be registered has not been displayed, render and display the image to be registered based on that first processing result.
The condition that the first thread has obtained the first processing result of the image to be registered while the image to be registered has not yet been displayed means that the image to be registered can now be rendered and displayed based on the first processing result. It can be understood that if the first processing result is a transformation parameter between the image to be registered and the target image, the pose of the terminal can be derived from the transformation parameter; if the first processing result is the pose of the image to be registered, the image to be registered is rendered and displayed directly according to that pose.
Therefore, image frames can be processed by using the second thread to render and display the images to be tracked, or to render and display the images to be registered, so that interaction with the real environment can be realized.
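The asynchronous split between the two threads can be sketched with a worker thread and message queues. This is a minimal illustration under stated assumptions: the queue-based hand-off and all names are illustrative, not the embodiment's exact mechanism.

```python
import queue
import threading

def registration_worker(frames_in, results_out):
    # First thread: runs the slower image registration without
    # blocking the second thread's tracking and rendering loop.
    while True:
        frame = frames_in.get()
        if frame is None:  # shutdown signal
            break
        results_out.put(("first_result", frame))

frames_in, results_out = queue.Queue(), queue.Queue()
first_thread = threading.Thread(target=registration_worker,
                                args=(frames_in, results_out))
first_thread.start()

# Second thread (here, the main thread) submits a frame and keeps
# tracking/rendering; it picks up the registration result whenever
# the first thread finishes, rather than waiting for it.
frames_in.put("frame-0")
frames_in.put(None)
first_thread.join()
result = results_out.get()
```

Because the second thread never blocks on the registration result, per-frame tracking and rendering continue at full rate while registration completes in the background.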
Referring to fig. 6, fig. 6 is a schematic diagram of a frame of an image processing terminal according to an embodiment of the application. The image processing terminal 60 includes an image acquisition module 61, a local registration module 62, a cloud registration module 63, and a determination module 64. The image acquisition module 61 is used for acquiring an image to be registered; the local registration module 62 is configured to perform first image registration on an image to be registered by using a local target image, so as to obtain a local processing result; the cloud registration module 63 is configured to send the image to be registered to the cloud, so that the cloud performs second image registration on the image to be registered by using the cloud target image to obtain a cloud processing result; the determination module 64 is configured to obtain a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result.
The local registration module 62 is configured to perform first image registration on an image to be registered by using a local target image to obtain a local processing result, and includes: searching at least one first target image from the first target image set to serve as a local target image; registering the image to be registered and the local target image by using a local registration mode to obtain local transformation parameters between the image to be registered and the local target image; obtaining a local processing result based on the local transformation parameters; the cloud processing result is obtained by registering the image to be registered and the cloud target image in a cloud registration mode, wherein the cloud target image is from the second target image set.
Wherein at least partial images in the first target image set and the second target image set are the same; and/or the number of images in the first target image set is less than the number of images in the second target image set; and/or the computing power or computing time required by the local registration mode is smaller than the computing power or computing time required by the cloud registration mode.
Wherein the local registration module 62 is configured to find at least one first target image from the first target image set as a local target image, and includes: determining feature similarity between the image to be registered and the first target image based on feature representations of feature points in the image to be registered and the first target image; selecting at least one first target image with characteristic similarity meeting the preset similarity requirement as a local target image; the local registration module 62 is configured to register the image to be registered and the local target image by using a local registration manner, so as to obtain local transformation parameters between the image to be registered and the local target image, including: determining at least one group of local matching point pairs between the image to be registered and the local target image based on the feature representations of the feature points in the image to be registered and the local target image; based on at least one set of local matching point pairs, local transformation parameters are obtained.
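The step of obtaining local transformation parameters from at least one set of local matching point pairs can be illustrated with a direct linear transform (DLT) over four or more pairs. This is a sketch under stated assumptions: the transformation is modeled as a homography, and outlier rejection (e.g. RANSAC), which a practical registration step would need for mismatched pairs, is omitted.

```python
import numpy as np

def homography_from_matches(src_pts, dst_pts):
    """Estimate a 3x3 homography mapping src_pts to dst_pts from
    >= 4 matched point pairs via the direct linear transform (DLT).
    Illustrative sketch; the embodiment does not fix a solver."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each pair contributes two linear constraints on the 9
        # homography entries (up to scale).
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraints.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

For well-conditioned results on real feature matches, the point coordinates are usually normalized before the SVD and a robust estimator wraps this solver.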
The determination module 64 is configured to obtain a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result, and includes: responding to the condition that the preset condition is met, and taking the cloud processing result as a first processing result of the image to be registered; and responding to the condition that the preset condition is not met, and taking the local processing result as a first processing result of the image to be registered.
The preset condition is that the cloud processing result is received within a preset time.
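The decision rule above (prefer the cloud result when it arrives within the preset time, otherwise fall back to the local result) can be sketched as follows. The function names and the thread-pool mechanism are assumptions for illustration, not the embodiment's prescribed implementation.

```python
import concurrent.futures as cf

def first_processing_result(local_register, cloud_register, timeout_s):
    """Run both registrations concurrently; return the (typically more
    accurate) cloud result if it arrives within timeout_s, i.e. the
    preset condition is met, else return the local result."""
    with cf.ThreadPoolExecutor(max_workers=2) as pool:
        local_future = pool.submit(local_register)
        cloud_future = pool.submit(cloud_register)
        try:
            return cloud_future.result(timeout=timeout_s), "cloud"
        except cf.TimeoutError:
            # Preset condition not met: fall back to the local result.
            return local_future.result(), "local"
```

Note that on timeout this sketch still waits for the cloud call to finish when the executor shuts down; a real implementation would cancel or detach the pending request.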
The image acquisition module 61 is configured to acquire an image to be registered, and specifically to: acquire image frames captured by a shooting device, where the image frames include a first image frame and a second image frame; and take the first image frame as the image to be registered.
The image processing terminal 60 further includes an image tracking module, where the image tracking module is configured to sequentially use the second image frame as an image to be tracked, and obtain a second processing result of the image to be tracked based on a reference processing result of the reference image frame, the image to be tracked, and image information in the reference image frame, where the reference image frame is an image frame before the image to be tracked, and the reference processing result is determined based on the first processing result.
The step of performing first image registration on the image to be registered by using the local target image to obtain a local processing result is performed by a first thread. At least one of the following steps is performed by a second thread: the step of obtaining a second processing result of the image to be tracked based on a reference processing result of the reference image frame, the image to be tracked, and image information in the reference image frame; the step of sending the image to be registered to the cloud; and the step of obtaining a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result. The first thread and the second thread are processed asynchronously.
The first processing result is a first transformation parameter between the image to be registered and a final target image, the final target image is a local target image or a cloud target image, and the second processing result is a second transformation parameter between the image to be tracked and a reference image frame; or the first processing result is the pose of the image to be registered, and the second processing result is the pose of the image to be tracked; or, the first processing result is a first transformation parameter, the second processing result is the pose of the image to be tracked, and the method further comprises executing the following steps by using the second thread: and obtaining the pose of the image to be registered by using the first transformation parameters.
Wherein, before the above-mentioned local registration module 62 is configured to perform the first image registration on the image to be registered by using the local target image to obtain the local processing result, the image processing method of the present application further includes performing the following steps by using the second thread: initializing a first thread; and/or the method further comprises performing, with the second thread, at least one of: rendering and displaying the image to be tracked based on the second processing result of the image to be tracked after the second thread obtains the second processing result of the image to be tracked; and rendering and displaying the image to be registered based on the first processing result of the image to be registered under the condition that the first thread obtains the first processing result of the image to be registered and the image to be registered is not displayed.
Referring to fig. 7, fig. 7 is a schematic diagram of a frame of an electronic device according to an embodiment of the application. The electronic device 70 comprises a memory 701 and a processor 702 coupled to each other, the processor 702 being adapted to execute program instructions stored in the memory 701 to implement the steps of any of the image processing method embodiments described above. In one particular implementation scenario, the electronic device 70 may include, but is not limited to, a microcomputer and a server; the electronic device 70 may also be a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
In particular, the processor 702 is adapted to control itself and the memory 701 to implement the steps of any of the image processing method embodiments described above. The processor 702 may also be referred to as a CPU (Central Processing Unit). The processor 702 may be an integrated circuit chip with signal processing capabilities. The processor 702 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 702 may be jointly implemented by multiple integrated circuit chips.
Referring to fig. 8, fig. 8 is a schematic diagram illustrating a frame of an embodiment of a computer readable storage medium according to the present application. The computer readable storage medium 80 stores program instructions 801 that can be executed by a processor, the program instructions 801 being for implementing the steps of any of the image processing method embodiments described above.
According to the above scheme, a local processing result is obtained through the first image registration and a cloud processing result is obtained through the second image registration, so that the terminal can use both the computing capacity of the cloud and its own local computing capacity, and can thereby obtain a processing result quickly or obtain an accurate registration result when running the image registration algorithm.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The foregoing descriptions of the various embodiments emphasize the differences between them; for parts that are the same or similar, the embodiments may refer to one another, which is not repeated herein for brevity.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.

Claims (12)

1. An image processing method, comprising:
the terminal acquires an image to be registered;
performing first image registration on the image to be registered by using a local target image to obtain a local processing result; and
the image to be registered is sent to a cloud end, so that the cloud end performs second image registration on the image to be registered by utilizing a cloud end target image to obtain a cloud end processing result;
obtaining a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result;
the acquiring the image to be registered comprises:
acquiring an image frame shot by a shooting device, wherein the image frame comprises a first image frame and a second image frame;
taking the first image frame as the image to be registered;
the method further comprises the steps of:
sequentially taking a second image frame as an image to be tracked, and obtaining a second processing result of the image to be tracked based on a reference processing result of a reference image frame, the image to be tracked and image information in the reference image frame, wherein the reference image frame is the image frame before the image to be tracked, and the reference processing result is a reference transformation parameter between the reference image frame and the target image determined based on the first processing result or a pose of a terminal when the terminal shoots the reference image frame;
The first processing result is a first transformation parameter between the image to be registered and a final target image, the final target image is the local target image or a cloud target image, and the second processing result is a second transformation parameter between the image to be tracked and the reference image frame; or the first processing result is the pose of the terminal when shooting the image to be registered, and the second processing result is the pose of the terminal when shooting the image to be tracked; or the first processing result is the first transformation parameter, and the second processing result is the pose of the terminal when the terminal shoots the image to be tracked.
2. The method of claim 1, wherein the performing a first image registration of the image to be registered with a local target image to obtain a local processing result comprises:
searching at least one first target image from the first target image set to serve as the local target image;
registering the image to be registered and the local target image by using a local registration mode to obtain local transformation parameters between the image to be registered and the local target image;
obtaining the local processing result based on the local transformation parameters;
and/or the cloud processing result is obtained by registering the image to be registered and the cloud target image by the cloud in a cloud registration mode, wherein the cloud target image is from a second target image set.
3. The method of claim 2, wherein at least a portion of the images in the first set of target images and the second set of target images are identical;
and/or the number of images in the first target image set is less than the number of images in the second target image set;
and/or the computing power or computing time required by the local registration mode is smaller than the computing power or computing time required by the cloud registration mode.
4. A method according to claim 2 or 3, wherein said finding at least one first target image from a first set of target images as the local target image comprises:
determining feature similarity between the image to be registered and the first target image based on feature representations of feature points in the image to be registered and the first target image;
selecting at least one first target image with the feature similarity meeting a preset similarity requirement as the local target image;
and/or, registering the image to be registered and the local target image by using a local registration mode to obtain local transformation parameters between the image to be registered and the local target image, including:
determining at least one set of local matching point pairs between the image to be registered and the local target image based on feature representations of feature points in the image to be registered and the local target image;
and obtaining the local transformation parameters based on the at least one group of local matching point pairs.
5. A method according to any one of claims 1 to 3, wherein the obtaining a first processing result of the image to be registered based on at least one of the local processing result and a cloud processing result comprises:
responding to the condition that a preset condition is met, and taking the cloud processing result as a first processing result of the image to be registered;
and responding to the condition that the preset condition is not met, and taking the local processing result as a first processing result of the image to be registered.
6. The method of claim 5, wherein the predetermined condition is receipt of the cloud processing result within a predetermined time.
7. A method according to any one of claims 1 to 3, wherein the step of first image registering the image to be registered with a local target image to obtain a local processing result is performed by a first thread;
the step of obtaining a second processing result of the image to be tracked, the step of sending the image to be registered to the cloud end, the step of obtaining a first processing result of the image to be registered based on at least one of the local processing result and the cloud end processing result, and the step of obtaining the second processing result of the image to be registered, wherein the at least one step is executed by a second thread;
wherein the first thread and the second thread are asynchronously processed.
8. The method of claim 7, wherein the first processing result is the first transformation parameter, the second processing result is a pose of the image to be tracked, and the method further comprises performing the following steps with the second thread: and obtaining the pose of the image to be registered by using the first transformation parameters.
9. The method of claim 7, wherein prior to said first image registration of said image to be registered with said local target image to obtain a local processing result, said method further comprises performing the following steps with said second thread:
Initializing the first thread;
and/or the method further comprises performing, with the second thread, at least one of the following steps:
rendering and displaying the image to be tracked based on the second processing result of the image to be tracked after the second thread obtains the second processing result of the image to be tracked;
and rendering and displaying the image to be registered based on the first processing result of the image to be registered under the condition that the first thread obtains the first processing result of the image to be registered and the image to be registered is not displayed.
10. An image processing terminal, characterized by comprising:
the image acquisition module is used for acquiring an image to be registered;
the local registration module is used for carrying out first image registration on the image to be registered by utilizing a local target image so as to obtain a local processing result; and
the cloud registration module is used for sending the image to be registered to a cloud so that the cloud performs second image registration on the image to be registered by using a cloud target image to obtain a cloud processing result;
the determining module is used for obtaining a first processing result of the image to be registered based on at least one of the local processing result and the cloud processing result;
The image acquisition module is configured to acquire an image to be registered, and includes: acquiring an image frame shot by a shooting device, wherein the image frame comprises a first image frame and a second image frame; taking the first image frame as the image to be registered;
the image tracking module is used for taking a second image frame as an image to be tracked in sequence, and obtaining a second processing result of the image to be tracked based on a reference processing result of a reference image frame and image information in the image to be tracked and the reference image frame, wherein the reference image frame is the image frame before the image to be tracked, and the reference processing result is a reference transformation parameter between the reference image frame and the target image determined based on the first processing result or a pose of a terminal when the terminal shoots the reference image frame;
the first processing result is a first transformation parameter between the image to be registered and a final target image, the final target image is the local target image or a cloud target image, and the second processing result is a second transformation parameter between the image to be tracked and the reference image frame; or the first processing result is the pose of the terminal when shooting the image to be registered, and the second processing result is the pose of the terminal when shooting the image to be tracked; or the first processing result is the first transformation parameter, and the second processing result is the pose of the terminal when the terminal shoots the image to be tracked.
11. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the image processing method of any one of claims 1 to 9.
12. A computer readable storage medium having stored thereon program instructions, which when executed by a processor, implement the image processing method of any of claims 1 to 9.
CN202110713177.6A 2021-06-25 2021-06-25 Image processing method, related terminal, device and storage medium Active CN113409365B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110713177.6A CN113409365B (en) 2021-06-25 2021-06-25 Image processing method, related terminal, device and storage medium


Publications (2)

Publication Number Publication Date
CN113409365A CN113409365A (en) 2021-09-17
CN113409365B true CN113409365B (en) 2023-08-25

Family

ID=77679458


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106328148A (en) * 2016-08-19 2017-01-11 上汽通用汽车有限公司 Natural speech recognition method, natural speech recognition device and natural speech recognition system based on local and cloud hybrid recognition
CN110276257A (en) * 2019-05-20 2019-09-24 阿里巴巴集团控股有限公司 Face identification method, device, system, server and readable storage medium storing program for executing
CN110728705A (en) * 2019-09-24 2020-01-24 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111091590A (en) * 2019-12-18 2020-05-01 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111125409A (en) * 2019-12-04 2020-05-08 浙江大华技术股份有限公司 Control method and device of access control system and access control system
CN111739069A (en) * 2020-05-22 2020-10-02 北京百度网讯科技有限公司 Image registration method and device, electronic equipment and readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7106891B2 (en) * 2001-10-15 2006-09-12 Insightful Corporation System and method for determining convergence of image set registration
US9195872B2 (en) * 2013-02-15 2015-11-24 Samsung Electronics Co., Ltd. Object tracking method and apparatus
US10249047B2 (en) * 2016-09-13 2019-04-02 Intelligent Fusion Technology, Inc. System and method for detecting and tracking multiple moving targets based on wide-area motion imagery


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Zhanqiang. Research on High-Resolution Remote Sensing Image Registration Technology. China Master's Theses Full-text Database, Engineering Science and Technology II. 2019, pp. 2-3, 9, 11, 15, 25-26, 43-44. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant