WO2023151214A1 - Image generation method, system, electronic device, storage medium and product - Google Patents

Image generation method, system, electronic device, storage medium and product

Info

Publication number
WO2023151214A1
WO2023151214A1 PCT/CN2022/100540 CN2022100540W
Authority
WO
WIPO (PCT)
Prior art keywords
image frame
feature points
image
matching
rotation
Prior art date
Application number
PCT/CN2022/100540
Other languages
English (en)
French (fr)
Inventor
蒋海峰
Original Assignee
上海闻泰信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海闻泰信息技术有限公司
Publication of WO2023151214A1 publication Critical patent/WO2023151214A1/zh

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/60: Rotation of whole images or parts thereof
    • G06T3/608: Rotation of whole images or parts thereof by skew deformation, e.g. two-pass or three-pass rotation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/90: Dynamic range modification of images or parts thereof
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/194: Segmentation; Edge detection involving foreground-background segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Definitions

  • the present disclosure relates to an image generation method, system, electronic device, storage medium and product.
  • in the imaging technology of electronic devices, images are synthesized from the features of all captured frames; the synthesized image may therefore appear unnatural, or show blurred edges, artifacts and overlapping, which seriously affects the user's perception, brings inconvenience to the user, and degrades the user experience.
  • an image generating method, system, electronic device, storage medium, and product are provided.
  • An image generation method comprising:
  • solving the rotation-translation matrix based on the matching feature points, and otherwise solving it based on a default method, includes: if there are matching feature points, using the RANSAC method on the matched feature points to obtain the rotation-translation matrix; otherwise, using the ECC method to iteratively minimize the error between the foreground areas of the first image frame and the second image frame to obtain the rotation-translation matrix.
  • the foreground areas of the first image frame and the second image frame each include at least one block, and the rotation-translation matrix corresponding to each block is solved according to the matched feature points in each block of the two foreground areas.
  • the correcting of the foreground area of the second image frame based on the rotation-translation matrix includes: correcting each block of the second image frame based on the rotation-translation matrix corresponding to that block.
  • the performing of feature matching between the feature points of the first image frame and those of the second image frame includes: applying a first method to perform a primary matching of the feature points of the two frames; and, on the basis of the primary matching, applying a second method to perform a secondary matching of the feature points of the first image frame and the second image frame.
  • An image generation system comprising:
  • a segmentation module, configured to obtain a first image frame and a second image frame and to perform image segmentation on each, so as to obtain the foreground areas of the first image frame and the second image frame;
  • a judging module, configured to judge whether there are matching feature points between the foreground area of the first image frame and the foreground area of the second image frame;
  • a solving module, configured to solve the rotation-translation matrix based on the matching feature points if such points exist, and otherwise to solve the rotation-translation matrix based on a default method;
  • a synthesis module, configured to correct the foreground area of the second image frame based on the rotation-translation matrix, and to generate a composite image from the first image frame and the corrected second image frame.
  • the solving module is configured to use the RANSAC method on the matching feature points to obtain the rotation-translation matrix if such points exist, and otherwise to use the ECC method to iteratively minimize the error between the foreground areas of the first image frame and the second image frame, obtaining the rotation-translation matrix.
  • the foreground areas of the first image frame and the second image frame each include at least one block, and the module solves the rotation-translation matrix corresponding to each block according to the matching feature points in each block of the two foreground areas.
  • the synthesis module is configured to correct each block of the second image frame based on the rotation-translation matrix corresponding to that block.
  • the synthesis module is configured to acquire the feature points of the first image frame and the feature points of the second image frame, and to perform feature matching between them.
  • the synthesis module is configured to apply a first method to perform a primary matching of the feature points of the first and second image frames, and, on that basis, to apply a second method to perform a secondary matching of those feature points.
  • An electronic device comprising one or more processors and a memory, the memory storing at least one instruction, at least one program, a code set or an instruction set, which is loaded by the one or more processors to execute the steps of any one of the image generation methods described above.
  • One or more non-volatile computer-readable storage media storing computer-readable instructions which, when executed by one or more processors of a mobile terminal, cause the mobile terminal to execute the steps of any one of the image generation methods described above.
  • FIG. 1 is a schematic diagram of a scene of an image generation method provided by one or more embodiments of the present disclosure.
  • FIG. 2 is a schematic flowchart of an image generation method provided by one or more embodiments of the present disclosure.
  • FIG. 3 is a structural block diagram of an image generation system provided by one or more embodiments of the present disclosure.
  • FIG. 4 is an internal structural diagram of an electronic device provided by one or more embodiments of the present disclosure.
  • terms such as "first" and "second" in the specification and claims of the present disclosure are used to distinguish different objects, not to describe a specific order of objects.
  • the "first camera" and "second camera" are used to distinguish different cameras, not to describe a specific order of the cameras.
  • words such as "exemplary" or "for example" are used as examples, instances or illustrations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present disclosure shall not be construed as preferred or advantageous over other embodiments or designs; rather, the use of such words is intended to present related concepts in a concrete manner. In addition, in the description of the embodiments of the present disclosure, unless otherwise specified, "plurality" means two or more.
  • the image generation method provided in the present disclosure can be applied to the application environment shown in FIG. 1 .
  • the image generating method is applied in an image generating system.
  • the image generating system includes a terminal 102 and a server 104, where the terminal 102 communicates with the server 104 through a network.
  • the terminal 102 may be, but is not limited to, a personal computer, notebook computer, smart phone, tablet computer or portable wearable device, and the server 104 may be implemented by an independent server or by a server cluster composed of multiple servers.
  • FIG. 2 is a flowchart of the steps of an image generation method provided by one or more embodiments of the present disclosure.
  • This embodiment uses a mobile terminal as an example for illustration. It can be understood that this method can also be applied to a server, or a system including a terminal and a server, and can be implemented through interaction between the terminal and the server.
  • Step S201: the image generation system obtains the first image frame and the second image frame, and performs image segmentation on each of them, so as to obtain the foreground areas of the first image frame and the second image frame.
  • the image generation system separates the foreground and background of the two acquired frames, where the foreground includes but is not limited to people, still life and objects, and the background includes but is not limited to landscapes and architecture.
  • the first and second image frames may be captured by an electronic device such as a smart phone or a tablet computer.
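The disclosure performs this split with a deep-learning semantic segmentation model; purely as a toy stand-in to make the foreground/background idea concrete, a mask can be sketched by intensity thresholding. The function name and threshold below are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def foreground_mask(img, thresh=0.5):
    """Toy foreground/background split by intensity threshold.

    A stand-in for the deep-learning semantic segmentation the
    disclosure actually uses: True marks foreground pixels.
    """
    return np.asarray(img, float) > thresh

img = np.zeros((6, 6))
img[2:4, 2:4] = 0.9          # a bright "subject" on a dark background
mask = foreground_mask(img)
print(int(mask.sum()))  # 4
```

In practice the mask would come from the semantic model's per-pixel class predictions rather than a fixed threshold.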
  • the foreground areas of the first image frame and the second image frame each include at least one block, and the rotation-translation matrix corresponding to each block is solved according to the matching feature points in each block of the two foreground areas.
  • the image generation system segments the first image frame and the second image frame using a deep-learning semantic segmentation model; the segmented image includes at least one block, and the number of pixels a block contains is associated with the foreground regions of the first and second image frames. The feature points of the two foreground regions are matched, and each block is rotated and translated so that the two are aligned.
  • correcting the foreground area of the second image frame based on the rotation-translation matrix includes: the image generation system correcting each block of the second image frame based on the rotation-translation matrix corresponding to that block.
  • the image generating system uses the first image frame as the reference frame, so that the second image frame is corrected toward the first image frame.
  • the blocks of the second image frame are corrected based on the feature points to achieve a better visual impression.
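The per-block correction amounts to estimating a rotation-translation (rigid) transform from the matched point pairs of a block. The disclosure does not specify the solver; a standard least-squares (Procrustes/Kabsch) solution can be sketched as follows, with all names being illustrative:

```python
import numpy as np

def solve_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ≈ R @ src + t.

    src, dst: (N, 2) arrays of matched feature-point coordinates
    from corresponding blocks of the two foreground areas.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: points rotated by 30 degrees and shifted by (5, -2).
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])
dst = src @ R_true.T + np.array([5.0, -2.0])
R, t = solve_rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [5.0, -2.0]))  # True True
```

Correcting a block then means applying the inverse of the recovered transform to its pixels so the second frame aligns with the first.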
  • Step S202: the image generation system judges whether there are matching feature points between the foreground area of the first image frame and the foreground area of the second image frame.
  • the image generating system judges whether feature points are present in the foregrounds of the first and second image frames.
  • feature points can be understood simply as the more salient points of an image frame, such as contour points, bright points in darker areas, dark points in brighter areas, and Harris corner points.
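To make "Harris corner points" concrete, the corner response can be computed from image gradients alone. This is an illustrative NumPy sketch of the classic Harris measure, not a detector the disclosure mandates; the box filter and constant k are common textbook choices:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response for a grayscale image (2-D float array).

    Larger positive values indicate corner-like points; strongly
    negative values indicate edges.
    """
    img = np.asarray(img, float)
    Iy, Ix = np.gradient(img)

    def box3(a):
        # Crude 3x3 box filter used to smooth the structure tensor.
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# A bright square on a dark background: the response is positive near
# the square's corners and negative along its straight edges.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)
print(peak)
```

A real pipeline would additionally apply non-maximum suppression and a threshold to turn the response map into a discrete feature-point list.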
  • Step S203: if there are matching feature points, the image generation system solves the rotation-translation matrix based on them; otherwise it solves the rotation-translation matrix based on a default method.
  • if there are matching feature points, the image generation system adjusts the second image frame based on RANSAC; if there are none, the image generation system iterates using the image registration method that maximizes the enhanced correlation coefficient (ECC).
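The ECC fallback registers the two foregrounds directly from pixel intensities, without feature points. As a heavily simplified stand-in, the sketch below finds a translation by scanning a small window of shifts and minimizing squared error; real ECC instead iteratively maximizes the enhanced correlation coefficient over a parametric warp. All names here are assumptions:

```python
import numpy as np

def register_translation(ref, mov, search=5):
    """Find the integer (dy, dx) shift minimising the squared error
    between a reference foreground patch and a moving patch.

    A toy stand-in for ECC-style intensity-based registration: it
    scans a small window of shifts instead of iterating on the
    enhanced correlation coefficient.
    """
    ref = np.asarray(ref, float)
    best_shift, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            err = np.sum((ref - shifted) ** 2)
            if err < best_err:
                best_shift, best_err = (dy, dx), err
    return best_shift

ref = np.zeros((16, 16)); ref[4:8, 4:8] = 1.0
mov = np.zeros((16, 16)); mov[6:10, 5:9] = 1.0   # ref shifted by (2, 1)
print(register_translation(ref, mov))  # (-2, -1)
```

Unlike this brute-force scan, ECC's objective is normalized, which is why it tolerates contrast and brightness changes between the two frames.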
  • Step S204: the image generating system corrects the foreground area of the second image frame based on the rotation-translation matrix, and generates a composite image from the first image frame and the corrected second image frame.
  • the image generating system corrects the second image frame according to the corresponding rules, and the corrected second image frame is combined with the first image frame into a new image frame.
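The synthesis in Step S204 can be pictured as blending the corrected foreground of the second frame into the first frame under a segmentation mask. The disclosure does not specify the blending rule; the mask-based weighted merge below, including the alpha weight, is an illustrative assumption:

```python
import numpy as np

def composite(frame1, frame2_corrected, fg_mask, alpha=0.5):
    """Blend the corrected foreground of frame 2 into frame 1.

    frame1, frame2_corrected: (H, W) or (H, W, C) float arrays
    fg_mask: boolean (H, W) array, True where frame 2's foreground lies
    alpha: blending weight for frame 2's foreground (assumed, not
           specified by the disclosure)
    """
    out = np.array(frame1, dtype=float, copy=True)
    out[fg_mask] = (1 - alpha) * out[fg_mask] + alpha * frame2_corrected[fg_mask]
    return out

frame1 = np.zeros((4, 4))
frame2 = np.full((4, 4), 2.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
result = composite(frame1, frame2, mask)
print(result[1, 1], result[0, 0])  # 1.0 0.0
```

Feathering the mask edge (a soft alpha ramp) is a common refinement to avoid the visible seams, artifacts and ghosting the background section describes.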
  • solving the rotation-translation matrix based on the matching feature points, and otherwise based on the default method, includes:
  • if there are matching feature points, the image generation system uses the RANSAC method on them to obtain the rotation-translation matrix; otherwise the image generation system uses the ECC method to iteratively minimize the error between the foreground areas of the first image frame and the second image frame, obtaining the rotation-translation matrix.
  • RANSAC is short for RANdom SAmple Consensus. It iteratively estimates the parameters of a mathematical model from an observed data set that contains outliers; for example, fitting a 2-dimensional straight line to a set of observations. Assuming the observations contain inliers, which lie approximately on a line, and outliers, which lie far from it, RANSAC can, with sufficiently high probability, produce a model computed from inliers only. ECC is the enhanced correlation coefficient, whose advantage is invariance to photometric distortions in contrast and brightness.
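The line-fitting example just described can be sketched directly. The iteration count, inlier tolerance and seed below are illustrative choices, not values from the disclosure:

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Fit y = a*x + b to points containing outliers via RANSAC.

    Repeatedly samples two points, forms the line through them, and
    keeps the line supported by the largest inlier set.
    """
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:          # skip vertical candidate lines
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b), inliers
    return best, best_inliers

# 20 exact inliers on y = 2x + 1 plus 8 gross outliers.
pts = [(x, 2 * x + 1) for x in range(20)] + [(3, 40), (7, -12), (11, 90),
                                             (2, 55), (15, -3), (18, 70),
                                             (5, 33), (9, -20)]
(a, b), inliers = ransac_line(pts)
print(round(a, 6), round(b, 6), len(inliers))  # 2.0 1.0 20
```

When estimating the rotation-translation matrix instead of a line, the same loop samples the minimal number of point correspondences per iteration and scores each candidate transform by its inlier count.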
  • the image generation system acquires the feature points of the first image frame and of the second image frame, and performs feature matching between them.
  • the feature points of the first and second image frames may be points whose brightness differs from their surroundings by more than a preset threshold, points whose color change exceeds a preset threshold, and corner points.
  • the image generation system matches the feature points of the two image frames so that the images are convenient for subsequent operations.
  • the feature matching between the feature points of the first image frame and those of the second image frame includes:
  • the image generation system applying a first method to perform a primary matching of the feature points of the first and second image frames, and, on the basis of the primary matching, applying a second method to perform a secondary matching of those feature points.
  • the image generation system performs a precise secondary matching on the matched feature points to achieve a better visual effect.
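One plausible reading of the coarse-then-fine scheme is nearest-neighbour matching of descriptors followed by a Lowe-style ratio test that prunes ambiguous pairs. The disclosure names neither method, so the functions and thresholds below are assumptions:

```python
import numpy as np

def match_two_stage(desc1, desc2, ratio=0.75):
    """Primary matching: nearest neighbour by Euclidean distance.
    Secondary matching: keep a pair only if its best distance is
    clearly smaller than the second-best (ratio test).

    desc1: (N, D) descriptors of frame 1; desc2: (M, D) of frame 2.
    Returns a list of (i, j) index pairs, i into desc1, j into desc2.
    """
    d1 = np.asarray(desc1, float)
    d2 = np.asarray(desc2, float)
    # All pairwise distances, shape (N, M).
    dists = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dists):
        order = np.argsort(row)
        best, second = row[order[0]], row[order[1]]
        if best < ratio * second:      # keep unambiguous matches only
            matches.append((i, int(order[0])))
    return matches

# Frame-2 descriptors are slightly perturbed copies of frame 1's.
desc1 = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
desc2 = np.array([[0.1, 0.0], [10.0, 0.1], [0.0, 9.9]])
print(match_two_stage(desc1, desc2))  # [(0, 0), (1, 1), (2, 2)]
```

The surviving pairs are exactly the matched feature points fed to the RANSAC solver in Step S203.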
  • by separating the foreground from the background and synthesizing the images in a targeted manner, the image generation method provided in the present disclosure can synthesize high-definition images. Compared with traditional synthesis methods, it reduces failures to synthesize normally, and reduced post-synthesis definition, caused by large differences in definition between the foreground and background of the image, thereby improving the user experience.
  • although the steps in the flowchart of FIG. 2 are displayed sequentially according to the arrows, they are not necessarily executed in that order. Unless otherwise specified herein, there is no strict restriction on their order of execution, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 2 may include multiple sub-steps or stages, which are not necessarily executed at the same time but may be executed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with at least part of the other steps, or of the sub-steps or stages of other steps.
  • an embodiment of the present disclosure also provides an image generation system. The system embodiment corresponds to the foregoing method embodiment; for ease of reading, details of the method embodiment are not repeated one by one here, but it should be clear that the system in this embodiment can correspondingly implement all of the content of the foregoing method embodiment.
  • FIG. 3 is a structural block diagram of an image generation system provided by an embodiment of the present disclosure. As shown in FIG. 3 , the image generation system 300 provided by this embodiment includes:
  • a segmentation module 310, configured to obtain a first image frame and a second image frame and to perform image segmentation on each, so as to obtain the foreground areas of the first image frame and the second image frame;
  • a judging module 320, configured to judge whether there are matching feature points between the foreground area of the first image frame and the foreground area of the second image frame;
  • a solving module 330, configured to solve the rotation-translation matrix based on the matching feature points if such points exist, and otherwise to solve the rotation-translation matrix based on a default method;
  • a synthesis module 340, configured to correct the foreground area of the second image frame based on the rotation-translation matrix, and to generate a composite image from the first image frame and the corrected second image frame.
  • the solving module 330 is configured to use the RANSAC method on the matching feature points to obtain the rotation-translation matrix if such points exist, and otherwise to use the ECC method to iteratively minimize the error between the foreground areas of the first image frame and the second image frame, obtaining the rotation-translation matrix.
  • the foreground areas of the first image frame and the second image frame each include at least one block, and the module solves the rotation-translation matrix corresponding to each block according to the matching feature points in each block of the two foreground areas.
  • the synthesis module 340 is configured to correct each block of the second image frame based on the rotation-translation matrix corresponding to that block.
  • the synthesis module 340 is configured to acquire the feature points of the first image frame and the feature points of the second image frame, and to perform feature matching between them.
  • the synthesis module 340 is configured to apply a first method to perform a primary matching of the feature points of the first and second image frames, and, on that basis, to apply a second method to perform a secondary matching of those feature points.
  • the image generating system provided in this embodiment can execute the image generating method provided in the above method embodiment; its implementation principle and technical effect are similar and are not repeated here.
  • Each module in the above image generation system can be realized wholly or partly by software, hardware, or a combination thereof.
  • the above modules can be embedded in, or independent of, one or more processors of the computer device in hardware form, or stored in the memory of the computer device in software form, so that one or more processors can call them and execute the operations corresponding to each module.
  • an electronic device is provided.
  • the electronic device may be a terminal device, and its internal structure may be as shown in FIG. 4 .
  • the electronic device includes one or more processors, a memory, a communication interface, a display screen, and an input device connected by a system bus, where the one or more processors of the electronic device provide computing and control capabilities.
  • the memory of the electronic device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and computer readable instructions.
  • the internal memory provides an environment for the execution of the operating system and computer readable instructions in the non-volatile storage medium.
  • the communication interface of the electronic device is used to communicate with an external terminal in a wired or wireless manner; the wireless manner can be realized through Wi-Fi, an operator network, near-field communication (NFC) or other technologies.
  • when the computer-readable instructions are executed by the one or more processors, the image generation method provided by the above embodiments can be realized.
  • the display screen of the electronic device may be a liquid-crystal display or an e-ink display.
  • the input device of the electronic device may be a touch layer covering the display screen, a button, trackball or touchpad provided on the casing of the electronic device, or an external keyboard, touchpad or mouse.
  • FIG. 4 is only a block diagram of part of the structure related to the disclosed solution, and does not limit the electronic device to which the disclosed solution is applied.
  • a specific electronic device may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently.
  • the image generation system provided by the present disclosure can be implemented in the form of computer-readable instructions that run on an electronic device as shown in FIG. 4.
  • the program modules constituting the image generation system, for example the segmentation module, judging module, solving module and synthesis module shown in FIG. 3, can be stored in the memory of the electronic device.
  • the computer-readable instructions constituted by these program modules cause the one or more processors to execute the steps of the image generation methods of the various embodiments of the present disclosure described in this specification.
  • the electronic device shown in FIG. 4 can obtain the first image frame and the second image frame through the segmentation module of the image generation system shown in FIG. 3, and perform image segmentation on each frame to obtain the foreground areas of the first image frame and the second image frame.
  • the judging module is configured to judge whether there are matching feature points between the foreground area of the first image frame and the foreground area of the second image frame.
  • the solving module is configured to solve the rotation-translation matrix based on the matching feature points if such points exist, and otherwise to solve the rotation-translation matrix based on the default method.
  • the synthesis module is configured to correct the foreground area of the second image frame based on the rotation-translation matrix, and generate a composite image according to the first image frame and the corrected second image frame.
  • an electronic device is provided, including a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the one or more processors, implement the following steps: obtaining a first image frame and a second image frame, and performing image segmentation on each to obtain the foreground areas of the first image frame and the second image frame; judging whether there are matching feature points between the foreground area of the first image frame and the foreground area of the second image frame; if there are matching feature points, solving the rotation-translation matrix based on them, and otherwise solving the rotation-translation matrix based on a default method; and correcting the foreground area of the second image frame based on the rotation-translation matrix and generating a composite image from the first image frame and the corrected second image frame.
  • the following steps are also implemented: if there are matching feature points, using the RANSAC method on them to obtain the rotation-translation matrix, and otherwise using the ECC method to iteratively minimize the error between the foreground areas of the first image frame and the second image frame to obtain the rotation-translation matrix.
  • the foreground areas of the first image frame and the second image frame each include at least one block, and the rotation-translation matrix corresponding to each block is solved according to the matching feature points in each block of the two foreground areas.
  • the following steps are further implemented: correcting each block of the second image frame based on the rotation-translation matrix corresponding to that block.
  • the following steps are also implemented: obtaining the feature points of the first image frame and of the second image frame, and performing feature matching between them.
  • the following steps are further implemented: applying a first method to perform a primary matching of the feature points of the first and second image frames, and, on the basis of the primary matching, applying a second method to perform a secondary matching of those feature points.
  • the electronic device provided in this embodiment can implement the image generation method provided in the above method embodiment; its implementation principle and technical effect are similar and are not repeated here.
  • One or more non-volatile computer-readable storage media storing computer-readable instructions, on which computer-readable instructions are stored, and when the computer-readable instructions are executed by one or more processors, the following steps are implemented: obtaining the first Image frame and second image frame, and image segmentation is carried out to described first image frame and second image frame respectively, to obtain the foreground area of described first image frame and second image frame; Judge described first image frame Whether there are matching feature points in the foreground area of and the foreground area of the second image frame; if there are matching feature points, the rotation-translation matrix is solved based on the matched feature points, otherwise the rotation-translation matrix is solved based on the default method; based on the rotation-translation The matrix corrects the foreground area of the second image frame, and generates a composite image according to the first image frame and the corrected second image frame.
  • the following steps are further implemented: if there is a matching feature point, then use the RANSAC method to solve the matching feature point to obtain the rotation-translation matrix , otherwise the ECC method is used to iteratively calculate the minimum error of the foreground area of the first image frame and the second image frame to obtain the rotation and translation matrix.
  • the foreground areas of the first image frame and the second image frame each include at least one block
  • the features matched in each block in the foreground area of the first image frame and the second image frame are respectively Points, solve the rotation and translation matrix corresponding to each block.
  • the following steps are further implemented: correcting each block of the second image frame based on the rotation-translation matrix corresponding to each block respectively.
  • the following steps are also implemented: acquiring the feature points of the first image frame and the feature points of the second image frame; The feature points of the second image frame are matched with the feature points of the second image frame.
  • the following steps are further implemented: applying the first method to perform a matching of the feature points of the first image frame and the feature points of the second image frame ; On the basis of the first matching, apply the second method to perform secondary matching on the feature points of the first image frame and the feature points of the second image frame.
  • The computer-readable instructions stored on the one or more non-volatile computer-readable storage media provided in this embodiment can implement the image generation method provided by the above method embodiments; the implementation principle and technical effect are similar and are not repeated here.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory or optical memory, etc.
  • Volatile memory may include random access memory (Random Access Memory, RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM).
  • The image generation method provided in the present disclosure can synthesize a high-definition image by separating the foreground from the background and compositing the frames in a targeted manner. Compared with traditional synthesis methods in the prior art, it reduces failures to composite, or reduced definition after compositing, caused by a large difference in definition between the foreground and background of the image; it improves the user experience and has strong industrial applicability.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides an image generation method, system, electronic device, storage medium, and product. The method includes: obtaining a first image frame and a second image frame, and performing image segmentation on each to obtain the foreground areas of the first and second image frames; judging whether matching feature points exist between the foreground area of the first image frame and the foreground area of the second image frame; if matching feature points exist, solving a rotation-translation matrix based on the matched feature points, and otherwise solving the rotation-translation matrix by a default method; correcting the foreground area of the second image frame based on the rotation-translation matrix, and generating a composite image from the first image frame and the corrected second image frame. By separating the foreground from the background and compositing in a targeted manner, the present disclosure can synthesize a high-definition image and improve the quality of image composition.

Description

Image generation method, system, electronic device, storage medium, and product

CROSS-REFERENCE TO RELATED APPLICATIONS

The present disclosure claims priority to Chinese patent application No. 202210132413.X, filed with the China National Intellectual Property Administration on February 14, 2022 and entitled "Image generation method, system, electronic device, storage medium, and product", the entire contents of which are incorporated herein by reference.

TECHNICAL FIELD

The present disclosure relates to an image generation method, system, electronic device, storage medium, and product.

BACKGROUND

With the development of imaging technology in electronic devices, users demand higher imaging quality, which poses new challenges for manufacturers of electronic-device image processing.

Current image generation methods in electronic-device imaging composite frames based on features of the entire picture. Composition performed this way can produce unnatural composite images, blurred edges, artifacts, and ghosting. Clearly, such imaging seriously affects the user's viewing experience and causes inconvenience.
SUMMARY

(1) Technical problem to be solved

In the prior art, images produced by electronic-device imaging technology are composited based on features of the entire picture, which may result in unnatural composite images, blurred edges, artifacts, and overlapping, seriously affecting the user's viewing experience and causing inconvenience.

(2) Technical solution

According to various embodiments disclosed in the present disclosure, an image generation method, system, electronic device, storage medium, and product are provided.

An image generation method includes:

obtaining a first image frame and a second image frame, and performing image segmentation on the first image frame and the second image frame respectively to obtain foreground areas of the first image frame and the second image frame;

judging whether matching feature points exist between the foreground area of the first image frame and the foreground area of the second image frame;

if matching feature points exist, solving a rotation-translation matrix based on the matched feature points, and otherwise solving the rotation-translation matrix by a default method;

correcting the foreground area of the second image frame based on the rotation-translation matrix, and generating a composite image from the first image frame and the corrected second image frame.
As an optional implementation of the embodiments of the present disclosure, solving the rotation-translation matrix based on the matched feature points if they exist, and otherwise by a default method, includes: if matching feature points exist, solving the matched feature points with the RANSAC method to obtain the rotation-translation matrix, and otherwise iteratively computing the minimum error between the foreground areas of the first and second image frames with the ECC method to obtain the rotation-translation matrix.

As an optional implementation, the foreground areas of the first and second image frames each include at least one block, and a rotation-translation matrix corresponding to each block is solved from the feature points matched within that block of the two foreground areas.

As an optional implementation, correcting the foreground area of the second image frame based on the rotation-translation matrix includes: correcting each block of the second image frame based on the rotation-translation matrix corresponding to that block.

As an optional implementation, before judging whether matching feature points exist between the foreground areas of the first and second image frames, the method further includes: acquiring the feature points of the first image frame and the feature points of the second image frame; and performing feature matching between them.

As an optional implementation, performing feature matching between the feature points of the first and second image frames includes: applying a first method to perform an initial matching of the feature points of the two frames; and, on the basis of the initial matching, applying a second method to perform a secondary matching.
An image generation system includes:

a segmentation module, configured to obtain a first image frame and a second image frame and to perform image segmentation on each to obtain the foreground areas of the first and second image frames;

a judgment module, configured to judge whether matching feature points exist between the foreground area of the first image frame and the foreground area of the second image frame;

a solving module, configured to solve a rotation-translation matrix based on the matched feature points if matching feature points exist, and otherwise to solve the rotation-translation matrix by a default method;

a synthesis module, configured to correct the foreground area of the second image frame based on the rotation-translation matrix and to generate a composite image from the first image frame and the corrected second image frame.

As an optional implementation of the embodiments of the present disclosure, the solving module is configured to solve the matched feature points with the RANSAC method to obtain the rotation-translation matrix if matching feature points exist, and otherwise to use the ECC method to iteratively compute the minimum error between the foreground areas of the first and second image frames to obtain the rotation-translation matrix.

As an optional implementation, the foreground areas of the first and second image frames each include at least one block, and the system solves, for each block, a rotation-translation matrix from the feature points matched within that block.

As an optional implementation, the synthesis module is configured to correct each block of the second image frame based on the rotation-translation matrix corresponding to that block.

As an optional implementation, the synthesis module is configured to acquire the feature points of the first image frame and of the second image frame, and to perform feature matching between them.

As an optional implementation, the synthesis module is configured to apply a first method to perform an initial matching of the feature points of the two frames and, on that basis, to apply a second method to perform a secondary matching.
An electronic device includes one or more processors and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set that is loaded by the one or more processors to execute the steps of the image generation method of any of the above.

One or more non-volatile computer-readable storage media storing computer-readable instructions are provided; when the instructions in the non-volatile computer-readable storage media are executed by one or more processors of a mobile terminal, the mobile terminal is enabled to execute the steps of the image generation method of any of the above.

Other features and advantages of the present disclosure will be set forth in the following description and will in part become apparent from the description or be learned by practicing the disclosure. The objects and other advantages of the disclosure are realized and attained by the structures particularly pointed out in the description, the claims, and the drawings; details of one or more embodiments of the present disclosure are set forth in the drawings and description below.

To make the above objects, features, and advantages of the present disclosure more comprehensible, optional embodiments are described in detail below with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its principles.

To describe the technical solutions of the embodiments of the present disclosure or of the prior art more clearly, the drawings needed in their description are briefly introduced below; obviously, those of ordinary skill in the art can derive other drawings from these drawings without creative effort.

FIG. 1 is a schematic diagram of a scene of the image generation method provided by one or more embodiments of the present disclosure;

FIG. 2 is a schematic flowchart of the image generation method provided by one or more embodiments of the present disclosure;

FIG. 3 is a structural block diagram of the image generation system provided by one or more embodiments of the present disclosure;

FIG. 4 is a diagram of the internal structure of the electronic device provided by one or more embodiments of the present disclosure.
DETAILED DESCRIPTION

To make the above objects, features, and advantages of the present disclosure clearer, its solutions are further described below. It should be noted that, in the absence of conflict, the embodiments of the present disclosure and the features therein may be combined with one another.

Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure, which may, however, also be implemented in ways other than those described here; obviously, the embodiments in this specification are only a part, not all, of the embodiments of the present disclosure.

The terms "first" and "second" in the specification and claims of the present disclosure are used to distinguish different objects, not to describe a particular order of objects. For example, a first camera and a second camera are distinguished as different cameras, not ordered.

In the embodiments of the present disclosure, words such as "exemplary" or "for example" indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" should not be construed as preferable to or more advantageous than other embodiments or designs; rather, such words are intended to present the relevant concepts in a concrete way. In addition, in the description of the embodiments of the present disclosure, unless otherwise stated, "multiple" means two or more.

The image generation method provided by the present disclosure can be applied in the application environment shown in FIG. 1. The method is applied in an image generation system that includes a terminal 102 and a server 104, which communicate over a network. The system obtains a first image frame and a second image frame and performs image segmentation on each to obtain the foreground areas of the first and second image frames; judges whether matching feature points exist between the foreground area of the first image frame and the foreground area of the second image frame; if matching feature points exist, solves a rotation-translation matrix based on the matched feature points, and otherwise solves the rotation-translation matrix by a default method; and corrects the foreground area of the second image frame based on the rotation-translation matrix and generates a composite image from the first image frame and the corrected second image frame (described in combination with the overall solution of claim 1). The terminal 102 may be, but is not limited to, a personal computer, laptop, smartphone, tablet, or portable wearable device; the server 104 may be implemented as an independent server or a server cluster composed of multiple servers.
Referring to FIG. 2, a flowchart of the steps of the image generation method provided by one or more embodiments of the present disclosure is shown. This embodiment is illustrated with the method applied to a mobile terminal; it can be understood that the method may also be applied to a server, or to a system including a terminal and a server and be implemented through their interaction.

Step S201: the image generation system obtains a first image frame and a second image frame, and performs image segmentation on each to obtain the foreground areas of the first and second image frames.

Specifically, the image generation system obtains the first and second frames of the image and segments each of the two frames into foreground and background, where the foreground includes but is not limited to people, still lifes, and objects, and the background includes but is not limited to scenery and buildings.
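The foreground/background split described above can be sketched as follows. The patent does not specify how the segmentation mask is produced, so this toy sketch simply takes a binary mask as input; in a real pipeline the mask would come from a segmentation model.

```python
import numpy as np

def split_foreground(frame: np.ndarray, mask: np.ndarray):
    """Split a frame into foreground and background using a binary mask.

    `mask` is assumed to be produced elsewhere (e.g. by a segmentation
    network); here it is supplied directly by the caller.
    """
    fg = np.where(mask[..., None], frame, 0)  # keep only masked pixels
    bg = np.where(mask[..., None], 0, frame)  # keep only the rest
    return fg, bg

# tiny 2x2 RGB frame with the diagonal marked as foreground
frame = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)
mask = np.array([[True, False], [False, True]])
fg, bg = split_foreground(frame, mask)
```

Because every pixel goes to exactly one side, adding the two outputs reconstructs the original frame.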
In a specific implementation, the first and second frames may be captured by an electronic device, such as a smartphone or a tablet.
In the above embodiment, the foreground areas of the first and second image frames each include at least one block, and a rotation-translation matrix corresponding to each block is solved from the feature points matched within that block of the two foreground areas.

Specifically, the image generation system segments the first and second image frames using a deep-learning-based approach, and aligns the two frames by applying rotation, translation, and similar transforms.
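Solving a rotation-translation pair from a block's matched feature points can be sketched with the standard least-squares (Kabsch/Procrustes) solution. The patent only states that a rotation-translation matrix is solved per block, so using this particular solver is an assumption made for illustration.

```python
import numpy as np

def estimate_rigid(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t with dst ~= src @ R.T + t.

    Standard Kabsch solution: center both point sets, take the SVD of the
    cross-covariance, and fix the sign to exclude reflections.
    """
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - src_mean).T @ (dst - dst_mean))
    r = (u @ vt).T
    if np.linalg.det(r) < 0:   # reflection detected; flip the last axis
        vt[-1] *= -1
        r = (u @ vt).T
    t = dst_mean - src_mean @ r.T
    return r, t

# synthetic matched points: rotate by 30 degrees, translate by (2, -1)
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src @ R.T + np.array([2.0, -1.0])
r_est, t_est = estimate_rigid(src, dst)
```

With noiseless matches the solver recovers the transform exactly; with real, noisy matches it returns the least-squares fit, which is why the method falls back on RANSAC to reject outlier matches first.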
In the above step, correcting the foreground area of the second image frame based on the rotation-translation matrix includes: the image generation system correcting each block of the second image frame based on the rotation-translation matrix corresponding to that block.

Specifically, the image generation system takes the first image frame as the reference frame, so that the second image frame is corrected toward the first. Each block of the second image frame is corrected based on its feature points to achieve a better visual result.
Step S202: the image generation system judges whether matching feature points exist between the foreground area of the first image frame and the foreground area of the second image frame.

Specifically, the image generation system checks whether the foregrounds of the first and second image frames contain feature points. A feature point can be loosely understood as a salient point in an image frame, such as a contour point, a bright point in a darker region, a dark point in a brighter region, or a Harris corner.

Step S203: if matching feature points exist, the image generation system solves a rotation-translation matrix based on the matched feature points; otherwise, it solves the rotation-translation matrix by a default method.

Specifically, if matchable feature points exist, the image generation system adjusts the second image frame based on RANSAC. If there are no feature points, the image generation system iterates with the image registration method based on maximizing the enhanced correlation coefficient (ECC).

Step S204: the image generation system corrects the foreground area of the second image frame based on the rotation-translation matrix, and generates a composite image from the first image frame and the corrected second image frame.

Specifically, the image generation system corrects the second image frame according to the corresponding rule, and the corrected second image frame is composited with the first image frame into a new image frame.
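The final compositing step can be sketched as a mask-guided paste of the corrected foreground onto the reference frame. This is a minimal stand-in: the patent does not describe the blending rule, and a real pipeline would typically feather or blend the seam rather than replace pixels outright.

```python
import numpy as np

def composite(base: np.ndarray, corrected_fg: np.ndarray,
              fg_mask: np.ndarray) -> np.ndarray:
    """Paste the corrected foreground of frame 2 onto frame 1.

    `fg_mask` marks which pixels of the output come from the corrected
    foreground; everything else is kept from the base (first) frame.
    """
    out = base.copy()
    out[fg_mask] = corrected_fg[fg_mask]
    return out

base = np.zeros((2, 2, 3), dtype=np.uint8)          # frame 1
fg = np.full((2, 2, 3), 9, dtype=np.uint8)          # corrected frame-2 fg
mask = np.array([[True, False], [False, False]])
result = composite(base, fg, mask)
```

The base frame is copied first, so the reference frame itself is left untouched.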
In the above steps, solving the rotation-translation matrix based on the matched feature points if they exist, and otherwise by a default method, includes:

if matching feature points exist, the image generation system solving the matched feature points with the RANSAC method to obtain the rotation-translation matrix, and otherwise the image generation system using the ECC method to iteratively compute the minimum error between the foreground areas of the first and second image frames to obtain the rotation-translation matrix.

Specifically, RANSAC is short for RANdom SAmple Consensus. It estimates the parameters of a mathematical model iteratively from a set of observations that contains outliers — for example, finding a suitable 2-D line in a set of observations. Assuming the observations contain inliers, which lie approximately on the line, and outliers, which lie far from it, RANSAC can produce a model computed from the inliers only, with sufficiently high probability. ECC is the enhanced correlation coefficient; this approach has the advantage of being invariant to photometric distortions in contrast and brightness.
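The 2-D line example above can be made concrete with a minimal RANSAC loop: repeatedly sample two points, fit the exact line through them, count the points within a tolerance, and keep the model with the largest consensus set.

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """Fit y = a*x + b robustly, as in the 2-D line example above."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                         # vertical pair: skip
            continue
        a = (y2 - y1) / (x2 - x1)            # exact line through the sample
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(a * x + b - y) <= tol]
        if len(inliers) > len(best_inliers): # keep the largest consensus set
            best, best_inliers = (a, b), inliers
    return best, best_inliers

# 10 points on y = 2x + 1 plus 2 gross outliers
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -5)]
model, inliers = ransac_line(pts)
```

The two outliers never enter the winning model: any sample of two inliers reproduces the true line exactly and collects all 10 inliers, which no outlier-contaminated sample can match.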
In the above steps, before judging whether matching feature points exist between the foreground areas of the first and second image frames, the method further includes:

the image generation system acquiring the feature points of the first image frame and the feature points of the second image frame, and performing feature matching between the feature points of the two frames.

Specifically, the feature points of the first and second image frames may be, for example, points where the change in brightness exceeds a preset threshold, points where the change in color exceeds a preset threshold, and corner points. The image generation system matches the feature points of the two image frames to facilitate subsequent operations.
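Matching feature points between the two frames can be sketched as nearest-neighbour search over descriptor distance. The descriptors here are plain coordinate tuples for illustration; real systems attach a descriptor (ORB, SIFT, etc.) to each feature point and compare with Hamming or L2 distance.

```python
def match_features(desc1, desc2, max_dist=2.0):
    """Greedy nearest-neighbour matching on Euclidean descriptor distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    matches = []
    for i, d1 in enumerate(desc1):
        # nearest descriptor in the second frame, with its distance
        j, best = min(enumerate(dist(d1, d2) for d2 in desc2),
                      key=lambda p: p[1])
        if best <= max_dist:                 # reject distant matches
            matches.append((i, j))
    return matches

d1 = [(0.0, 0.0), (5.0, 5.0)]
d2 = [(5.1, 5.0), (0.1, 0.0)]
m = match_features(d1, d2)
```

Each feature of the first frame pairs with its closest counterpart in the second, subject to the distance cutoff.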
In the above steps, performing feature matching between the feature points of the first and second image frames includes:

the image generation system applying a first method to perform an initial matching of the feature points of the first and second image frames, and, on the basis of the initial matching, applying a second method to perform a secondary matching of the feature points of the two frames.

Specifically, the image generation system performs a second, more precise matching on the already matched feature points to achieve a better visual result.
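One common choice for such a second, refining pass is Lowe's ratio test: a candidate match survives only if its best distance is clearly smaller than the runner-up's. The patent does not name its two matching methods, so the ratio test stands in for the "second method" purely as an example.

```python
def ratio_test(candidates, ratio=0.75):
    """Second-pass filter on candidate matches.

    `candidates` is a list of (feature index, [distances to the closest
    rival descriptors]); a match is kept only when its best distance is
    well below the second best, i.e. the match is unambiguous.
    """
    kept = []
    for idx, dists in candidates:
        ordered = sorted(dists)
        if len(ordered) >= 2 and ordered[0] < ratio * ordered[1]:
            kept.append(idx)
    return kept

# candidate matches with their two smallest descriptor distances
cands = [(0, [0.2, 1.0]),   # clear winner -> keep
         (1, [0.8, 0.9]),   # ambiguous    -> drop
         (2, [0.1, 0.9])]   # clear winner -> keep
kept = ratio_test(cands)
```

Ambiguous matches, whose best and second-best distances are close, are discarded, which sharpens the set of correspondences fed to the transform solver.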
In summary, the image generation method provided by the present disclosure separates the foreground from the background and composites the frames in a targeted manner, so that a high-definition image can be synthesized. Compared with traditional synthesis methods in the prior art, it reduces failures to composite, or low definition after compositing, caused by an overly large difference in definition between the foreground and background of the image, and it improves the user experience.

It should be understood that although the steps in the flowchart of FIG. 2 are displayed sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 2 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is also not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, and as an implementation of the above method, an embodiment of the present disclosure further provides an image generation system. This system embodiment corresponds to the foregoing method embodiment; for ease of reading, this apparatus embodiment does not repeat the details of the foregoing method embodiment one by one, but it should be clear that the apparatus in this embodiment can correspondingly implement all of the content of the foregoing method embodiment.

FIG. 3 is a structural block diagram of the image generation system provided by an embodiment of the present disclosure. As shown in FIG. 3, the image generation system 300 of this embodiment includes:

a segmentation module 310, configured to obtain a first image frame and a second image frame and to perform image segmentation on each to obtain the foreground areas of the first and second image frames;

a judgment module 320, configured to judge whether matching feature points exist between the foreground area of the first image frame and the foreground area of the second image frame;

a solving module 330, configured to solve a rotation-translation matrix based on the matched feature points if matching feature points exist, and otherwise to solve the rotation-translation matrix by a default method;

a synthesis module 340, configured to correct the foreground area of the second image frame based on the rotation-translation matrix and to generate a composite image from the first image frame and the corrected second image frame.

As an optional implementation of the embodiments of the present disclosure, the solving module 330 is configured to solve the matched feature points with the RANSAC method to obtain the rotation-translation matrix if matching feature points exist, and otherwise to use the ECC method to iteratively compute the minimum error between the foreground areas of the first and second image frames to obtain the rotation-translation matrix.

As an optional implementation, the foreground areas of the first and second image frames each include at least one block, and the system solves, for each block, a rotation-translation matrix from the feature points matched within that block.

As an optional implementation, the synthesis module 340 is configured to correct each block of the second image frame based on the rotation-translation matrix corresponding to that block.

As an optional implementation, the synthesis module 340 is configured to acquire the feature points of the first image frame and of the second image frame, and to perform feature matching between them.

As an optional implementation, the synthesis module 340 is configured to apply a first method to perform an initial matching of the feature points of the two frames and, on that basis, to apply a second method to perform a secondary matching.

The image generation system provided in this embodiment can execute the image generation method provided by the above method embodiment; its implementation principle and technical effect are similar and are not repeated here. The modules of the above image generation system may be implemented wholly or partly in software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or independent of, one or more processors of a computer device, or stored in software form in the memory of the computer device, so that the one or more processors can invoke and execute the operations corresponding to each module.
In one embodiment, an electronic device is provided, which may be a terminal device; its internal structure may be as shown in FIG. 4. The electronic device includes one or more processors, a memory, a communication interface, a display screen, and an input apparatus connected through a system bus. The one or more processors of the electronic device provide computing and control capabilities. The memory of the electronic device includes a non-volatile storage medium and internal memory; the non-volatile storage medium stores an operating system and computer-readable instructions, and the internal memory provides an environment for running the operating system and the computer-readable instructions in the non-volatile storage medium. The communication interface of the electronic device is used for wired or wireless communication with external terminals; the wireless mode may be implemented through Wi-Fi, a carrier network, near-field communication (NFC), or other technologies. When executed by the one or more processors, the computer-readable instructions implement the image generation method provided by the above embodiments. The display screen of the electronic device may be a liquid-crystal display or an e-ink display, and the input apparatus may be a touch layer covering the display, a key, trackball, or touchpad provided on the device housing, or an external keyboard, touchpad, or mouse.

Those skilled in the art will understand that the structure shown in FIG. 4 is only a block diagram of the partial structure relevant to the solution of the present disclosure and does not limit the device to which the solution of the present disclosure is applied; a specific device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.

In one embodiment, the image generation system provided by the present disclosure may be implemented in the form of computer-readable instructions that can run on an electronic device as shown in FIG. 4. The memory of the electronic device may store the program modules that make up the image generation system, such as the segmentation module, judgment module, solving module, and synthesis module shown in FIG. 3. The computer-readable instructions formed by these program modules cause the one or more processors to execute the steps of the image generation methods of the embodiments of the present disclosure described in this specification.

For example, the electronic device shown in FIG. 4 may, through the segmentation module of the image generation system shown in FIG. 3, obtain a first image frame and a second image frame and perform image segmentation on each to obtain the foreground areas of the first and second image frames; through the judgment module, judge whether matching feature points exist between the foreground areas of the two frames; through the solving module, solve a rotation-translation matrix based on the matched feature points if they exist, and otherwise by a default method; and through the synthesis module, correct the foreground area of the second image frame based on the rotation-translation matrix and generate a composite image from the first image frame and the corrected second image frame.
In one embodiment, an electronic device is provided, including a memory and one or more processors. The memory stores computer-readable instructions which, when executed by the one or more processors, implement the following steps: obtaining a first image frame and a second image frame, and performing image segmentation on each to obtain the foreground areas of the first and second image frames; judging whether matching feature points exist between the foreground area of the first image frame and the foreground area of the second image frame; if matching feature points exist, solving a rotation-translation matrix based on the matched feature points, and otherwise solving the rotation-translation matrix by a default method; correcting the foreground area of the second image frame based on the rotation-translation matrix, and generating a composite image from the first image frame and the corrected second image frame.

In one embodiment, when executed by the one or more processors, the computer-readable instructions further implement the following steps: if matching feature points exist, solving the matched feature points with the RANSAC method to obtain the rotation-translation matrix; otherwise, using the ECC method to iteratively compute the minimum error between the foreground areas of the first and second image frames to obtain the rotation-translation matrix.

In one embodiment, the foreground areas of the first and second image frames each include at least one block, and a rotation-translation matrix corresponding to each block is solved from the feature points matched within that block of the two foreground areas.

In one embodiment, when executed, the computer-readable instructions further implement the following step: correcting each block of the second image frame based on the rotation-translation matrix corresponding to that block.

In one embodiment, when executed, the computer-readable instructions further implement the following steps: acquiring the feature points of the first image frame and the feature points of the second image frame; and performing feature matching between them.

In one embodiment, when executed, the computer-readable instructions further implement the following steps: applying a first method to perform an initial matching of the feature points of the first and second image frames; and, on the basis of the initial matching, applying a second method to perform a secondary matching of those feature points.

The electronic device provided in this embodiment can implement the image synthesis method provided by the above method embodiment; its implementation principle and technical effect are similar and are not repeated here.
One or more non-volatile computer-readable storage media storing computer-readable instructions are provided; when executed by one or more processors, the computer-readable instructions implement the following steps: obtaining a first image frame and a second image frame, and performing image segmentation on each to obtain the foreground areas of the first and second image frames; judging whether matching feature points exist between the foreground area of the first image frame and the foreground area of the second image frame; if matching feature points exist, solving a rotation-translation matrix based on the matched feature points, and otherwise solving the rotation-translation matrix by a default method; correcting the foreground area of the second image frame based on the rotation-translation matrix, and generating a composite image from the first image frame and the corrected second image frame.

In one embodiment, when executed by the one or more processors, the computer-readable instructions further implement the following steps: if matching feature points exist, solving the matched feature points with the RANSAC method to obtain the rotation-translation matrix; otherwise, using the ECC method to iteratively compute the minimum error between the foreground areas of the first and second image frames to obtain the rotation-translation matrix.

In one embodiment, the foreground areas of the first and second image frames each include at least one block, and a rotation-translation matrix corresponding to each block is solved from the feature points matched within that block of the two foreground areas.

In one embodiment, when executed, the computer-readable instructions further implement the following step: correcting each block of the second image frame based on the rotation-translation matrix corresponding to that block.

In one embodiment, when executed, the computer-readable instructions further implement the following steps: acquiring the feature points of the first image frame and the feature points of the second image frame; and performing feature matching between them.

In one embodiment, when executed, the computer-readable instructions further implement the following steps: applying a first method to perform an initial matching of the feature points of the first and second image frames; and, on the basis of the initial matching, applying a second method to perform a secondary matching of those feature points.

The computer-readable instructions stored on the one or more non-volatile computer-readable storage media provided in this embodiment can implement the image generation method provided by the above method embodiment; the implementation principle and technical effect are similar and are not repeated here.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be accomplished by computer-readable instructions directing the relevant hardware. The computer-readable instructions may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, databases, or other media used in the embodiments provided by the present disclosure may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, and the like. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM).

The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, any combination of these technical features that contains no contradiction should be considered within the scope of this specification.

The above embodiments express only several implementations of the present disclosure, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art may make modifications and improvements without departing from the concept of the present disclosure, all of which fall within its scope of protection. Therefore, the scope of protection of the present disclosure shall be governed by the appended claims.

Industrial applicability

The image generation method provided by the present disclosure separates the foreground from the background and composites the frames in a targeted manner, so that a high-definition image can be synthesized. Compared with traditional synthesis methods in the prior art, it reduces failures to composite, or low definition after compositing, caused by an overly large difference in definition between the foreground and background of the image; it improves the user experience and has strong industrial applicability.

Claims (14)

  1. An image generation method, the method comprising:
    obtaining a first image frame and a second image frame, and performing image segmentation on the first image frame and the second image frame respectively to obtain foreground areas of the first image frame and the second image frame;
    judging whether matching feature points exist between the foreground area of the first image frame and the foreground area of the second image frame;
    if matching feature points exist, solving a rotation-translation matrix based on the matched feature points, and otherwise solving the rotation-translation matrix by a default method;
    correcting the foreground area of the second image frame based on the rotation-translation matrix, and generating a composite image from the first image frame and the corrected second image frame.
  2. The image generation method according to claim 1, wherein solving the rotation-translation matrix based on the matched feature points if they exist, and otherwise by a default method, comprises:
    if matching feature points exist, solving the matched feature points with the RANSAC method to obtain the rotation-translation matrix, and otherwise iteratively computing the minimum error between the foreground areas of the first and second image frames with the ECC method to obtain the rotation-translation matrix.
  3. The image generation method according to claim 1, wherein the foreground areas of the first image frame and the second image frame each comprise at least one block, and a rotation-translation matrix corresponding to each block is solved from the feature points matched within that block of the two foreground areas.
  4. The image generation method according to claim 3, wherein correcting the foreground area of the second image frame based on the rotation-translation matrix comprises:
    correcting each block of the second image frame based on the rotation-translation matrix corresponding to that block.
  5. The image generation method according to any one of claims 1-4, wherein before judging whether matching feature points exist between the foreground area of the first image frame and the foreground area of the second image frame, the method further comprises:
    acquiring the feature points of the first image frame and the feature points of the second image frame;
    performing feature matching between the feature points of the first image frame and the feature points of the second image frame.
  6. The image generation method according to claim 5, wherein performing feature matching between the feature points of the first image frame and the feature points of the second image frame comprises:
    applying a first method to perform an initial matching of the feature points of the first and second image frames;
    on the basis of the initial matching, applying a second method to perform a secondary matching of the feature points of the first and second image frames.
  7. An image generation system, comprising:
    a segmentation module, configured to obtain a first image frame and a second image frame and to perform image segmentation on each to obtain the foreground areas of the first and second image frames;
    a judgment module, configured to judge whether matching feature points exist between the foreground area of the first image frame and the foreground area of the second image frame;
    a solving module, configured to solve a rotation-translation matrix based on the matched feature points if matching feature points exist, and otherwise to solve the rotation-translation matrix by a default method;
    a synthesis module, configured to correct the foreground area of the second image frame based on the rotation-translation matrix and to generate a composite image from the first image frame and the corrected second image frame.
  8. The image generation system according to claim 7, wherein the solving module is configured to solve the matched feature points with the RANSAC method to obtain the rotation-translation matrix if matching feature points exist, and otherwise to use the ECC method to iteratively compute the minimum error between the foreground areas of the first and second image frames to obtain the rotation-translation matrix.
  9. The image generation system according to claim 7, wherein the foreground areas of the first image frame and the second image frame each comprise at least one block, and the system solves, for each block, a rotation-translation matrix from the feature points matched within that block of the two foreground areas.
  10. The image generation system according to claim 9, wherein the synthesis module is configured to correct each block of the second image frame based on the rotation-translation matrix corresponding to that block.
  11. The image generation system according to any one of claims 7-10, wherein the synthesis module is configured to acquire the feature points of the first image frame and the feature points of the second image frame, and to perform feature matching between them.
  12. The image generation system according to claim 11, wherein the synthesis module is configured to apply a first method to perform an initial matching of the feature points of the first and second image frames and, on the basis of the initial matching, to apply a second method to perform a secondary matching of those feature points.
  13. An electronic device, comprising: a memory and one or more processors, the memory storing at least one instruction, at least one segment of computer-readable instructions, a code set, or an instruction set that, when loaded and executed by the one or more processors, causes the one or more processors to perform the steps of the image generation method according to any one of claims 1-6.
  14. One or more non-volatile computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the image generation method according to any one of claims 1-6.
PCT/CN2022/100540 2022-02-14 2022-06-22 图像生成方法、***、电子设备、存储介质和产品 WO2023151214A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210132413.X 2022-02-14
CN202210132413.XA CN114519753A (zh) 2022-02-14 2022-02-14 图像生成方法、***、电子设备、存储介质和产品

Publications (1)

Publication Number Publication Date
WO2023151214A1 true WO2023151214A1 (zh) 2023-08-17

Family

ID=81597033

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/100540 WO2023151214A1 (zh) 2022-02-14 2022-06-22 图像生成方法、***、电子设备、存储介质和产品

Country Status (2)

Country Link
CN (1) CN114519753A (zh)
WO (1) WO2023151214A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114519753A (zh) * 2022-02-14 2022-05-20 上海闻泰信息技术有限公司 图像生成方法、***、电子设备、存储介质和产品

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100128971A1 (en) * 2008-11-25 2010-05-27 Nec System Technologies, Ltd. Image processing apparatus, image processing method and computer-readable recording medium
CN110689554A (zh) * 2019-09-25 2020-01-14 深圳大学 用于红外图像序列的背景运动估计方法、装置及存储介质
CN113450392A (zh) * 2020-03-25 2021-09-28 英特尔公司 基于图像模板的参数化透视的鲁棒表面配准
CN113837936A (zh) * 2020-06-24 2021-12-24 上海汽车集团股份有限公司 一种全景图像的生成方法和装置
CN114519753A (zh) * 2022-02-14 2022-05-20 上海闻泰信息技术有限公司 图像生成方法、***、电子设备、存储介质和产品


Also Published As

Publication number Publication date
CN114519753A (zh) 2022-05-20

Similar Documents

Publication Publication Date Title
US11373275B2 (en) Method for generating high-resolution picture, computer device, and storage medium
US10958850B2 (en) Electronic device and method for capturing image by using display
WO2018176925A1 (zh) Hdr图像的生成方法及装置
CN111583161A (zh) 模糊图像的增强方法、计算机设备和存储介质
US9672414B2 (en) Enhancement of skin, including faces, in photographs
WO2021115136A1 (zh) 视频图像的防抖方法、装置、电子设备和存储介质
WO2018072270A1 (zh) 一种图像显示增强方法及装置
US9646368B2 (en) Automatic color correction
CN105279006B (zh) 基于Android***的屏幕截图方法及终端
WO2020063030A1 (zh) 主题色彩的调节方法、装置、存储介质及电子设备
WO2023098045A1 (zh) 图像对齐方法、装置、计算机设备和存储介质
WO2020147698A1 (zh) 画面优化方法、装置、终端及对应的存储介质
US20170351932A1 (en) Method, apparatus and computer program product for blur estimation
WO2023151214A1 (zh) 图像生成方法、***、电子设备、存储介质和产品
US20200410718A1 (en) Method and apparatus for determining text color
WO2023207454A1 (zh) 图像处理方法、图像处理装置以及可读存储介质
US20200236270A1 (en) Systems and methods for color matching for realistic flash images
WO2023151210A1 (zh) 图像处理方法、电子设备及计算机可读存储介质
US20220398704A1 (en) Intelligent Portrait Photography Enhancement System
CN113422967B (zh) 一种投屏显示控制方法、装置、终端设备及存储介质
CN113393391B (zh) 图像增强方法、图像增强装置、电子设备和存储介质
US20230368340A1 (en) Gating of Contextual Attention and Convolutional Features
US11195247B1 (en) Camera motion aware local tone mapping
CN110874816B (zh) 一种图像处理方法、装置、移动终端及存储介质
CN113012085A (zh) 图像处理方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22925573

Country of ref document: EP

Kind code of ref document: A1