CN118233639A - Image alignment processing method, device and storage medium


Info

Publication number
CN118233639A
Authority
CN
China
Prior art keywords
alignment
image frame
center offset
offset
current image
Prior art date
Legal status
Pending
Application number
CN202211632339.4A
Other languages
Chinese (zh)
Inventor
姚海强
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202211632339.4A
Publication of CN118233639A

Landscapes

  • Studio Devices (AREA)

Abstract

The disclosure relates to an image alignment processing method, an image alignment processing device, and a storage medium. The method comprises the following steps: in response to acquiring an image alignment processing request for a current image frame, determining an alignment parameter for performing alignment transformation processing on the current image frame; acquiring the center offset of the current image frame; and updating an alignment transformation output matrix for performing alignment transformation processing on the current image frame based on the alignment parameter and the difference between the center offset and a reference center offset, and processing the current image frame based on the updated alignment transformation output matrix. By using the offset of the image instead of the whole image in the alignment transformation, the method reduces the computational complexity of the image alignment processing, saves resources, avoids unsmooth or stuttering image previews, and improves the user experience.

Description

Image alignment processing method, device and storage medium
Technical Field
The disclosure relates to the technical field of image processing, and in particular relates to an image alignment processing method, an image alignment processing device and a storage medium.
Background
In devices with cameras, such as smartphones, sensor technology keeps improving, and more and more sensors support the optical image stabilization (Optical Image Stabilizer, OIS) anti-shake function. In a scene photographed with multiple cameras, the sensor of each camera needs to switch during zooming (zoom) or fallback (a mechanism that switches the lens according to the distance of the photographed scene and the ambient brightness). If one or more of the cameras enables the OIS function, the lens shift caused by OIS changes the calibration parameters of each sensor, so that the Field of View (FOV) consistency across sensors becomes poor and FOV jitter appears when the sensors are switched, which affects the user experience.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image alignment processing method, apparatus, and storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided an image alignment processing method applied to an image processing apparatus including at least two image sensors, the method comprising:
Determining an alignment parameter for performing alignment transformation processing on a current image frame in response to acquiring an image alignment processing request of the current image frame;
Acquiring the center offset of the current image frame;
And updating an alignment transformation output matrix for performing alignment transformation processing on the current image frame based on the difference value between the center offset and the reference center offset and the alignment parameter, and processing the current image frame based on the updated alignment transformation output matrix.
In one embodiment, the acquiring the center offset of the current image frame includes:
and acquiring the Hall offset of the current image frame, and acquiring the center offset of the current image frame based on the Hall offset.
In one embodiment, the current image frame includes an image output by each of the at least two image sensors;
the obtaining the center offset of the current image frame based on the hall offset includes:
And obtaining the center offset of the current image frame of each image sensor based on the Hall offset and the corresponding relation between the Hall offset and the center offset pre-stored in each image sensor.
In one embodiment, the reference center offset is a center offset of an image frame previous to the current image frame corresponding to the image alignment processing request.
In one embodiment, the updating the alignment transformation output matrix for performing the alignment transformation on the current image frame based on the alignment parameter and the difference between the center offset and the reference center offset includes:
Correcting the alignment parameter based on a difference of the center offset and a reference center offset;
updating the alignment transformation output matrix based on the corrected alignment parameters.
In one embodiment, the method further comprises:
After updating the alignment transformation output matrix, the center offset is saved as a new reference center offset.
According to a second aspect of the embodiments of the present disclosure, there is provided an image alignment processing apparatus including:
an alignment module configured to determine an alignment parameter for performing an alignment transformation process on a current image frame in response to acquiring an image alignment processing request for the current image frame;
a compensation module configured to acquire a center offset of the current image frame;
An updating module configured to update an alignment transformation output matrix that performs an alignment transformation process on the current image frame based on a difference between the center offset and a reference center offset, and the alignment parameter;
the alignment module is further configured to process the current image frame based on the updated alignment transformation output matrix.
In one embodiment, the compensation module is further configured to:
and acquiring the Hall offset of the current image frame, and acquiring the center offset of the current image frame based on the Hall offset.
In one embodiment, the current image frame includes an image output by each of at least two image sensors;
the obtaining the center offset of the current image frame based on the hall offset includes:
And obtaining the center offset of the current image frame of each image sensor based on the Hall offset and the corresponding relation between the Hall offset and the center offset pre-stored in each image sensor.
In one embodiment, the reference center offset is a center offset of an image frame previous to the current image frame corresponding to the image alignment processing request.
In one embodiment, the updating the alignment transformation output matrix for performing the alignment transformation on the current image frame based on the alignment parameter and the difference between the center offset and the reference center offset includes:
Correcting the alignment parameter based on a difference of the center offset and a reference center offset;
updating the alignment transformation output matrix based on the corrected alignment parameters.
In one embodiment, the compensation module is further configured to save the center offset as a new reference center offset after updating the alignment transformation output matrix.
According to a third aspect of the embodiments of the present disclosure, there is provided an image alignment processing apparatus including:
A processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the image alignment processing method according to any one of the first aspects.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium having stored therein instructions which, when executed by a processor of a terminal, enable the terminal to perform the image alignment processing method according to any one of the first aspects.
The technical solution provided by the embodiments of the disclosure can have the following beneficial effects: by performing the alignment transformation based on the offset of the image instead of the whole image, the computational complexity of the image alignment processing is reduced, resource consumption is saved, unsmooth or stuttering image previews are avoided, and the user experience is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating an image alignment processing method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a method of deriving a center offset of a current image frame based on a hall offset according to an exemplary embodiment.
FIG. 3 is a flowchart illustrating a method of updating an alignment transformation output matrix, according to an example embodiment.
Fig. 4 is a flowchart illustrating an image alignment processing method according to an exemplary embodiment.
Fig. 5 is a flowchart illustrating an image preview method according to an exemplary embodiment.
Fig. 6 is a block diagram showing the structure of an image alignment processing apparatus according to an exemplary embodiment.
Fig. 7 is a block diagram of an apparatus according to an example embodiment.
Fig. 8 is a block diagram of an apparatus according to an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure.
As mentioned above, in a scene photographed with multiple cameras, the sensor of each camera needs to switch during zooming (zoom) or fallback (a mechanism that switches the lens according to the distance of the photographed scene and the ambient brightness). If one or more of the cameras enables the OIS function, the lens shift caused by OIS changes the calibration parameters of each sensor, so that the Field of View (FOV) consistency across sensors becomes poor and FOV jitter appears when the sensors are switched, which affects the user experience.
In the related art, FOV consistency can be improved by increasing the frequency at which the homography matrix is calculated in the Spatial Alignment Transform (SAT) algorithm. However, because the SAT algorithm is computationally expensive, it cannot be guaranteed that every image frame acquired by the sensor is processed, which results in a poor alignment effect. Meanwhile, increasing the calculation frequency of the homography matrix inevitably consumes more resources, such as processor resources, which increases the power consumption of the mobile phone and affects the image frame rate, so that the image preview becomes unsmooth and stutters, degrading the user experience.
In view of this, the embodiments of the disclosure provide an image alignment processing method that performs the alignment transformation based on the offset of the image instead of the whole image, thereby reducing the computational complexity of the image alignment processing, saving resources, avoiding unsmooth or stuttering image previews, and improving the user experience.
Fig. 1 is a flowchart illustrating an image alignment processing method according to an exemplary embodiment. The method is applied to an image processing apparatus including at least two image sensors and, as shown in fig. 1, includes the following steps.
In step S11, in response to acquiring an image alignment processing request for a current image frame, an alignment parameter for performing alignment transformation processing on the current image frame is determined.
In step S12, the center offset of the current image frame is acquired.
In step S13, an alignment transformation output matrix for performing alignment transformation processing on the current image frame is updated based on the difference between the center offset and the reference center offset and the alignment parameter, and the current image frame is processed based on the updated alignment transformation output matrix.
In the embodiments of the disclosure, the image processing device may be a smartphone, a tablet computer, a smart home appliance, or another device that includes at least two image sensors; the image sensors may be cameras or other image sensing devices. For convenience of explanation, the following description takes the case where the image processing apparatus is a smartphone and the image sensor is a camera. The smartphone may include multiple image sensors, such as a main camera, a telephoto camera, a wide-angle camera, an ultra-wide-angle camera, and a macro camera. When photographing with the smartphone, one or more cameras can be selected automatically according to the photographing requirements and conditions, and the images captured by the cameras are fused to generate the preview image and the final photo respectively.
The alignment transformation processing (SAT) is a method used in the fusion of images captured by multiple cameras. It determines a Master camera among the cameras according to the target zoom parameter and other factors, designates the remaining cameras as Slave cameras, and, taking the output image of the master camera as the reference, calculates the translation, rotation, cropping, and other parameters between the output images of the slave cameras and the output image of the master camera. The output images of the cameras are then aligned according to the translation, rotation, and cropping parameters output by the SAT algorithm, and the fused image is obtained.
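As a rough illustration of this master/slave alignment step, the sketch below warps a slave camera frame into the master camera's coordinate system with a homography. It assumes OpenCV and NumPy are available; the matrix values and function name are illustrative placeholders, not calibration data or the actual SAT implementation.

```python
import cv2
import numpy as np

def align_slave_to_master(slave_frame: np.ndarray,
                          homography: np.ndarray,
                          master_size: tuple) -> np.ndarray:
    """Warp the slave frame into the master frame's coordinate system."""
    width, height = master_size
    return cv2.warpPerspective(slave_frame, homography, (width, height))

# Illustrative homography: slight scale plus a small translation (placeholder values).
H = np.array([[1.02, 0.00, 12.5],
              [0.00, 1.02, -8.0],
              [0.00, 0.00,  1.0]], dtype=np.float64)
```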
In an embodiment of the present disclosure, in response to acquiring an image alignment processing request for a current image frame, an alignment parameter for performing alignment transformation processing on the current image frame is determined. The image alignment processing request may be issued inside the image processing device, such as a smartphone. In one example, when the user performs a zoom operation on the user interface of a photographing application, the application sends the target zoom parameter corresponding to the zoom operation to a control unit in the smartphone. The control unit determines, according to the target zoom parameter, which image sensors (cameras) capture images, and sends the image alignment processing request to the SAT module after receiving the images captured by those cameras. The SAT module calculates the alignment parameter according to the target zoom parameter and other inputs. From the alignment parameter, the alignment transformation output matrix of the SAT can be calculated; this matrix contains the parameters for translating, rotating, and cropping each image, and a back-end processing unit of the smartphone processes the images based on these parameters to obtain the aligned images. In the embodiments of the disclosure, the alignment parameter may be the homography matrix offset parameter in the SAT algorithm, or another alignment parameter, which is not limited here.
When the image acquisition device switches lenses, the master camera and the slave cameras in the SAT algorithm are switched accordingly. At this point, if one or more cameras have the OIS function enabled, the alignment parameter needs to be recalculated to keep the field of view (FOV) consistent. However, calculating the alignment parameter is computationally complex, resource-intensive, and time-consuming. To solve this technical problem, in the embodiments of the disclosure the alignment parameter is derived based on the offset difference of the image, instead of directly recalculating the alignment parameter for every image frame as in the related art, which reduces the computational complexity.
In OIS mode, a Hall sensor can measure the Hall position information of the OIS offset in real time, and the magnitude and direction of the lens movement at the current moment can be calculated from the correspondence between the Hall position information and the lens movement. In view of this, in the embodiments of the present disclosure, the offset difference of the image may be the difference between the real-time offset of the image and a reference offset; the offset of the image may be the center offset of the image, and the center offset of the image may be obtained based on the Hall offset of the image.
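The following sketch illustrates the idea of converting a raw Hall reading into a lens-shift magnitude and direction. The linear code-to-micrometre calibration constants and the function name are assumptions for illustration only; a real OIS module exposes this correspondence through its own calibration data.

```python
import math

# Hypothetical linear calibration: micrometres of lens travel per Hall code unit.
UM_PER_HALL_CODE_X = 0.12
UM_PER_HALL_CODE_Y = 0.12

def lens_shift_from_hall(hall_x: int, hall_y: int):
    """Return (magnitude_um, direction_deg) of the lens movement for one OIS sample."""
    dx = hall_x * UM_PER_HALL_CODE_X
    dy = hall_y * UM_PER_HALL_CODE_Y
    magnitude = math.hypot(dx, dy)
    direction = math.degrees(math.atan2(dy, dx))
    return magnitude, direction
```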
In an example, a hall offset of a current image frame corresponding to the image alignment processing request may be obtained, and then a center offset of the current image frame, which is a real-time offset of the image, may be obtained based on the hall offset. It can be appreciated that the hall offset of the current image frame in the embodiment of the disclosure may be the hall offset of the image sensor at the time corresponding to the current image frame, that is, the hall offset of the image sensor when the current image frame is acquired. Wherein, the number of the image sensors is two or more, and each image sensor acquires one current image frame.
In the embodiments of the present disclosure, the reference offset may be the offset of an image frame that precedes the current image frame. In one example, the reference offset is the offset of the image frame immediately preceding the current image frame, i.e., the reference center offset; it is determined in the same way as the real-time offset described above, which is not repeated here.
In the embodiments of the disclosure, the difference between the center offset of the current image frame and the reference center offset is computed; the alignment transformation output matrix used for the alignment transformation processing (SAT) of the current image frame is updated based on this difference and the alignment parameter, and the current image frame is processed based on the updated alignment transformation output matrix, so that the aligned image can be obtained.
According to the technical solution of the embodiments of the disclosure, the alignment transformation is performed using the offset of the image instead of the whole image, which reduces the computational complexity of the image alignment processing, saves resources, avoids unsmooth or stuttering image previews, and improves the user experience.
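A minimal sketch of the overall flow of steps S11–S13 might look as follows, assuming a 2-component center offset, an additive compensation of the homography offset parameter, and a 3×3 output matrix whose translation terms are adjusted; these modelling choices are assumptions for illustration, not the disclosed SAT internals.

```python
import cv2
import numpy as np

def update_and_apply(frame: np.ndarray,
                     factor_xy: np.ndarray,          # alignment parameter (2-vector), e.g. homography offset
                     center_offset: np.ndarray,      # center offset of the current frame (2-vector)
                     ref_center_offset: np.ndarray,  # reference center offset (2-vector)
                     output_matrix: np.ndarray):     # 3x3 alignment transformation output matrix
    """Outline of steps S11-S13: compensate the alignment parameter with the
    offset difference, refresh the output matrix, then process the frame."""
    diff = center_offset - ref_center_offset        # difference between offsets
    corrected = factor_xy + diff                    # assumed additive compensation
    updated = output_matrix.copy()
    updated[0, 2] += corrected[0]                   # shift the matrix translation terms
    updated[1, 2] += corrected[1]
    h, w = frame.shape[:2]
    aligned = cv2.warpPerspective(frame, updated, (w, h))
    return aligned, updated
```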
Fig. 2 is a flowchart illustrating a method of deriving a center offset of a current image frame based on a hall offset according to an exemplary embodiment, including the following steps, as shown in fig. 2.
In step S21, the center offset of the current image frame of each image sensor is obtained based on the Hall offset and the correspondence between the Hall offset and the center offset pre-stored in each image sensor.
In embodiments of the present disclosure, the current image frame may include an image output by each of the at least two image sensors, i.e., the current image frame includes at least two images, wherein each image is acquired and output by a different image sensor.
The correspondence between the Hall offset and the center offset is pre-stored in each image sensor, and the Hall offset of each image sensor is collected by the Hall sensor. For convenience of description, the Hall offset of an image sensor can be taken as the Hall offset at the moment corresponding to the current frame image. By querying the pre-stored correspondence between the Hall offset and the center offset, the center offset of the current frame image can be obtained, which is also the center offset of the image sensor at the moment corresponding to the current frame image.
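As an illustration of querying such a pre-stored correspondence, the sketch below interpolates a per-sensor calibration table mapping Hall offsets to image-center offsets. The table contents, sensor names, and the use of linear interpolation are hypothetical; an actual device would rely on its own factory calibration.

```python
import numpy as np

# Hypothetical per-sensor calibration: sampled Hall offsets and the matching
# image-center offsets (pixels), as they might be stored at calibration time.
CALIBRATION = {
    "master": {"hall": np.array([-512.0, 0.0, 512.0]),
               "cx":   np.array([-6.0, 0.0, 6.0]),
               "cy":   np.array([-4.5, 0.0, 4.5])},
    "slave":  {"hall": np.array([-512.0, 0.0, 512.0]),
               "cx":   np.array([-8.0, 0.0, 8.0]),
               "cy":   np.array([-5.0, 0.0, 5.0])},
}

def center_offset(sensor: str, hall_offset: float):
    """Interpolate the pre-stored Hall-offset -> center-offset correspondence."""
    table = CALIBRATION[sensor]
    cx = float(np.interp(hall_offset, table["hall"], table["cx"]))
    cy = float(np.interp(hall_offset, table["hall"], table["cy"]))
    return cx, cy
```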
FIG. 3 is a flowchart illustrating a method of updating an aligned transform output matrix, as shown in FIG. 3, according to an exemplary embodiment, the method including the following steps.
In step S31, the alignment parameter is corrected based on the difference between the center offset and the reference center offset.
In step S32, the alignment transformation output matrix is updated based on the corrected alignment parameters.
In an embodiment of the present disclosure, correcting the alignment parameter based on the difference between the center offset and the reference center offset may include compensating the alignment parameter with that difference, thereby obtaining the corrected alignment parameter. Updating the alignment transformation output matrix based on the corrected alignment parameter may include updating the alignment transformation output matrix by a mapping method to obtain the offset and/or rotation angle required for performing alignment processing on the image, and so on.
By adopting the technical scheme of the embodiment of the disclosure, the alignment parameters are corrected based on the center offset of the image frames, and the alignment transformation output matrix is updated according to the alignment parameters, so that the image alignment processing speed can be improved, the resource consumption can be reduced, and the user experience can be improved.
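For illustration, the offset and rotation angle implied by an updated 3×3 output matrix could be read back as in the sketch below, assuming the matrix encodes rotation, uniform scale, and translation without shear; this decomposition is an assumption, not necessarily the mapping method used by the disclosure.

```python
import math
import numpy as np

def offset_and_rotation(output_matrix: np.ndarray):
    """Return the translation (tx, ty) and rotation angle in degrees implied by
    the upper-left 2x2 block of a 3x3 alignment matrix (assumes no shear)."""
    tx, ty = output_matrix[0, 2], output_matrix[1, 2]
    angle = math.degrees(math.atan2(output_matrix[1, 0], output_matrix[0, 0]))
    return tx, ty, angle
```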
Fig. 4 is a flowchart illustrating an image alignment processing method according to an exemplary embodiment, and steps S41 to S43 of the method are the same as steps S11 to S13 in fig. 1, and are not described herein. Further, the method also includes step S44.
In step S44, after updating the alignment conversion output matrix, the center offset is stored as a new reference center offset.
In the embodiments of the disclosure, after the alignment transformation output matrix is updated, the center offset may be saved as the new reference center offset, and the algorithm update flag updateHomographyTag in the SAT algorithm is adjusted accordingly: in the current SAT calculation, updateHomographyTag is set to true when the alignment parameter is modified, and set to false when no alignment parameter is modified.
Fig. 5 is a flowchart illustrating an image preview method according to an exemplary embodiment, in which an image alignment processing method provided by an embodiment of the present disclosure is employed. As shown in fig. 5, a preview request from an application is first received; in response to the preview request, two image sensors each capture images and process the captured current frame images in real time. It can be understood that the number of image sensors may be increased as needed, for example to three or more. One of the two image sensors is the master image sensor and the other is the slave image sensor. The output images of the two image sensors are processed by the SAT algorithm and then sent to the compensation processing module. The compensation processing module obtains the Hall offsets of the master and slave image sensors, calculates the center offsets based on the Hall offsets, and stores them as the reference center offsets baseM and baseS. It then judges whether the reference center offsets have changed: if so, the SAT algorithm sets updateHomographyTag = true; if not, the SAT algorithm sets updateHomographyTag = false. The homography matrix and the smooth fusion matrix used by the current preview request are calculated based on the SAT algorithm, and the alignment parameter of the homography matrix used by the current preview request, such as the homography matrix offset parameter factor X, is calculated. The Hall offsets of the master and slave image sensors at the moment corresponding to the current preview request are acquired, the center offsets M1 and S1 are calculated based on the Hall offsets, and the differences between the center offsets and the reference center offsets are calculated: M = M1 - baseM and S = S1 - baseS. The alignment parameter, such as the homography matrix offset parameter factor X, is compensated based on the differences M and S, and the SAT output matrix is updated by mapping according to the compensated factor X. The image frames at the moment corresponding to the current preview request are aligned based on the updated output matrix, thereby obtaining the preview image.
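The compensation flow of fig. 5 could be sketched for one master/slave pair roughly as below. The names baseM, baseS, M1, S1, and factor X follow the description above, but the arithmetic (additive compensation folded into the translation terms) and the class structure are assumptions for illustration only.

```python
import numpy as np

class OffsetCompensator:
    """Per master/slave pair state for the fig. 5 compensation flow (illustrative only)."""

    def __init__(self) -> None:
        self.baseM = None  # reference center offset, master sensor
        self.baseS = None  # reference center offset, slave sensor

    def step(self, m1: np.ndarray, s1: np.ndarray,
             factor_x: np.ndarray, output_matrix: np.ndarray) -> np.ndarray:
        """Compensate factor X with the offset differences and update the output matrix."""
        if self.baseM is None:                 # first request: only store the references
            self.baseM, self.baseS = m1, s1
            return output_matrix
        m = m1 - self.baseM                    # M = M1 - baseM
        s = s1 - self.baseS                    # S = S1 - baseS
        compensated = factor_x + (m - s)       # assumed compensation of factor X
        updated = output_matrix.copy()
        updated[0, 2] += compensated[0]        # map the compensation into the translation terms
        updated[1, 2] += compensated[1]
        self.baseM, self.baseS = m1, s1        # save the new reference center offsets
        return updated
```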
With the technical solution of the embodiments of the disclosure, the field-of-view jitter caused by OIS is represented and compensated through the Hall offset, the frame return intervals are unified and smoothed, the unsmooth preview caused by different frame return speeds is avoided, and the user experience is improved. Meanwhile, the solution uses bypass processing, requires no extra waiting time, consumes few system resources, and incurs almost no additional power consumption.
Based on the same conception, the embodiment of the disclosure also provides an image alignment processing device.
It will be appreciated that, in order to implement the above-described functions, the image alignment processing apparatus provided in the embodiments of the present disclosure includes corresponding hardware structures and/or software modules that perform the respective functions. The disclosed embodiments may be implemented in hardware or a combination of hardware and computer software, in combination with the various example elements and algorithm steps disclosed in the embodiments of the disclosure. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application, but such implementation is not to be considered as beyond the scope of the embodiments of the present disclosure.
Fig. 6 is a block diagram of an image alignment processing apparatus according to an exemplary embodiment. Referring to fig. 6, the apparatus 600 includes an alignment module 601, a compensation module 602, and an update module 603.
The alignment module 601 is configured to determine an alignment parameter for performing an alignment transformation process on a current image frame in response to acquiring an image alignment process request for the current image frame.
The compensation module 602 is configured to obtain a center offset of the current image frame.
The updating module 603 is configured to update an alignment transformation output matrix for performing an alignment transformation process on the current image frame based on a difference between the center offset and a reference center offset, and the alignment parameter.
In an embodiment of the disclosure, the compensation module is further configured to: and acquiring the Hall offset of the current image frame, and acquiring the center offset of the current image frame based on the Hall offset.
In an embodiment of the disclosure, the alignment module is further configured to process the current image frame based on the updated alignment transformation output matrix.
In an embodiment of the disclosure, the current image frame includes an image output by each of at least two image sensors; the obtaining the center offset of the current image frame based on the hall offset includes: and obtaining the center offset of the current image frame of each image sensor based on the Hall offset and the corresponding relation between the Hall offset and the center offset pre-stored in each image sensor.
In this embodiment of the present disclosure, the reference center offset is a center offset of an image frame previous to the current image frame corresponding to the image alignment processing request.
In an embodiment of the present disclosure, the updating the alignment transformation output matrix for performing an alignment transformation on the current image frame based on the alignment parameter and a difference between the center offset and the reference center offset includes: correcting the alignment parameter based on a difference of the center offset and a reference center offset; updating the alignment transformation output matrix based on the corrected alignment parameters.
In an embodiment of the disclosure, the compensation module is further configured to save the center offset as a new reference center offset after updating the alignment transformation output matrix.
According to the technical solution of the embodiments of the disclosure, the alignment transformation is performed using the offset of the image instead of the whole image, which reduces the computational complexity of the image alignment, saves resources, avoids unsmooth or stuttering image previews, and improves the user experience.
Fig. 7 is a block diagram illustrating an apparatus 700 for image alignment processing according to an example embodiment. For example, apparatus 700 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 7, an apparatus 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
The processing component 702 generally controls overall operation of the apparatus 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 702 may include one or more processors 720 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 702 can include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operations at the apparatus 700. Examples of such data include instructions for any application or method operating on the apparatus 700, contact data, phonebook data, messages, pictures, videos, and the like. The memory 704 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 706 provides power to the various components of the device 700. Power component 706 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 700.
The multimedia component 708 includes a screen between the device 700 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 708 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the apparatus 700 is in an operational mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 710 is configured to output and/or input audio signals. For example, the audio component 710 includes a Microphone (MIC) configured to receive external audio signals when the device 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 704 or transmitted via the communication component 716. In some embodiments, the audio component 710 further includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 714 includes one or more sensors for providing status assessment of various aspects of the apparatus 700. For example, the sensor assembly 714 may detect an on/off state of the device 700, a relative positioning of the components, such as a display and keypad of the device 700, a change in position of the device 700 or a component of the device 700, the presence or absence of user contact with the device 700, an orientation or acceleration/deceleration of the device 700, and a change in temperature of the device 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate communication between the apparatus 700 and other devices in a wired or wireless manner. The apparatus 700 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 716 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 704, including instructions executable by processor 720 of apparatus 700 to perform the above-described method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Fig. 8 is a block diagram illustrating an apparatus 800 for image alignment processing according to an example embodiment. For example, the apparatus 800 may be provided as a server. Referring to fig. 8, apparatus 800 includes a processing component 822 that further includes one or more processors and memory resources, represented by memory 832, for storing instructions, such as application programs, executable by processing component 822. The application programs stored in memory 832 may include one or more modules each corresponding to a set of instructions. Further, the processing component 822 is configured to execute instructions to perform the image alignment processing methods described above.
The apparatus 800 may also include a power component 826 configured to perform power management of the apparatus 800, a wired or wireless network interface 850 configured to connect the apparatus 800 to a network, and an input/output (I/O) interface 858. The apparatus 800 may operate based on an operating system stored in memory 832, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
It is understood that the term "plurality" in this disclosure means two or more, and other quantifiers are similar thereto. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. The singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It is further understood that the terms "first," "second," and the like are used to describe various information, but such information should not be limited to these terms. These terms are only used to distinguish one type of information from another and do not denote a particular order or importance. Indeed, the expressions "first", "second", etc. may be used entirely interchangeably. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure.
It will be further understood that "connected" includes both direct connection where no other member is present and indirect connection where other element is present, unless specifically stated otherwise.
It will be further understood that although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the scope of the appended claims.

Claims (14)

1. An image alignment processing method, characterized by being applied to an image processing apparatus including at least two image sensors, comprising:
Determining an alignment parameter for performing alignment transformation processing on a current image frame in response to acquiring an image alignment processing request of the current image frame;
Acquiring the center offset of the current image frame;
And updating an alignment transformation output matrix for performing alignment transformation processing on the current image frame based on the difference value between the center offset and the reference center offset and the alignment parameter, and processing the current image frame based on the updated alignment transformation output matrix.
2. The method of claim 1, wherein the acquiring the center offset of the current image frame comprises:
and acquiring the Hall offset of the current image frame, and acquiring the center offset of the current image frame based on the Hall offset.
3. The method of claim 2, wherein the current image frame comprises an image output by each of the at least two image sensors;
the obtaining the center offset of the current image frame based on the hall offset includes:
And obtaining the center offset of the current image frame of each image sensor based on the Hall offset and the corresponding relation between the Hall offset and the center offset pre-stored in each image sensor.
4. The method of claim 1, wherein
The reference center offset is a center offset of an image frame previous to the current image frame corresponding to the image alignment processing request.
5. The method according to claim 1, wherein updating the alignment transform output matrix for performing the alignment transform processing on the current image frame based on the alignment parameter and a difference between a center offset and a reference center offset, comprises:
Correcting the alignment parameter based on a difference of the center offset and a reference center offset;
updating the alignment transformation output matrix based on the corrected alignment parameters.
6. The method of any one of claims 1-5, further comprising:
After updating the alignment transformation output matrix, the center offset is saved as a new reference center offset.
7. An image alignment processing apparatus, comprising:
an alignment module configured to determine an alignment parameter for performing an alignment transformation process on a current image frame in response to acquiring an image alignment processing request for the current image frame;
a compensation module configured to acquire a center offset of the current image frame;
An updating module configured to update an alignment transformation output matrix that performs an alignment transformation process on the current image frame based on a difference between the center offset and a reference center offset, and the alignment parameter;
the alignment module is further configured to process the current image frame based on the updated alignment transformation output matrix.
8. The apparatus of claim 7, wherein the compensation module is further configured to:
and acquiring the Hall offset of the current image frame, and acquiring the center offset of the current image frame based on the Hall offset.
9. The apparatus of claim 8, wherein the current image frame comprises an image output by each of at least two image sensors;
the obtaining the center offset of the current image frame based on the hall offset includes:
And obtaining the center offset of the current image frame of each image sensor based on the Hall offset and the corresponding relation between the Hall offset and the center offset pre-stored in each image sensor.
10. The apparatus of claim 7, wherein
The reference center offset is a center offset of an image frame previous to the current image frame corresponding to the image alignment processing request.
11. The apparatus of claim 7, wherein updating the alignment transform output matrix for performing the alignment transform on the current image frame based on the alignment parameter and a difference between a center offset and a reference center offset comprises:
Correcting the alignment parameter based on a difference of the center offset and a reference center offset;
updating the alignment transformation output matrix based on the corrected alignment parameters.
12. The device according to any one of claims 7 to 11, wherein,
The compensation module is further configured to save the center offset as a new reference center offset after updating the alignment transformation output matrix.
13. An image alignment processing apparatus, comprising:
A processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: performing the method of any one of claims 1-6.
14. A storage medium having instructions stored therein that, when executed by a processor of a device, enable the device to perform the method of any one of claims 1-6.
CN202211632339.4A 2022-12-19 2022-12-19 Image alignment processing method, device and storage medium Pending CN118233639A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211632339.4A CN118233639A (en) 2022-12-19 2022-12-19 Image alignment processing method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211632339.4A CN118233639A (en) 2022-12-19 2022-12-19 Image alignment processing method, device and storage medium

Publications (1)

Publication Number Publication Date
CN118233639A true CN118233639A (en) 2024-06-21

Family

ID=91496691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211632339.4A Pending CN118233639A (en) 2022-12-19 2022-12-19 Image alignment processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN118233639A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination