CN113487500B - Image distortion correction method and apparatus, electronic device, and storage medium - Google Patents

Image distortion correction method and apparatus, electronic device, and storage medium

Info

Publication number
CN113487500B
Authority
CN
China
Prior art keywords
information
image
coordinate
coordinate information
input image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110718857.7A
Other languages
Chinese (zh)
Other versions
CN113487500A (en)
Inventor
霍星
蔡进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ziguang Zhanrui Communication Technology Co Ltd
Original Assignee
Beijing Ziguang Zhanrui Communication Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ziguang Zhanrui Communication Technology Co Ltd filed Critical Beijing Ziguang Zhanrui Communication Technology Co Ltd
Priority to CN202110718857.7A priority Critical patent/CN113487500B/en
Publication of CN113487500A publication Critical patent/CN113487500A/en
Application granted granted Critical
Publication of CN113487500B publication Critical patent/CN113487500B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image distortion correction method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring an input image and face detection information, wherein the face detection information comprises face indication information and/or position indication information; determining first coordinate mapping information according to the image size of the input image, a preset image fusion model and the face detection information, wherein the first coordinate mapping information represents a coordinate mapping relation between the coordinate information of each sampling point of the output image and the corresponding coordinate information in the input image; performing interpolation processing on the coordinate information between the sampling points of the output image according to the first coordinate mapping information to obtain second coordinate mapping information; and performing distortion correction on the input image according to the second coordinate mapping information to obtain the output image. This helps to ensure the flexibility of image distortion correction, improve the processing efficiency and accuracy of image distortion correction, and improve the imaging quality and integrity of the distortion-removed image.

Description

Image distortion correction method and apparatus, electronic device, and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image distortion correction method and apparatus, an electronic device, and a storage medium.
Background
In recent years, manufacturers have successively launched electronic devices, such as mobile phones, that carry multiple cameras equipped with wide-angle/ultra-wide-angle lenses, giving rise to a trend toward wide-angle/ultra-wide-angle photography. A wide-angle/ultra-wide-angle lens provides a new photographing experience: its large viewing range gives the captured image a wide field of view. The wide-angle/ultra-wide-angle lens is therefore well suited to photographing natural scenery and tall buildings. In addition, when a group photo is taken with many people standing side by side, a conventional lens requires a longer shooting distance to fit more people into the field of view, with the result that the people in the image appear smaller. By contrast, the wide field of view of the wide-angle/ultra-wide-angle lens easily meets the requirements of multi-person shooting. Likewise, with the front camera of an electronic device, where the shooting distance is relatively limited, the wide-angle/ultra-wide-angle lens also makes it easy to take a group selfie of several people.
However, in a planar image captured by a wide-angle/ultra-wide-angle lens, the picture content located at the edges or corners of the image exhibits a certain degree of distortion. A conventional distortion correction algorithm may use a perspective projection model to straighten the lines curved by distortion, but it simultaneously introduces a near-large/far-small perspective effect, that is, a stretching deformation. For example, when a portrait is taken, people at the edges or corners of the field of view undergo severe stretching deformation, which distorts the geometric shape and geometric proportions of the portrait. For portrait photography, users are particularly sensitive to such portrait distortion and find it unacceptable.
Disclosure of Invention
The embodiment of the application provides an image distortion correction method and device, electronic equipment and a storage medium, so that the flexibility of image distortion correction is expected to be ensured, the processing efficiency and accuracy of image distortion correction are improved, and the imaging quality and integrity of a distortion-removed image are improved.
In a first aspect, an embodiment of the present application provides an image distortion correction method, including:
acquiring an input image and face detection information, wherein the face detection information comprises face indication information and/or position indication information, the face indication information is used for indicating whether a face region exists in the input image, the position indication information is used for indicating whether first position information exists, and the first position information is used for representing the position information of the face region in the input image;
determining first coordinate mapping information according to the image size of the input image, a preset image fusion model and the face detection information, wherein the first coordinate mapping information is used for representing a coordinate mapping relation between coordinate information of each sampling point of the output image and corresponding coordinate information in the input image, and the preset image fusion model comprises at least one of a perspective projection model, a cylindrical projection model and a spherical projection model;
performing interpolation processing on coordinate information between sampling points of the output image according to the first coordinate mapping information to obtain second coordinate mapping information, wherein the second coordinate mapping information is used for expressing a coordinate mapping relation between the coordinate information of each pixel point of the output image and the corresponding coordinate information in the input image;
and carrying out distortion correction on the input image according to the second coordinate mapping information to obtain the output image.
In a second aspect, an embodiment of the present application provides an image distortion correction apparatus, including a processing unit configured to:
acquiring an input image and face detection information, wherein the face detection information comprises face indication information and/or position indication information, the face indication information is used for indicating whether a face region exists in the input image, the position indication information is used for indicating whether first position information exists, and the first position information is used for representing the position information of the face region in the input image;
determining first coordinate mapping information according to the image size of the input image, a preset image fusion model and the face detection information, wherein the first coordinate mapping information is used for representing a coordinate mapping relation between coordinate information of each sampling point of the output image and corresponding coordinate information in the input image, and the preset image fusion model comprises at least one of a perspective projection model, a cylindrical projection model and a spherical projection model;
performing interpolation processing on coordinate information between sampling points of the output image according to the first coordinate mapping information to obtain second coordinate mapping information, wherein the second coordinate mapping information is used for expressing a coordinate mapping relation between the coordinate information of each pixel point of the output image and the corresponding coordinate information in the input image;
and carrying out distortion correction on the input image according to the second coordinate mapping information to obtain the output image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory and a communication interface, where the memory stores one or more programs configured to be executed by the processor, and the one or more programs include instructions for performing the steps of the first aspect of the embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program is operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application.
In a fifth aspect, the present application provides a computer program product, where the computer program product includes a computer program operable to cause a computer to perform some or all of the steps described in the first aspect of the present application. The computer program product may be a software installation package.
In the embodiments of the present application, the coordinate information in the input image corresponding to the coordinate information of each sampling point of the output image is first calculated according to the image size of the input image, the preset image fusion model, whether a face region exists in the input image and/or the position information of the face region in the input image, so as to obtain the first coordinate mapping information; then, coordinate interpolation is performed on the coordinate information between the sampling points of the output image according to the first coordinate mapping information, and the coordinate information in the input image corresponding to the coordinate information of each pixel point of the output image is calculated, so as to obtain the second coordinate mapping information; finally, distortion correction is performed on the input image according to the second coordinate mapping information to obtain the output image, namely the distortion-removed image. This helps to ensure the flexibility of image distortion correction, improve the processing efficiency and accuracy of image distortion correction, and improve the imaging quality and integrity of the distortion-removed image.
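For illustration only, the overall flow described above can be sketched as follows, assuming numpy/OpenCV-style arrays. The helper build_sparse_map is a hypothetical placeholder standing in for the preset image fusion model and the face-aware weighting; it is not part of the claimed method, and the bilinear resize/remap calls are merely one possible realization of the interpolation and correction steps.

```python
import numpy as np
import cv2  # any bilinear sampler would do; cv2 is used here for brevity


def correct_distortion(input_img, face_info, build_sparse_map):
    """Sketch of: sparse coordinate map -> dense coordinate map -> remap.

    build_sparse_map(w, h, face_info) is a placeholder that returns, for a
    coarse grid of output sampling points, the corresponding (x, y)
    coordinates in the input image as an array of shape (gh, gw, 2).
    """
    h, w = input_img.shape[:2]

    # Step 1: first coordinate mapping information (sampling points only).
    map_sparse = build_sparse_map(w, h, face_info).astype(np.float32)

    # Step 2: second coordinate mapping information (every output pixel),
    # obtained by interpolating between the sampling points (this assumes
    # the sampling points form a uniform grid).
    map_x = cv2.resize(map_sparse[..., 0], (w, h), interpolation=cv2.INTER_LINEAR)
    map_y = cv2.resize(map_sparse[..., 1], (w, h), interpolation=cv2.INTER_LINEAR)

    # Step 3: distortion correction by backward sampling of the input image.
    return cv2.remap(input_img, map_x, map_y, cv2.INTER_LINEAR)
```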
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings described below illustrate only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a software structure of an electronic device according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a coordinate transformation from a camera coordinate system to an image coordinate system according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of pinhole imaging provided by embodiments of the present application;
FIG. 5 is a schematic diagram of yet another pinhole imaging provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of yet another pinhole imaging provided by an embodiment of the present application;
fig. 7 is a schematic flowchart of an image distortion correction method according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a sampling point of an output image according to an embodiment of the present disclosure;
fig. 9 is a schematic flowchart of determining target coordinate information according to a preset image fusion model according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a polar coordinate system established with an optical center point of an output image according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of a polar coordinate system established by a geometric center point of a face region in an output image according to an embodiment of the present disclosure;
fig. 12 is a block diagram of functional units of an image distortion correction apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to better understand the technical solutions of the present application, the following description is given for clarity and completeness in conjunction with the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the description of the embodiments of the present application without inventive step, are within the scope of the present application.
The terms "first," "second," and the like in the description, claims, and drawings of the present application are used for distinguishing between different objects and not necessarily for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, software, product, or apparatus that comprises a list of steps or elements is not limited to those listed but may alternatively include other steps or elements not listed or inherent to such process, method, product, or apparatus.
Before describing the technical solution of the embodiment of the present application, the electronic device and the software and hardware structure of the electronic device in the embodiment of the present application are specifically described below.
It should be noted that the electronic device according to the embodiments of the present application may be a handheld device, a vehicle-mounted device, a wearable device, an Augmented Reality (AR) device, a Virtual Reality (VR) device, a projection device, a projector, or another device connected to a wireless modem, and may also be any of various User Equipments (UEs), terminal devices, cell phones (smart phones), smart screens, smart TVs, smart watches, notebook computers, smart speakers, cameras, gamepads, microphones, Stations (STAs), Access Points (APs), Mobile Stations (MSs), Personal Digital Assistants (PDAs), Personal Computers (PCs), or relay devices.
In particular, the electronic device may be a wearable device. A wearable device, also called a smart wearable device, is a general term for devices produced by applying wearable technology to the intelligent design and development of everyday wearable items, such as glasses, gloves, watches, bracelets, clothing and shoes. A wearable device can be worn directly on the body, or integrated into the user's clothing or accessories as a portable device. A wearable device not only carries a dedicated hardware architecture but also carries a dedicated software architecture for data interaction, cloud interaction, and the like. In addition, some wearable smart devices can realize complete or partial functions without relying on other smart devices, such as smart watches and smart glasses, while others focus on only a certain type of application function and need to be used in cooperation with other smart devices, such as various smart bracelets and smart jewelry for vital-sign monitoring.
The structure of the electronic device according to the embodiment of the present application is described in detail below with reference to fig. 1, and it should be understood that the structure illustrated in fig. 1 is not intended to specifically limit the electronic device. In some possible examples, the electronic device may also include more or fewer modules than illustrated in fig. 1, or combine some of the modules illustrated in fig. 1, or split some of the modules illustrated in fig. 1, or distribute modules other than those illustrated in fig. 1. In addition, the modules illustrated in fig. 1 may be implemented by hardware, software, or a combination of hardware and software.
Referring to fig. 1, the electronic device may include a processor 110, an antenna 1, an antenna 2, a mobile communication module 120, a wireless communication module 130, an audio module 140, a sensor module 150, a display module 160, a camera module 170, a charging management module 180, an internal memory 1901, an external memory interface 1902, and the like.
Specifically, the processor 110 may be configured to run or load an operating system, which may be an Android operating system, an RTOS (real-time operating system) operating system, a UNIX operating system, a Linux operating system, a DOS operating system, a Windows operating system, a Mac operating system, or the like. It should be noted that the processor can be regarded as a complete System On Chip (SOC).
In particular, processor 110 may include one or more processing units. For example, the processor 110 may include at least one of a Central Processing Unit (CPU), an Application Processor (AP), a Micro Controller Unit (MCU), a Single Chip Microcomputer (SCM), a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), an application-specific integrated circuit (ASIC), a baseband processor, a neural Network Processor (NPU), and the like. The different processing units may be separate devices or may be integrated into one or more processors.
Further, a memory may be provided in processor 110 for storing instructions and data. Alternatively, the processor may call a program stored in memory to run an operating system. Optionally, a memory in the processor may hold or cache instructions that have just been used or recycled by the processor. If the processor needs to reuse the instruction or data, the instruction or data can be directly called from the memory, so that repeated access is avoided, and the waiting time of the processor is reduced to improve the system efficiency. Optionally, the memory in the processor may also store or cache data, and the data may be synchronized or transferred to other processors for execution. Wherein the memory in the processor may be a cache memory.
Further, the processor 110 may also include one or more communication interfaces. The communication interface may include at least one of a Serial Peripheral Interface (SPI), an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, a Universal Serial Bus (USB) interface, and the like.
The wireless communication function of the electronic device may be implemented by the antenna 1, the antenna 2, the mobile communication module 120, the wireless communication module 130, the modem processor, the baseband processor, and the like. Wherein the antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in an electronic device may be used to cover a single or multiple communication bands. In addition, different antennas can be multiplexed to improve the utilization rate of the antennas. For example, antenna 1 is multiplexed as a diversity antenna for a wireless local area network.
In particular, the mobile communication module 120 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device. The mobile communication module 120 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like.
Further, the mobile communication module 120 may receive the electromagnetic wave from the antenna 1, and may perform filtering, amplification, and other processing on the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. In addition, the mobile communication module 120 can also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave.
Further, at least a part of the functional modules of the mobile communication module 120 may be disposed in the processor 110; alternatively, at least some of the functional modules of the mobile communication module 120 may be disposed in the same device as some of the modules of the processor 110.
Specifically, the wireless communication module 130 may provide a solution for wireless communication applied to the electronic device, including Bluetooth (BT), Wireless Local Area Network (WLAN), wireless fidelity (Wi-Fi) network, Near Field Communication (NFC), infrared technology (IR), and the like.
Further, the wireless communication module 130 may be one or more devices integrating at least one communication processing module. The wireless communication module 130 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 130 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic wave radiation through the antenna 2.
It should be noted that the electronic device may implement an audio function through the audio module 140, the speaker 1401, the receiver 1402, the microphone 1403, the earphone interface 1404, the processor 110, and the like. Such as music/video playback, recording, etc.
Specifically, the audio module 140 can be used to convert digital audio information into an analog audio signal for output, and can also be used to convert an analog audio input into a digital audio signal. In addition, the audio module 140 may also be used to encode and decode audio signals. In some possible examples, the audio module 140 may be disposed in the processor 110, or some functional modules of the audio module 140 may be disposed in the processor 110.
In particular, the speaker 1401 may be used to convert an audio electrical signal into a sound signal. The electronic apparatus can listen to sound played in music/video through the speaker 1401, or listen to a handsfree call, or the like. In some possible examples, the speaker 1401 may serve as one of the sound emitting components.
In particular, receiver 1402 may be used to convert an electrical audio signal into an acoustic signal. When the electronic device receives a call or voice information, the receiver 1402 can be close to the ear to receive voice. In some possible examples, receiver 1402 may serve as one of the sound emitting components.
In particular, the microphone 1403 may be used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal into the microphone 1403 by speaking close to it. Additionally, the electronic device may be provided with at least one microphone 1403. In some possible examples, the electronic device may provide two microphones 1403, which can implement a noise reduction function in addition to collecting sound signals; in some possible examples, the electronic device may further provide three, four or more microphones 1403, which, in addition to collecting sound signals and reducing noise, may also be used to identify the sound source so as to implement a directional recording function, without particular limitation.
In particular, the headset interface 1404 may be used to connect wired headsets. The headset interface 1404 may be the USB interface 1803, a 3.5 mm Open Mobile Terminal Platform (OMTP) standard interface, a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface, or the like.
Specifically, the sensor module 150 may include an inertial sensor, a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, an ultra-wideband UWB sensor, a near field communication NFC sensor, a laser sensor, a visible light sensor, and/or the like.
The electronic device may implement a display function through the GPU, the display module 160, the processor 110, and the like. The GPU may be configured to perform mathematical and geometric calculations and perform graphics rendering, among other things. Additionally, the GPU may be a microprocessor for image processing and connect the display module 160 and the processor 110. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
Specifically, the display module 160 may be a display screen for displaying images, videos, and the like. The display module 160 may include a display panel, among others. The display panel may be a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a quantum dot light-emitting diode (QLED), or the like. In some possible examples, the electronic device may include one or more display modules 160.
The electronic device may implement a shooting function through the ISP, the DSP, the camera module 170, the video codec, the GPU, the display module 160, the processor 110, and the like. The ISP may be used to process data fed back by the camera module 170. For example, when taking a picture, the shutter is opened first, then the light is transmitted to the photosensitive element of the camera module 170 through the lens of the camera module 170, so that the optical signal is converted into an electrical signal, and finally the electrical signal is transmitted to the ISP through the photosensitive element to be processed into a digital image and the like. In addition, the ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some possible examples, the ISP and/or DSP may be provided in the camera module 170.
In particular, the camera module 170 may be a camera or a camera module, which is used to capture or shoot still/moving images or videos. The image capturing module 170 may include a lens, a photosensitive element, and the like, and the photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. Therefore, the stereoscopic object can generate an optical image through the lens and project the optical image to the photosensitive element. The light sensing element can convert the optical signal in the optical image into an electrical signal, which is then transmitted to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP. The DSP converts the digital image signal into an image signal in a standard RGB, YUV or other format. In some possible examples, the electronic device may include one or more camera modules 170.
In particular, the charge management module 180 may be configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some possible examples, the charging management module 180 may receive charging input of a wired charger through the USB interface 1803. In some possible examples, the charging management module 180 may receive a wireless charging input through a wireless charging coil of the electronic device. While the charging management module 180 charges the battery 1801, power may be supplied to the electronic device through the power management module 1802.
It should be noted that the power management module 1802 may be used to connect the battery 1801, the charge management module 180, and the processor 110. The power management module 1802 receives input from the battery 1801 and/or the charging management module 180, and provides power to various modules in the electronic device, the processor 110, and the like.
Specifically, the power management module 1802 may also be configured to monitor parameters such as battery capacity, battery cycle count, and battery state of health (leakage, impedance). In some possible examples, the power management module 1802 may also be disposed in the processor 110; in some possible examples, the power management module 1802 and the charge management module 180 may also be provided in the same device.
It is noted that the internal memory 1901 may be used for storing computer executable program code, which includes instructions. The processor 110 executes various functional applications and data processing of the electronic device by executing instructions stored in the internal memory 1901. In some possible examples, the internal memory 1901 stores program codes for executing the technical solutions of the embodiments of the present application.
Specifically, the internal memory 1901 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (e.g., a sound playing function, an image playing function, etc.) required for at least one function, and the like. The storage data area may store data (e.g., audio data, a phonebook, etc.) created during use of the electronic device, and the like. In addition, the internal memory 1901 may include a high-speed random access memory, and may further include a nonvolatile memory. Such as at least one magnetic disk storage device, flash memory device, Universal Flash Storage (UFS), etc.
In particular, the external memory interface 1902 may be used to connect an external memory card, such as a micro SD card, to extend the storage capability of the electronic device. The external memory card communicates with the processor 110 through the external memory interface 1902 to implement a data storage function. For example, files such as music, video, and the like are saved in an external memory card.
In the embodiment of the present application, a software system of an electronic device may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. In the following, the embodiment of the present application takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of an electronic device.
Fig. 2 is a schematic diagram of an architecture of a software and hardware system with an Android system. The internal memory 1901 may store therein a kernel layer 220, a system runtime layer 240, an application framework layer 260, and an application layer 280. Wherein, the layers communicate with each other through a software interface, and the kernel layer 220, the system runtime layer 240 and the application framework layer 260 belong to an operating system space.
Specifically, the application layer 280 belongs to a user space, and at least one application program (or simply "application") runs in the application layer 280, and the application program may be a native application program carried by an operating system itself or a third-party application program developed by a third-party developer. For example, the application layer 280 may include applications such as camera, gallery, calendar, phone, map, navigation, WLAN, bluetooth, music, video, and short messages.
It should be noted that the application framework layer 260 provides various Application Programming Interfaces (APIs) and programming frameworks that may be used by the application programs that build the application layer, so that developers can build their own application programs by using these APIs. For example, a window manager (window manager), a content provider (content providers), a view system (view system), a telephone manager (telephone manager), a resource manager, a notification manager (notification manager), a message manager, an activity manager (activity manager), a package manager (package manager), a location manager (location manager), and an NFC service, etc.
In particular, a window manager may be used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
In particular, the content provider may be used to store and retrieve data and make the data accessible to applications. The data may include, among other things, video, images, audio, calls made and answered, browsing history and bookmarks, phone books, and the like. In addition, the content provider may enable an application to access data of another application, such as a contacts database, or to share their own data.
In particular, the view system includes visual controls. For example, controls for displaying text, controls for displaying pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
In particular, a phone manager is used to provide communication functions for the electronic device. For example, management of call status (e.g., on, off, etc.).
In particular, the resource manager may provide various resources for the application. Such as localized strings, icons, pictures, layout files, video files, etc.
Specifically, the notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. For example, a notification manager is used to notify download completion, message alerts, and the like. The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system. Additionally, the notification of the application running in the background may also be a notification that appears on the screen in the form of a dialog window. For example, text messages are prompted in the status bar, a prompt tone is sounded, the electronic device vibrates, indicator lights flash, and the like.
Specifically, the message manager may be configured to store data of messages reported by each application program, and process the data reported by each application program.
In particular, the activity manager may be used to manage the application lifecycle and provide common navigation fallback functionality. In one possible example, the message manager may be part of the notification manager.
It should be noted that the system runtime library layer 240 provides main feature support for the Android system through a number of C/C++ libraries. For example, the SQLite library provides database support, the OpenGL/ES library provides 3D drawing support, and the Webkit library provides browser kernel support. The system runtime layer 240 also provides the Android Runtime library (Android Runtime), which mainly provides some core libraries that allow developers to write Android applications using the Java language.
Specifically, the kernel layer 220 may provide underlying drivers for various hardware of the electronic device, such as a display driver, an audio driver, a camera driver, a bluetooth driver, a Wi-Fi driver, a power management, an NFC driver, a UWB driver, and the like.
Before describing the technical solutions of the embodiments of the present application, the following will specifically explain related concepts that may be involved in the present application.
1. Camera imaging
In image measurement processes and machine vision applications, in order to determine the relationship between the three-dimensional geometric position of a point on a spatial object and its corresponding point in the image, a geometric model of camera imaging must be established; the parameters of this geometric model are the camera parameters (such as the camera internal parameters, the camera external parameters and the distortion parameters). In most cases, the camera parameters need to be obtained through experiments and calculation, and the process of solving the camera parameters is called camera calibration. A certain amount of distortion exists in the imaging process of a camera; distortion is a deviation from linear projection and may be caused by the camera lens.
In general, camera imaging can be divided into rigid body transformation (from the world coordinate system to the camera coordinate system), perspective projection (from the camera coordinate system to the image coordinate system), distortion correction, image digitization, and the like. The details are as follows:
the coordinate system in the rigid body transformation satisfies the following coordinate transformation:
(Xc, Yc, Zc) = Q(Xw, Yw, Zw);
where the matrix Q is determined by the matrix R and the vector T; the matrix R represents a rotation matrix; the vector T represents a translation vector; the coordinates (Xc, Yc, Zc) represent coordinates in the camera coordinate system; and the coordinates (Xw, Yw, Zw) represent coordinates in the world coordinate system. It can be understood that a rigid body transformation only changes the spatial position (translation) and orientation (rotation) of an object, not its shape. The rotation matrix R and the translation vector T are also referred to as the camera external parameters. That is, the camera external parameters determine the transformation of a point on a spatial object from the world coordinate system to the camera coordinate system and describe the position and orientation of the camera in the world coordinate system.
Perspective projection involves a coordinate transformation from the camera coordinate system to the image coordinate system. Referring to FIG. 3, the image coordinate system includes an image pixel coordinate system (the u-O0-v coordinate system) and an image physical coordinate system (the x-O1-y coordinate system). The image pixel coordinate system takes the upper left corner of the image as the origin O0 and is in units of pixels. The image physical coordinate system takes the geometric center point of the image (also called the principal point, O1) as the origin and is in millimeters. The coordinate systems in perspective projection satisfy the following coordinate transformation:
(u, v, 1) = A/Zc · (Xc, Yc, Zc);
where the coordinates (u, v, 1) are the homogeneous coordinates in the image pixel coordinate system; the matrix A represents the camera internal parameters; the matrix A is determined by dx, dy, f and the coordinates (u0, v0); dx represents the size ratio in the x direction (mm/pixel); dy represents the size ratio in the y direction (mm/pixel); f represents the camera focal length; and the coordinates (u0, v0) represent the coordinates of the principal point.
The coordinate transformation relation determined by ideal pinhole imaging is linear. In practice, for a real camera, lens distortion occurs because the lenses of the lens assembly refract light rays irregularly; that is, the coordinates of an image point calculated according to the ideal pinhole imaging model deviate from the actual coordinates. Introducing distortion makes the geometric transformation relation in the pinhole imaging model non-linear, which increases the complexity of the model but brings it closer to the real situation. The deviation caused by distortion can be classified into radial distortion and tangential distortion.
The distortion types can be classified into radial distortion and tangential distortion. The reason for forming the radial distortion is that the lens manufacturing process is not perfect, so that the shape of the lens has defects, and the radial distortion includes pincushion distortion, barrel distortion and the like. The tangential distortion includes thin lens distortion, centrifugal distortion, and the like. The reason for the formation of thin lens distortion is that there is some slight tilt of the lens; the reason for the centrifugal distortion is that the lens barrel is formed by combining a plurality of lenses, and the optical axes of the lenses are not on the same center line. After distortion of the lens is introduced, the coordinate transformation relation of the image point from the ideal image coordinate system to the real image coordinate system satisfies the following conditions:
x' = x(1 + k1·r^2 + k2·r^4 + k3·r^6) + 2·p1·x·y + p2·(r^2 + 2x^2);
y' = y(1 + k1·r^2 + k2·r^4 + k3·r^6) + p1·(r^2 + 2y^2) + 2·p2·x·y;
r^2 = x^2 + y^2;
where the coordinates (x, y) represent coordinates in the real image coordinate system (i.e., coordinates in the distorted image); the coordinates (x', y') represent coordinates in the ideal image coordinate system; k1, k2 and k3 represent the radial distortion parameters; and p1 and p2 represent the tangential distortion parameters. The radial distortion parameters and the tangential distortion parameters may be collectively referred to as the distortion parameters.
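As a concrete illustration of the camera model above (not part of the patent text), the following sketch chains the rigid body transformation, the ideal projection and the k1/k2/k3/p1/p2 distortion terms to map a world point to pixel coordinates. Note that, following the usual convention, it applies the distortion terms to the ideal normalized coordinates, and all parameter values in the usage example are arbitrary illustrative numbers rather than calibration results.

```python
import numpy as np


def world_to_distorted_pixel(Pw, R, T, fx, fy, u0, v0, k1, k2, k3, p1, p2):
    """Rigid transform -> normalized projection -> distortion -> pixels."""
    # Rigid body transformation: world coordinates to camera coordinates.
    Xc, Yc, Zc = R @ Pw + T

    # Ideal (undistorted) normalized image coordinates.
    x, y = Xc / Zc, Yc / Zc

    # Radial and tangential distortion, using the same k1, k2, k3, p1, p2
    # parameters named in the text.
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y

    # Intrinsics: normalized coordinates to pixel coordinates, where
    # fx = f/dx, fy = f/dy and (u0, v0) is the principal point.
    return fx * x_d + u0, fy * y_d + v0


# Example usage with arbitrary illustrative parameters.
R = np.eye(3)
T = np.array([0.0, 0.0, 0.0])
u, v = world_to_distorted_pixel(np.array([0.1, -0.05, 2.0]), R, T,
                                fx=1000.0, fy=1000.0, u0=960.0, v0=540.0,
                                k1=-0.3, k2=0.1, k3=0.0, p1=0.001, p2=-0.0005)
```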
The light rays pass through the camera lens and then are imaged on a photosensitive array (CCD or CMOS), and then the photosensitive array converts the optical signals into electric signals to finally form a digital image.
2. Perspective projection
The imaging principle of perspective projection and the cause of the stretching deformation are described and analyzed below. Perspective projection is also pinhole imaging, as shown in fig. 4, where θ represents the angle between the ray (the line through the object point, the optical center and the image point) and the optical axis, and f is the camera focal length. The distance L from the image point formed on the imaging plane to the optical axis satisfies:
L = f·tan(θ);
As θ increases, L increases with an ever steeper slope; when θ is equal to 90 degrees, L becomes infinite. Thus, for linear perspective projection, the size of the image sensor determines the field-of-view range of the imaging. To obtain a larger field of view, one can start with the optical lens: a wide-angle/ultra-wide-angle lens uses a non-linear projection. The non-linear projection contracts radially when imaging regions far from the optical axis, resulting in distortion of the picture at the edges or corners of the image. The conventional distortion correction algorithm is a process of converting such a non-linear projection into a linear perspective projection.
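A few illustrative numbers (with an arbitrarily chosen focal length) make the steep growth of L concrete:

```python
import math

f = 4.0  # focal length in mm (illustrative value only)
for theta_deg in (10, 30, 50, 70, 85, 89):
    L = f * math.tan(math.radians(theta_deg))
    print(f"theta = {theta_deg:2d} deg  ->  L = {L:8.2f} mm")
# L grows slowly at first and then explodes as theta approaches 90 degrees,
# which is why a larger field of view demands a disproportionately large
# sensor under linear perspective projection.
```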
It should be noted that the conventional distortion correction algorithm may use the camera parameters (e.g., dx, dy, f, u0, v0) and the distortion parameters (e.g., k1, k2, k3, p1, p2) to perform distortion correction on the distorted image to obtain a perspective projection image (i.e., a distortion-removed image). In the embodiments of the present application, this mode is also referred to as the perspective projection model.
Although the lines twisted by distortion are straightened in the perspective projection image, a near-large/far-small perspective effect, that is, a stretching phenomenon, is produced at the same time. The reason for the perspective effect is that there is an included angle between a certain side surface of the object and the imaging plane, so that the end closer to the imaging plane is imaged larger, and the end farther from the imaging plane is imaged smaller.
When a planar object such as a checkerboard calibration plate is photographed, if the checkerboard plane is parallel to the imaging plane, no geometric deformation occurs in any of the squares after imaging, and even the squares at the edge or corner positions undergo no stretching deformation. However, if the checkerboard calibration plate is slightly inclined so as to form an angle with the imaging plane, the near-large/far-small perspective effect appears, that is, the squares at the edges or corners undergo stretching deformation.
Referring to fig. 5, the three-dimensional object is divided into two equal parts on the object plane. When the object plane is parallel to the imaging plane, by similar triangles the two parts remain two equal parts in the projected image.
Referring to fig. 6, when the object plane forms an angle with the imaging plane, the two equal portions are no longer equal in the projected image. The end of the object plane closer to the imaging plane is noticeably widened after imaging, presenting the near-large/far-small perspective effect, and the stretching at the edges becomes more obvious the closer they are to the imaging plane.
Because a three-dimensional object always has some side surface or tangent plane that forms an included angle with the imaging plane, when a three-dimensional object is photographed, objects located at the edges or corners of the field of view are prone to the stretching effect, which distorts the geometric shape and geometric proportions of the object. For example, when a portrait is taken, because the head and body are three-dimensional, people at the edges or corners of the field of view always have some side surfaces forming an angle with the imaging plane, so the captured portrait exhibits stretching effects to different degrees.
3. Spherical projection
Spherical projection, also called stereographic projection, is an angle-preserving projection method that projects a captured planar image onto a spherical surface whose radius is the focal length of the camera; its advantage is that the angles in the image are well preserved. Therefore, if the perspective projection image is projected again by the spherical projection method to obtain a spherical projection image, the geometric shape and geometric proportions of three-dimensional objects at the edge or corner positions of the spherical projection image can be restored, but long straight lines in the background of the image are bent to a certain degree.
4. Cylindrical projection
Cylindrical projection is a projection method that projects a captured planar image onto a cylindrical surface whose radius is the focal length of the camera; its advantage is that vertical lines are kept free of bending deformation. Similarly, if the perspective projection image is projected again by the cylindrical projection method to obtain a cylindrical projection image, the stretching deformation on both sides of the cylindrical projection image can be restored to a certain degree, but long straight lines in the horizontal direction are bent to a certain degree.
For an original planar image whose image size has a height of H (in pixels) and a width of W (in pixels), the coordinates on the image coordinate system can be expressed by the coordinate transformation relation of the cylindrical projection as:
x' = arctan((x - W/2)/f) + arctan(W/(2f));
y' = f·(y - H/2)/sqrt((x - W/2)^2 + f^2) + H/2;
where the coordinates (x, y) represent coordinates on the image coordinate system of the planar image, with the upper left corner of the original planar image as the origin; the coordinates (x', y') represent the coordinates (x, y) after cylindrical projection; f represents the camera focal length; H represents the height of the original planar image; W represents the width of the original planar image; and sqrt represents the arithmetic square root. Alternatively,
x' = arctan(x/f);
y' = cos(arctan(x/f));
where the coordinates (x, y) represent coordinates on the image coordinate system of the planar image, with the geometric center of the original planar image as the origin.
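The first pair of formulas can be evaluated per pixel as in the sketch below (an illustration, not the patent's implementation). The expressions are coded exactly as written above; in practice the arctan term is often additionally scaled by f so that x' comes back in pixel units, but that scaling is not part of the formulas given here.

```python
import numpy as np


def cylindrical_coordinates(W, H, f):
    """Return (x', y') for every pixel (x, y) of a W-by-H planar image,
    using the cylindrical transformation given above (origin at the
    top-left corner of the original image)."""
    x, y = np.meshgrid(np.arange(W, dtype=np.float64),
                       np.arange(H, dtype=np.float64))
    x_prime = np.arctan((x - W / 2) / f) + np.arctan(W / (2 * f))
    y_prime = f * (y - H / 2) / np.sqrt((x - W / 2) ** 2 + f ** 2) + H / 2
    return x_prime, y_prime


# Illustrative call; here f is assumed to be expressed in pixels.
xp, yp = cylindrical_coordinates(W=4000, H=3000, f=2800.0)
```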
In summary, perspective projection, spherical projection and cylindrical projection have the following advantages and disadvantages:
(1) The advantages of perspective projection: the lines twisted by distortion in the image can be straightened, so that the distortion is corrected. The disadvantages: three-dimensional objects at the edges or corners of the image may undergo stretching deformation, i.e., the near-large/far-small perspective effect, which distorts the geometric shape and geometric proportions in the image.
(2) The advantages of spherical projection: the geometric shape and the geometric proportion of the solid object positioned at the edge or corner position in the image can be maintained; the disadvantages are as follows: a degree of curvature may be caused to long straight lines in the background of the image.
(3) The advantages of cylindrical projection: the geometric shape and the geometric proportion of the three-dimensional object positioned at the left edge and the right edge in the image can be kept, and meanwhile, the vertical lines are not bent; the disadvantages are as follows: long straight lines in the background of the image can be caused to bend in the horizontal direction.
In a planar image captured by a wide-angle/ultra-wide-angle lens, the picture content located at the edges or corners of the image exhibits a certain degree of distortion. A conventional distortion correction algorithm may use a perspective projection model to straighten the lines curved by distortion, but it simultaneously introduces a near-large/far-small perspective effect, that is, a stretching deformation. For example, when a portrait is taken, people at the edges or corners of the field of view undergo severe stretching deformation, which distorts the geometric shape and geometric proportions of the portrait. For portrait photography, users are therefore particularly sensitive to such portrait distortion and find it unacceptable.
In conjunction with the above description, the steps performed by the image distortion correction method will be described below in terms of method examples, please refer to fig. 7. Fig. 7 is a schematic flowchart of an image distortion correction method provided in an embodiment of the present application, where the method includes:
and S710, acquiring an input image and face detection information.
The face detection information may include face indication information and/or position indication information; the face indication information can be used for indicating whether a face region exists in the input image; the position indication information may be used to indicate whether first position information exists, and the first position information may be used to indicate position information of the face region in the input image.
It should be noted that the input image in the embodiment of the present application may represent an image captured by the camera module 170 of the electronic device, may represent an image stored in the internal memory 1901 of the electronic device, and may represent an image read by the external memory interface 1902 of the electronic device, which is not limited in particular. The camera module 170 may include a wide/ultra-wide lens, among others.
It will be appreciated that the input image may be an image captured by a wide/ultra-wide angle lens. Due to the imaging characteristics of the wide-angle/super-wide-angle lens (i.e., the wide-angle/super-wide-angle lens uses non-linear projection), a picture in the input image may have a certain distortion, and in particular, a picture located at an edge or a corner in the input image has a serious distortion phenomenon.
Second, the input image may be an image captured for portrait or non-portrait photographing. Therefore, the embodiment of the application can detect whether the face region exists in the input image and/or the specific position of the face region in the input image through a face detection algorithm, so as to obtain the face detection information. The face detection information may include face indication information and/or position indication information.
Finally, if the face detection information includes face indication information, the face indication information may specifically indicate that a face region exists in the input image, or may specifically indicate that a face region does not exist in the input image, which specifically needs to be determined according to whether the electronic device has a face detection function.
If the face detection information includes face indication information and position indication information, and a face region does not exist in the input image (indicated by the face indication information), the position indication information may specifically indicate that there is no position information of the face region in the input image (i.e., first position information).
If the face detection information includes face indication information and position indication information, and a face region (indicated by the face indication information) exists in the input image, the position indication information may specifically indicate that the first position information exists, or may specifically indicate that the first position information does not exist, and specifically, the determination needs to be performed according to whether the electronic device has a position detection function in the face detection.
If a face region exists in the input image and the face region is located at the geometric center of the input image (as indicated by the first position information), the face picture in the face region has relatively small distortion. In this case, the face picture may be left without distortion removal processing, or may be processed with a conventional distortion correction algorithm (such as the perspective projection model) without producing a significant near-large/far-small perspective effect. The first position information may be used to determine whether the face region is located at the geometric center of the input image, at an edge or corner of the input image, and so on.
If a face region exists in the input image and the face region is located at an edge or corner of the input image (indicated by the first position information), the face picture in the face region will have severe distortion. At this time, if a conventional distortion correction algorithm (such as a perspective projection model) is used to remove the distortion from the face picture, that is, the lines bent by the distortion are straightened, the face picture will exhibit a near-large, far-small perspective effect, that is, a stretching phenomenon, thereby distorting the geometric shape and geometric proportion of the face picture.
Based on the above, according to the distortion correction strategy of image fusion, the embodiment of the application selectively combines and fuses perspective projection, cylindrical projection and spherical projection to obtain the different corresponding coordinate information in the input image to which the coordinate information of the same sampling point of the output image is mapped; then, the different coordinate information is selectively weighted and fused according to whether a face region exists in the input image (indicated by the face indication information) and the position information of the face region in the input image (indicated by the first position information), and the weights in the selective weighted fusion are adaptively calculated from the first position information, which will be described later.
S720, determining first coordinate mapping information according to the image size of the input image, the preset image fusion model and the face detection information.
The first coordinate mapping information can be used for representing a coordinate mapping relation between coordinate information of each sampling point of the output image and corresponding coordinate information in the input image; the preset image fusion model may include at least one of a perspective projection model, a cylindrical projection model, and a spherical projection model.
The output image of the embodiment of the present application may be used to represent an image subjected to distortion correction processing from an input image. Wherein the output image and the input image have the same image size, i.e. the same length and width. Therefore, the image size of the input image is equal to the image size of the output image.
Secondly, the image distortion correction algorithm mainly calculates the coordinate mapping relationship between the coordinate information of the distorted image (such as the input image) and the coordinate information of the undistorted image (such as the output image). Since the (0, 0) coordinate on the distorted image may be mapped to a negative coordinate of the undistorted image (for example, (-80, 0) or (-80, -50)), and such negative coordinates do not exist in an image, it is necessary to use the fact that the distorted image and the undistorted image have the same image size to calculate the coordinate mapping relationship between each coordinate information of the undistorted image and the corresponding coordinate information in the distorted image, and then perform distortion correction processing on the distorted image according to the coordinate mapping relationship to obtain the undistorted image.
For example, the coordinate (u, v) of the undistorted image is calculated to be mapped to the corresponding coordinate (u', v') in the distorted image; then, the pixel value at (u', v') in the distorted image is obtained and used as the pixel value at (u, v); the remaining pixel values are obtained in sequence by analogy, so as to obtain the undistorted image.
Finally, in order to avoid calculating a point-to-point coordinate mapping relationship between the coordinate information of each pixel point of the output image and the coordinate information of the corresponding pixel point in the input image, which results in a large calculation amount and high calculation complexity, in the embodiment of the present application, the coordinate information of the output image needs to be sampled first, and then the coordinate mapping relationship between the coordinate information of each sampling point of the output image and the corresponding coordinate information in the input image is calculated, so that the calculation efficiency is improved, and the calculation complexity is reduced.
Specifically, the preset image fusion model may be configured to determine a distortion correction strategy for preset image fusion, where the distortion correction strategy for preset image fusion may include at least one of: performing perspective projection on the input image (to obtain the output image); performing perspective projection on the input image (to obtain a perspective projection image) and then performing cylindrical projection (to obtain the output image); and performing perspective projection on the input image (to obtain a perspective projection image) and then performing spherical projection (to obtain the output image).
It should be noted that, according to the distortion correction strategy of image fusion, the embodiment of the present application may selectively combine and fuse a perspective projection model, a cylindrical projection model, and a spherical projection model, so as to implement perspective projection of the input image (to obtain the output image), perspective projection of the input image (to obtain a perspective projection image) followed by cylindrical projection (to obtain the output image), and perspective projection of the input image (to obtain a perspective projection image) followed by spherical projection (to obtain the output image).
In addition, usually, a coordinate mapping relationship between each coordinate information of the undistorted image and the corresponding coordinate information in the distorted image needs to be calculated, and then distortion correction is performed on the distorted image according to the coordinate mapping relationship to obtain the undistorted image. Therefore, calculating (determining) that each sampling point of the output image is mapped to corresponding coordinate information in the input image according to the perspective projection model is equivalent to performing perspective projection on the input image (to obtain the output image).
Similarly, the corresponding coordinate information of each sampling point of the output image mapped to the perspective projection image is calculated (determined) according to the cylindrical projection model, the corresponding coordinate information of the perspective projection image mapped to the corresponding coordinate information of the input image is calculated (determined) according to the perspective projection model, and equivalently, the input image is subjected to perspective projection (to obtain the perspective projection image) and then to cylindrical projection (to obtain the output image).
Similarly, the corresponding coordinate information of each sampling point of the output image mapped to the perspective projection image is calculated (determined) according to the spherical projection model, the corresponding coordinate information of the perspective projection image mapped to the corresponding coordinate information of the input image is calculated (determined) according to the perspective projection model, and equivalently, the input image is subjected to perspective projection (to obtain the perspective projection image) and then spherical projection (to obtain the output image).
Specifically, the perspective projection model in the preset image fusion model may be determined by camera calibration parameters. The camera calibration parameters comprise camera internal parameters and distortion parameters.
It should be noted that the perspective projection model according to the embodiment of the present application may be a distortion correction algorithm, and the distortion correction algorithm may perform distortion correction processing on the input image through the camera internal parameter and the distortion parameter.
In addition, as can be seen from the above description of "camera imaging", the camera intrinsic parameters may include at least one of a size ratio dx, a size ratio dy, principal point coordinates (u0, v0), and a camera focal length f; the distortion parameters may include at least one of radial distortion parameters (e.g., k1, k2 and k3) and tangential distortion parameters (e.g., p1 and p2).
For example, the coordinate (u, v) of the output image is mapped by the perspective projection model to the corresponding coordinate (u', v') in the distorted image, and the following relations are satisfied:
u = x/dx + u0;
v = y/dy + v0;
x' = x·(1 + k1·r² + k2·r⁴ + k3·r⁶) + 2·p1·x·y + p2·(r² + 2·x²);
y' = y·(1 + k1·r² + k2·r⁴ + k3·r⁶) + p1·(r² + 2·y²) + 2·p2·x·y;
r² = x² + y²;
u' = x'/dx + u0;
v' = y'/dy + v0.
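For readability, the mapping defined by the above relations can be sketched in Python as follows; this is only an illustrative reading of the listed formulas (the parameter names dx, dy, u0, v0, k1, k2, k3, p1 and p2 follow the formulas above, and the function name is assumed), not the claimed implementation.

```python
def perspective_distort_map(u, v, dx, dy, u0, v0, k1, k2, k3, p1, p2):
    """Map a coordinate (u, v) of the undistorted output image to the
    corresponding coordinate (u', v') in the distorted input image,
    following the relations listed above."""
    # normalized plane coordinates of the output pixel: u = x/dx + u0, v = y/dy + v0
    x = (u - u0) * dx
    y = (v - v0) * dy
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    # apply the radial and tangential distortion terms
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    # back to pixel coordinates of the distorted input image: u' = x'/dx + u0, v' = y'/dy + v0
    u_d = x_d / dx + u0
    v_d = y_d / dy + v0
    return u_d, v_d
```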
specifically, the cylindrical projection model in the preset image fusion model may be determined by camera internal parameters. Wherein the camera parameters may include a camera focal length.
It should be noted that, the cylindrical projection model in the embodiment of the present application may project a captured planar image onto a cylindrical image with a focal length of a camera as a radius, so as to be beneficial to maintaining the geometric shape and geometric proportion of a solid object located at left and right edges in the planar image, and meanwhile, it is also possible to prevent a vertical line in the planar image from being curved.
Specifically, the spherical projection model in the preset image fusion model may be determined by camera internal parameters. Wherein the camera parameters may include a camera focal length.
It should be noted that the spherical projection model according to the embodiment of the present application can project a captured planar image onto a spherical image with a focal length of a camera as a radius, so as to be beneficial to maintaining the geometric shape and geometric scale of a solid object located at an edge or a corner in the planar image.
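The text does not give explicit formulas for the cylindrical and spherical projection models, so the following sketch uses one common formulation in which the camera focal length f is taken as the projection radius; the formulas and function names are assumptions for illustration only, not the patented models.

```python
import math

def cylindrical_to_plane(uc, vc, f, u0, v0):
    # Map a point of the cylindrical image (radius f) back to the planar
    # (perspective) image; assumed common formulation, not from the patent.
    theta = (uc - u0) / f                     # angle around the cylinder axis
    x = f * math.tan(theta) + u0              # planar horizontal coordinate
    y = (vc - v0) / math.cos(theta) + v0      # planar vertical coordinate
    return x, y

def spherical_to_plane(us, vs, f, u0, v0):
    # Map a point of the spherical image (radius f) back to the planar
    # (perspective) image; assumed common formulation, not from the patent.
    theta = (us - u0) / f                     # longitude angle
    phi = (vs - v0) / f                       # latitude angle
    x = f * math.tan(theta) + u0
    y = f * math.tan(phi) / math.cos(theta) + v0
    return x, y
```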
With the above description, how to determine the first coordinate mapping information according to the image size of the input image, the preset image fusion model, and the face detection information will be specifically described below.
Specifically, the determining the first coordinate mapping information according to the image size of the input image, the preset image fusion model and the face detection information in S720 may include: sampling on an image coordinate according to the image size of an input image to obtain M sampling points of an output image, wherein the value of M is an integer greater than 1; determining target coordinate information according to a preset image fusion model, wherein the target coordinate information is used for representing that the coordinate information of a first sampling point is mapped to corresponding coordinate information in an input image, and the first sampling point is one of M sampling points; and determining first coordinate mapping information according to the face detection information, the target coordinate information and second position information, wherein the second position information is used for representing the position information of the first sampling point in the output image.
It should be noted that, in order to improve the calculation efficiency and reduce the calculation complexity, in the embodiment of the present application, sampling is performed on image coordinates according to an image size of an input image (the image sizes of the input image and the output image are the same) to obtain M sampling points of the output image, so that it is convenient to calculate that coordinate information of each sampling point of the output image is mapped to corresponding coordinate information in the input image, and the problem of large calculation amount caused by directly calculating a coordinate mapping relationship between a pixel point and a pixel point is avoided. Wherein, the sampling points can also be regarded as grid nodes. Meanwhile, the coordinate information of each sampling point can adopt a floating point type data type.
In addition, the image size is typically expressed as the width and height of the image, e.g., 80 px by 100 px, so that uniform sampling can be performed in steps over the width and height, respectively. For example, with a sampling step of 16, the coordinate information of the sampling point (grid node) in row 1, column 1 is P00 = (0.0, 0.0), the coordinate information of the sampling point (grid node) in row 1, column 2 is P01 = (0.0, 16.0), the coordinate information of the sampling point (grid node) in row 2, column 1 is P10 = (16.0, 0.0), and so on, as shown in fig. 8.
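As an illustrative sketch (the use of NumPy and the function name are assumptions), the grid of sampling points from the example above could be generated as follows; the coordinates follow the (row, column) convention of P00, P01 and P10 in the example.

```python
import numpy as np

def build_sampling_grid(width, height, step=16):
    """Return the floating-point coordinates of the grid nodes (sampling
    points) covering an output image of the given size."""
    ys = np.arange(0.0, height, step, dtype=np.float32)
    xs = np.arange(0.0, width, step, dtype=np.float32)
    # grid[i, j] holds the coordinate of the node in row i, column j, e.g.
    # grid[0, 0] = (0.0, 0.0), grid[0, 1] = (0.0, 16.0), grid[1, 0] = (16.0, 0.0)
    grid = np.stack(np.meshgrid(ys, xs, indexing="ij"), axis=-1)
    return grid
```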
And secondly, calculating, according to the preset image fusion model, the corresponding coordinate information in the input image to which the coordinate information of each sampling point of the output image is mapped. The coordinate information of a certain sampling point (i.e. the first sampling point) among the M sampling points of the output image is mapped to the corresponding coordinate information in the input image, which is the target coordinate information.
Again, the second position information may be used to determine whether the first sampling point is located at a geometric center point position of the output image or at an edge or corner position of the output image. Alternatively, the second position information may be used to determine whether the first sampling point is located in the face region.
And finally, determining first coordinate mapping information according to the face detection information, the target coordinate information and the position information of the first sampling point in the output image, so that the first coordinate mapping information is determined according to the image size of the input image, the preset image fusion model and the face detection information.
With the above description, how to determine the target coordinate information according to the preset image fusion model is specifically described below.
Specifically, the target coordinate information includes at least one of first target coordinate information, second target coordinate information, and third target coordinate information; determining the target coordinate information according to the preset image fusion model may include: determining that the coordinate information of the first sampling point is mapped to the corresponding coordinate information in the input image according to the perspective projection model to obtain the first target coordinate information; and/or,
determining that the coordinate information of the first sampling point is mapped to the corresponding coordinate information in the first image according to the cylindrical projection model, and determining that the corresponding coordinate information in the first image is mapped to the corresponding coordinate information in the input image according to the perspective projection model to obtain the second target coordinate information, wherein the first image is used for representing the image obtained by performing projection calculation on the input image according to the perspective projection model; and/or,
determining that the coordinate information of the first sampling point is mapped to the corresponding coordinate information in the first image according to the spherical projection model, and determining that the corresponding coordinate information in the first image is mapped to the corresponding coordinate information in the input image according to the perspective projection model to obtain the third target coordinate information.
It should be noted that, as can be seen from the above description, a distorted image captured at a wide angle/ultra-wide angle is projected through the perspective projection model to obtain a perspective projection image, and the picture at an edge or corner position in the perspective projection image exhibits a near-large, far-small perspective stretching effect, which distorts the geometric shape and geometric proportion of the picture.
Therefore, if the perspective projection image is projected again through the cylindrical projection model to obtain a cylindrical projection image, the stretching deformation at the two sides of the cylindrical projection image is alleviated to a certain extent, but long straight lines in the horizontal direction undergo a certain degree of bending deformation.
If the perspective projection image is projected again through the spherical projection model to obtain a spherical projection image, the picture at the edge or corner position in the spherical projection image can realize the recovery of the geometric shape and the geometric proportion, but the long straight line in the background of the image generates a certain degree of bending deformation.
Based on this, according to the distortion correction strategy of image fusion, the embodiment of the application selectively combines and fuses the perspective projection model, the cylindrical projection model and the spherical projection model, so that the coordinate information of the same sampling point (such as the first sampling point) of the output image is mapped to different corresponding coordinate information (such as the first target coordinate information, the second target coordinate information and the third target coordinate information) in the input image. In this way, the coordinate information of each sampling point of the output image mapped to the corresponding coordinate information in the input image is calculated through selective combination and fusion under the distortion correction strategy of image fusion, which is beneficial to ensuring the flexibility of image distortion correction and improving the processing efficiency and accuracy of image distortion correction.
For example, referring to fig. 9, the process of determining the target coordinate information according to the preset image fusion model is as follows:
sampling on an image coordinate according to the image size of an input image, and outputting a certain sampling point (namely a first sampling point) in M sampling points of an output image;
outputting the coordinate information of the first sampling point to be mapped to the corresponding coordinate information in the input image according to the following three image fusion modes:
the first output is the perspective projection result, i.e. only the input image is perspective projected to obtain the output image. Therefore, according to the perspective projection model, the coordinate information of the first sampling point is reversely calculated and mapped to the corresponding coordinate information in the input image, and the first target coordinate information, namely P, is obtained a =(x 1 ,y 1 )。
The second output is a cylindrical projection result, i.e. the input image is firstly subjected to perspective projection and then cylindrical projection to obtain an output image. Therefore, the coordinate information of the first sampling point is inversely calculated according to the cylindrical projection model and is mapped to the corresponding coordinate information in the perspective projection image (i.e. the first image), namely P w . Then, based on the perspective projection model, P is calculated reversely w Mapping to corresponding coordinate information in the input image to obtain second target coordinate information, i.e. P b =(x 2 ,y 2 )。
The third output is a spherical projection result, namely, the perspective projection is firstly carried out on the input image, and then the spherical projection is carried out to obtain an output image. Therefore, according to the spherical projection model, the coordinate information of the first sampling point is inversely calculated and mapped to the corresponding coordinate information in the perspective projection image (i.e. the first image), namely P v . Then, based on the perspective projection model, P is calculated reversely v Mapping to corresponding coordinate information in the input image to obtain third target coordinate information, i.e. P c =(x 3 ,y 3 );
Judging whether a face area exists in an input image or not; if the face area exists, selectively combining and weighting and fusing the first target coordinate information, the second target coordinate information and the third target coordinate information to obtain first coordinate mapping information; and if the human face area does not exist, using the first target coordinate information as coordinate information in the first coordinate mapping information, or using the second target coordinate information as coordinate information in the first coordinate mapping information, or using the third target coordinate information as coordinate information in the first coordinate mapping information.
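The flow of fig. 9 can be summarized in the following sketch; the three projection helpers are assumed to implement the inverse perspective, cylindrical and spherical mappings described above, and all names are illustrative rather than part of the claimed method.

```python
def candidate_coords_for_sample(p, persp_inv, cyl_inv, sphere_inv):
    """For one sampling point p of the output image, compute the three
    candidate coordinates in the input image described above."""
    # first output: perspective projection only
    P_a = persp_inv(p)              # first target coordinate information
    # second output: cylindrical model back to the perspective image,
    # then perspective model back to the input image
    P_w = cyl_inv(p)
    P_b = persp_inv(P_w)            # second target coordinate information
    # third output: spherical model back to the perspective image,
    # then perspective model back to the input image
    P_v = sphere_inv(p)
    P_c = persp_inv(P_v)            # third target coordinate information
    return P_a, P_b, P_c
```

Whether these candidates are then fused or one of them is used directly depends on the face detection information, as described next.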
With the above description, a specific description will be given below of how to determine the first coordinate mapping information based on the face detection information, the target coordinate information, and the second position information.
Specifically, if the face detection information includes face indication information and position indication information, determining the first coordinate mapping information according to the face detection information, the target coordinate information, and the second position information may include: if the face indication information specifically indicates that a face region exists in the input image, weighting and fusing at least two of the first target coordinate information, the second target coordinate information and the third target coordinate information according to the position indication information and the second position information to obtain the first coordinate mapping information; alternatively,
if the face indication information specifically indicates that no face region exists in the input image, using the first target coordinate information as the coordinate information in the first coordinate mapping information; alternatively,
if the face indication information specifically indicates that no face region exists in the input image, using the second target coordinate information as the coordinate information in the first coordinate mapping information; alternatively,
and if the face indication information specifically indicates that no face region exists in the input image, taking the third target coordinate information as coordinate information in the first coordinate mapping information.
It should be noted that, in the embodiment of the present application, a perspective projection model, a cylindrical projection model, and a spherical projection model are selectively combined and fused according to a distortion correction strategy for image fusion, so that coordinate information of a same sampling point (e.g., a first sampling point) of an output image is mapped to corresponding different coordinate information (e.g., first target coordinate information, second target coordinate information, and third target coordinate information) in an input image.
Then, according to whether a face region exists in the input image and the position information (namely, first position information) of the face region in the input image, selective weighted fusion is carried out on the different coordinate information to obtain first coordinate mapping information. Wherein the weights in the selective weighted fusion are adaptively calculated from the first location information.
Therefore, the distortion correction strategy of image fusion, the existence of the human face region in the input image and the position information of the human face region are selectively combined and weighted and fused, and the coordinate information of each sampling point of the output image is calculated and mapped to the corresponding coordinate information in the input image, so that the flexibility of image distortion correction is favorably ensured, and the processing efficiency and the accuracy of image distortion correction are favorably improved.
It should be further noted that, in the embodiment of the present application, since the human image or the non-human image may be captured through the wide-angle/super-wide-angle lens to acquire the input image, a human face picture may exist in the input image, and the human face image may also have a certain distortion. The human face detection algorithm can detect whether a region where a human face picture is located, namely a human face region, exists.
If the input image has the face area, the fact that the face picture exists in the input image is indicated. In order to ensure the flexibility of distortion correction of a face picture and improve the processing efficiency and accuracy of the distortion correction, the embodiment of the application can map the coordinate information of the first sampling point to corresponding different coordinate information in an input image for selective weighted fusion according to the position indication information (used for judging whether the specific position of the face region in the input image is acquired) and the second position information (used for judging whether the first sampling point is located in the face region or not, or used for judging whether the first sampling point is located at the geometric center point of the output image or at the edge or corner of the output image), so as to realize the distortion correction processing based on image fusion on the input image.
If no face region exists in the input image, it indicates that no face picture exists in the input image. Therefore, the embodiment of the present application may perform perspective projection only on the input image (i.e., the first target coordinate information is used as the coordinate information in the first coordinate mapping information); or, the input image may be subjected to perspective projection and then to cylindrical projection (i.e., the second target coordinate information is used as the coordinate information in the first coordinate mapping information); or, the input image may be subjected to perspective projection and then to spherical projection (i.e., the third target coordinate information is used as the coordinate information in the first coordinate mapping information). The choice needs to be made according to the respective advantages of perspective projection, cylindrical projection, and spherical projection, so as to ensure the flexibility of image distortion correction.
With reference to the above description, how to perform weighted fusion on at least two of the first target coordinate information, the second target coordinate information, and the third target coordinate information according to the position indication information and the second position information to obtain the first coordinate mapping information will be specifically described below.
The first method is as follows:
specifically, performing weighted fusion on at least two of the first target coordinate information, the second target coordinate information, and the third target coordinate information according to the position indication information and the second position information to obtain the first coordinate mapping information may include: and if the position indication information specifically indicates that the first position information does not exist, performing linear weighted fusion on two of the first target coordinate information, the second target coordinate information and the third target coordinate information according to the second position information to obtain first coordinate mapping information.
It should be noted that the electronic device may detect the input image through a face detection function, so as to determine whether a face region (indicated by the face indication information) exists in the input image. However, the electronic apparatus may not have a position detection function in the face detection. Therefore, when the face region exists in the input image, the electronic device cannot acquire the position information of the face region in the input image (i.e., the position indication information specifically indicates that the first position information does not exist). At this time, in the embodiment of the present application, the first target coordinate information, the second target coordinate information, and the third target coordinate information may be selectively and linearly weighted and fused according to the position information (represented by the second position information) of the first sampling point in the output image, so as to obtain the first coordinate mapping information.
For example, for an electronic device that does not support a face detection function, a group photo mode can be set, which is suitable for multiple persons to use in a group photo scene side by side, and a distortion correction strategy for image fusion can be set in advance. When the electronic equipment shoots the portrait photo in the group photo mode, the electronic equipment cannot know the position distribution of each face area in the portrait photo, so that the electronic equipment can carry out distortion correction processing on the portrait photo through preset image fusion and distortion correction strategies. The preset image fusion distortion correction strategy can perform linear weighted fusion on the first target coordinate information and the second target coordinate information; or, performing linear weighted fusion on the first target coordinate information and the third target coordinate information; or, linear weighted fusion is performed on the second target coordinate information and the third target coordinate information, and adaptive setting is required according to a specific use scene.
Therefore, whether the face region exists in the input image or not and the position information of the face region are selectively combined and weighted and fused, and the coordinate information of each sampling point of the output image is calculated and mapped to the corresponding coordinate information in the input image, so that the flexibility of image distortion correction is favorably ensured, and the processing efficiency and the accuracy of the image distortion correction are favorably improved.
Further, the second position information may include a first parameter value; each weight in the linear weighted fusion is determined by the first parameter value and a first preset parameter value; the first parameter value may be used to represent the radial distance from the first sampling point to the optical center point of the output image; the first preset parameter value may be used to represent one of the radial distances from the optical center point of the output image to the four corner points of the output image.
The first preset parameter value may be specifically used to represent a maximum value, a minimum value, or any value in radial distances from an optical center point of the output image to four corner points of the output image.
It should be noted that, in the embodiment of the present application, a polar coordinate system is established with an optical center point of an output image as an origin, and a radial distance (i.e., a first parameter value) from a first sampling point to the origin and a radial distance from four corner points of the output image to the origin are calculated, so that each weight in linear weighted fusion is determined by the first parameter value and a first preset parameter value. Wherein the optical center point of the output image and the geometric center point of the output image may not coincide.
In addition, whether the first sampling point is located at the edge or corner of the output image can be reflected through the first parameter value and the first preset parameter value. Therefore, the weight in the linear weighted fusion can also be determined by the position information of the first sampling point in the output image.
Illustratively, referring to fig. 10, a person 1010 is present in the input image, and the embodiment of the present application approximates the position of the person 1010 in the input image to the position in the output image 1000. Where point 1021 represents a first sample point, point 1031 represents the optical center point of the output image 1000, and points 1041, 1042, 1043, and 1044 represent the four corner points of the output image 1000, respectively.
Then, by establishing a polar coordinate system with point 1031 as the origin, the radial distance from point 1021 to point 1031 is calculated as R (i.e., the first parameter value), the radial distance from point 1031 to point 1041 is R1, the radial distance from point 1031 to point 1042 is R2, the radial distance from point 1031 to point 1043 is R3, and the radial distance from point 1031 to point 1044 is R4. The first preset parameter value may be the maximum value, the minimum value, or any one of R1, R2, R3 and R4.
Further, performing linear weighted fusion on two of the first target coordinate information, the second target coordinate information, and the third target coordinate information according to the second position information to obtain the first coordinate mapping information, which may include: calculating the first parameter value and a first preset parameter value according to a first preset formula to obtain first coordinate mapping information, wherein the first preset formula satisfies the following conditions:
Ps = (1 - W)·Pm + W·Pn;
wherein Ps represents the coordinate information in the first coordinate mapping information, Pm represents one of the first target coordinate information, the second target coordinate information, and the third target coordinate information, Pn represents another one of the first target coordinate information, the second target coordinate information, and the third target coordinate information other than Pm, and W represents the ratio of the first parameter value to the first preset parameter value.
It should be noted that the closer the first parameter value is to the first preset parameter value, the closer the first sampling point is to an edge or corner position of the output image, i.e., the closer W is to 1 (W → 1), and the closer Ps is to Pn. At this time, if Pn represents the first target coordinate information, this is equivalent to performing perspective projection on the edge or corner position in the input image; if Pn represents the second target coordinate information, this is equivalent to performing perspective projection and then cylindrical projection on the edge or corner position in the input image; if Pn represents the third target coordinate information, this is equivalent to performing perspective projection and then spherical projection on the edge or corner position in the input image.
Similarly, the smaller the first parameter value is relative to the first preset parameter value, the closer the first sampling point is to the geometric center region of the output image, i.e., the closer W is to 0 (W → 0), and the closer Ps is to Pm.
In addition, since spherical projection and cylindrical projection can maintain the geometric shape and geometric proportion of a solid object at an edge or corner position in the image, when the first sampling point is closer to an edge or corner position of the output image, the combination in which Pn specifically represents the second target coordinate information or the third target coordinate information, and Pm specifically represents the first target coordinate information or the second target coordinate information, is preferentially selected.
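A minimal sketch of this first fusion mode follows, under the assumption that coordinates are handled as (row, column) pairs; the function name and signature are illustrative only.

```python
def fuse_mode_one(P_m, P_n, R, R_preset):
    """First preset formula: Ps = (1 - W)·Pm + W·Pn, with W the ratio of the
    first parameter value R (radial distance of the sampling point to the
    optical center) to the first preset parameter value R_preset."""
    W = R / R_preset
    return ((1.0 - W) * P_m[0] + W * P_n[0],
            (1.0 - W) * P_m[1] + W * P_n[1])
```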
The second method is as follows:
specifically, performing weighted fusion on at least two of the first target coordinate information, the second target coordinate information, and the third target coordinate information according to the position indication information and the second position information to obtain the first coordinate mapping information may include: and if the position indication information specifically indicates that the first position information exists, performing linear weighted fusion on the first target coordinate information, the second target coordinate information and the third target coordinate information according to the first position information and the second position information to obtain first coordinate mapping information.
It should be noted that the electronic device may have a position detection function in face detection. Therefore, when the face region exists in the input image, the electronic device can acquire the position information of the face region in the input image (i.e., the position indication information specifically indicates that the first position information exists).
In addition, the first position information may be used to determine whether the face region is located at the geometric center of the input image, or at the edge or corner of the input image, and the second position information may be used to determine whether the first sample point is located in the face region. Therefore, according to the embodiment of the application, the first target coordinate information, the second target coordinate information and the third target coordinate information can be subjected to linear weighted fusion according to the first position information and the second position information to obtain the first coordinate mapping information, so that the flexibility of image distortion correction can be ensured, and the processing efficiency and the accuracy of image distortion correction can be improved.
For example, for an electronic device supporting a face detection function, when the electronic device takes a portrait photo, if a face region in the portrait photo is located in an image center region with a small distortion degree, the weight for performing distortion correction on the face region is small, and the effect is basically equivalent to that of a non-face region; and if the face area is closer to the edge or corner area of the portrait photo, the larger the weight for carrying out distortion correction on the face area is, and the smooth transition is carried out on the non-face area.
Therefore, the coordinate information of each sampling point of the output image is calculated and mapped to the corresponding coordinate information in the input image by selectively combining and weighting and fusing whether the face area exists in the input image, the position information of the face area and the position information of the sampling point, so that the flexibility of image distortion correction is ensured, and the processing efficiency and the accuracy of image distortion correction are improved.
Further, the first position information may include a second parameter value, and the second position information may include a third parameter value; each weight in the linear weighted fusion is determined by the second parameter value, the third parameter value, a second preset parameter value, a third preset parameter value and a fourth preset parameter value; the second parameter value is used for representing the radial distance from the geometric center point of the face region to the optical center point of the input image; the third parameter value is used for representing the radial distance from the first sampling point to the geometric center point of the face region in the output image; the second preset parameter value is used for representing one of the radial distances from the optical center point of the input image to the four corner points of the input image; the third preset parameter value is used for representing the radius of an inscribed circle of the face region or the radius of a circumscribed circle of the face region; and the fourth preset parameter value is used for representing the product of a preset coefficient and the third preset parameter value, where the value of the preset coefficient is greater than 1.
It should be noted that, in the embodiment of the present application, a polar coordinate system is established with an optical center point of an input image as an origin, a radial distance (i.e., a second parameter value) from a geometric center point of a face region to the origin is calculated, and radial distances from four corner points of the input image to the origin are calculated. Consistent with the principle in the above-mentioned "manner one", whether the face region is located at the edge or corner of the output image can be reflected through the second parameter value and the second preset parameter value. Therefore, the weight in the linear weighted fusion can also be determined by the position information of the face region in the input image.
In addition, the embodiment of the application approximately equates the position of the human face region in the input image to the position in the output image. Based on the approximation, the embodiment of the application is convenient to establish a polar coordinate system by taking the geometric center point of the face region in the output image as the origin, and calculate the radial distance (i.e. the third parameter value) from the first sampling point to the origin, so that whether the first sampling point is located in the face region can be reflected through the third parameter value.
Illustratively, referring to fig. 11, a person 1110 is present in the input image, and the embodiment of the present application approximates the position of the person 1110 in the input image to the position in the output image 1100. Wherein, the frame 1121 represents the face region in the output image 1100, the circle 1123 represents an inscribed circle of the face region, the inscribed circle takes the point 1122 as a center, the circle 1124 represents an outer concentric circle of the inscribed circle, the point 1122 represents a geometric center point of the face region, and the point 1130 represents a first sampling point. Circle 1123 has a radius r1 (i.e., the third predetermined parameter value) and circle 1124 has a radius r2 (i.e., the fourth predetermined parameter value). Then, a polar coordinate system is established with the point 1122 as the origin, and the radial distance r (i.e., the third parameter value) from the point 1130 to the point 1122 is calculated.
Further, performing linear weighted fusion on the first target coordinate information, the second target coordinate information, and the third target coordinate information according to the first position information and the second position information to obtain the first coordinate mapping information, which may include: calculating a second parameter value, a third parameter value, a second preset parameter value, a third preset parameter value and a fourth preset parameter value according to a second preset formula to obtain first coordinate mapping information, wherein the second preset formula satisfies the following conditions:
Pt = (1 - Wf)·((1 - Wg)·Pb + Wg·Pc) + Wf·Pa;
wherein Pt represents the coordinate information in the first coordinate mapping information, Pa represents the first target coordinate information, Pb represents the second target coordinate information, Pc represents the third target coordinate information, and Wg represents the ratio of the second parameter value to the second preset parameter value;
Wf satisfies the following:
if r is less than r1, Wf is equal to 0; if r is greater than r2, Wf is equal to 1; otherwise, Wf is equal to (r - r1)/(r2 - r1);
where r denotes a third parameter value, r1 denotes a third preset parameter value, and r2 denotes a fourth preset parameter value.
It should be noted that the closer the second parameter value is to the second preset parameter value, the closer the face region is to an edge or corner position of the input image, i.e., the closer Wg is to 1 (Wg → 1), and the closer Pt is to ((1 - Wf)·Pc + Wf·Pa). At this time, since Pa represents the first target coordinate information and Pc represents the third target coordinate information, the distortion correction strategy is equivalent to a weighted fusion of performing perspective projection on the input image and performing perspective projection and then spherical projection on the input image, which is beneficial to recovering the geometric shape and geometric proportion of the face picture at the edge or corner position.
In addition, if r < r1, the first sampling point is located in the face region (as shown in fig. 11). At this time, Wf is equal to 0, and Pt is equal to ((1 - Wg)·Pb + Wg·Pc).
Similarly, if r > r2, the first sampling point is far from the face region, i.e. the first sampling point is located in a non-face region or a background region. At this time, Wf is equal to 1, and Pt is equal to Pa. Since Pa represents the first target coordinate information, this is equivalent to performing perspective projection on the background region in the input image.
Similarly, if r1 <= r < r2, the first sampling point is close to the face region (i.e. the first sampling point is located in a critical region or a transition region). At this time, Wf is equal to (r - r1)/(r2 - r1), and Pt is equal to ((1 - Wf)·((1 - Wg)·Pb + Wg·Pc) + Wf·Pa).
In summary, the closer the face region is to an edge or corner position of the input image (Wg → 1) and the first sampling point is located in the face region (r < r1), the closer Pt is to Pc. At this time, since Pc represents the third target coordinate information, this is equivalent to performing spherical projection on the face region in the input image, so that the geometric shape and geometric proportion of the face picture in the face region at the edge or corner position are recovered through the spherical projection.
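A minimal sketch of this second fusion mode follows, directly reflecting the second preset formula and the piecewise definition of Wf above; the function name and signature are illustrative only.

```python
def fuse_mode_two(P_a, P_b, P_c, r, r1, r2, W_g):
    """Second preset formula: Pt = (1 - Wf)·((1 - Wg)·Pb + Wg·Pc) + Wf·Pa,
    where Wf depends on the distance r of the sampling point to the geometric
    center of the face region, and r1, r2 are the inner and outer radii."""
    if r < r1:
        W_f = 0.0                        # sampling point inside the face region
    elif r > r2:
        W_f = 1.0                        # sampling point in the background region
    else:
        W_f = (r - r1) / (r2 - r1)       # transition (critical) region
    face_term = ((1.0 - W_g) * P_b[0] + W_g * P_c[0],
                 (1.0 - W_g) * P_b[1] + W_g * P_c[1])
    return ((1.0 - W_f) * face_term[0] + W_f * P_a[0],
            (1.0 - W_f) * face_term[1] + W_f * P_a[1])
```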
And S730, performing interpolation processing on the coordinate information among the sampling points of the output image according to the first coordinate mapping information to obtain second coordinate mapping information.
The second coordinate mapping information may be used to represent a coordinate mapping relationship between the coordinate information of each pixel point of the output image and the corresponding coordinate information in the input image.
It should be noted that, in order to improve the calculation efficiency and reduce the calculation complexity, the embodiments of the present application perform sampling on image coordinates according to the image size, so as to solve the mapping of the coordinate information of each sampling point of the output image to the corresponding coordinate information in the input image. Then, the embodiment of the present application further needs to perform interpolation processing, that is, coordinate interpolation on the coordinate information between the sampling points of the output image, so that the coordinate information of each pixel point of the output image is obtained on the whole image and mapped to the corresponding coordinate information in the input image, thereby facilitating improvement of imaging quality and integrity of the distortion-removed image.
Specifically, the interpolation processing may include bilinear interpolation.
It should be noted that, in mapping the coordinate information of each sampling point of the output image to the corresponding coordinate information in the input image, a floating point value may exist. Therefore, the embodiment of the present application performs interpolation processing by using bilinear interpolation.
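The coordinate interpolation of S730 can be sketched as follows; this is illustrative only, and the NumPy usage, the function name, and the assumption of a uniform sampling step are not taken from the patent.

```python
import numpy as np

def densify_coordinate_map(grid_map, step, width, height):
    """Expand the per-node map (first coordinate mapping information) to a
    per-pixel map (second coordinate mapping information) by bilinear
    interpolation between grid nodes."""
    rows, cols = grid_map.shape[:2]
    dense = np.empty((height, width, 2), dtype=np.float32)
    for v in range(height):
        for u in range(width):
            gy, gx = v / step, u / step
            i0, j0 = min(int(gy), rows - 1), min(int(gx), cols - 1)
            i1, j1 = min(i0 + 1, rows - 1), min(j0 + 1, cols - 1)
            fy, fx = gy - i0, gx - j0
            # interpolate the two mapped coordinates of the four surrounding nodes
            top = (1 - fx) * grid_map[i0, j0] + fx * grid_map[i0, j1]
            bottom = (1 - fx) * grid_map[i1, j0] + fx * grid_map[i1, j1]
            dense[v, u] = (1 - fy) * top + fy * bottom
    return dense
```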
And S740, carrying out distortion correction on the input image according to the second coordinate mapping information to obtain an output image.
Specifically, performing distortion correction on the input image according to the second coordinate mapping information to obtain the output image may include: performing coordinate mapping on the input image according to the second coordinate mapping information to obtain a target image; and carrying out interpolation processing on the pixel values of the pixel points of the target image to obtain an output image.
The interpolation processing here may include nearest-neighbor interpolation.
It should be noted that, the input image is subjected to coordinate mapping according to the coordinate information of each pixel point of the output image mapped to the corresponding coordinate information in the input image to obtain a target image, and the target image may have a missing pixel value. Therefore, the embodiment of the present application further needs to perform interpolation processing, that is, pixel interpolation on the pixel values of the pixels of the target image, so as to obtain an output image, thereby being beneficial to improving the imaging quality and integrity of the distortion-removed image.
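Putting S740 together, a minimal sketch of coordinate mapping followed by nearest-neighbor pixel interpolation might look as follows; the NumPy usage, names and the (row, column) coordinate convention are assumptions for illustration.

```python
import numpy as np

def remap_with_nearest(input_img, dense_map):
    """Build the output image: for each output pixel, look up the mapped
    floating-point coordinate in the input image and take the nearest pixel."""
    h, w = dense_map.shape[:2]
    out = np.zeros((h, w) + input_img.shape[2:], dtype=input_img.dtype)
    for v in range(h):
        for u in range(w):
            sv, su = dense_map[v, u]                    # mapped (row, column) in the input
            sv, su = int(round(float(sv))), int(round(float(su)))
            if 0 <= sv < input_img.shape[0] and 0 <= su < input_img.shape[1]:
                out[v, u] = input_img[sv, su]           # nearest source pixel value
    return out
```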
It can be seen that, in the embodiment of the present application, firstly, according to the image size of the input image, the preset image fusion model, whether the input image has a face region and/or the position information in the input image of the face region, the coordinate information of each sampling point of the output image is calculated and mapped to the corresponding coordinate information in the input image, so as to obtain first coordinate mapping information; then, performing coordinate interpolation on coordinate information between sampling points of the output image according to the first coordinate mapping information, and calculating that the coordinate information of each pixel point of the output image is mapped to corresponding coordinate information in the input image to obtain second coordinate mapping information; finally, distortion correction is carried out on the input image according to the second coordinate mapping information to obtain an output image, namely a distortion-removed image, so that the flexibility of image distortion correction is favorably ensured, the processing efficiency and accuracy of image distortion correction are improved, and the imaging quality and integrity of the distortion-removed image are improved.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions in order to realize the above-mentioned functions. Those of skill in the art would appreciate that the various illustrative methods, functions, modules, elements, or steps described in connection with the embodiments provided herein may be implemented as hardware or in combination with computer software. Whether a method, function, module, unit or step is performed by hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the technical solution. A person skilled in the art may use different methods to implement the described methods, functions, modules, units or steps for each specific application, but such implementation should not be considered as beyond the scope of the present application.
The embodiment of the application can divide the functional units/modules of the electronic equipment according to the method example. For example, each functional unit/module may be divided for each function, or two or more functions may be integrated into one functional unit/module. The integrated functional units/modules may be implemented in a hardware manner or a software program manner. It should be noted that, in the embodiment of the present application, the division of the functional units/modules is schematic, and only one logical function division is used, and there may be another division manner in actual implementation.
In the case of employing integrated units/modules, fig. 12 shows a functional unit composition block diagram of an image distortion correction apparatus. The image distortion correction apparatus 1200 specifically includes: a processing unit 1220 and a communication unit 1230. Processing unit 1220 is used to control and manage the actions of the electronic device, for example, processing unit 1220 is used to support image distortion correction apparatus 1200 in performing some or all of the steps in fig. 7, as well as other processes for the techniques described herein. The communication unit 1230 is used to support the communication of the image distortion correction apparatus 1200 with other devices. The image distortion correction apparatus 1200 may further include a storage unit 1210 for storing program codes and data required therefor.
Among other things, the processing unit 1220 may be a processor or controller, such as a CPU, GPU, general purpose processor, ISP, DSP, ASIC, FPGA, transistor logic, hardware component, or any combination thereof, which may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein. Additionally, the processing unit 1220 may also be a combination that performs computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication unit 1230 may be a communication interface, a transceiver, a transceiving circuit, and the like. The storage unit 1210 may be a memory. When the processing unit 1220 is a processor, the communication unit 1230 is a communication interface, and the storage unit 1210 is a memory, the image distortion correction apparatus 1200 according to the embodiment of the present application may be the electronic device shown in fig. 13.
Specifically, the processing unit 1220 is configured to perform any one of the steps performed by the electronic device in the above method embodiments, and when performing data transmission, such as sending, the communication unit 1230 is optionally invoked to complete the corresponding operation. The details will be described below.
The processing unit 1220 is configured to: acquiring an input image and face detection information, wherein the face detection information comprises face indication information and/or position indication information, the face indication information is used for indicating whether a face region exists in the input image, the position indication information is used for indicating whether first position information exists, and the first position information is used for indicating the position information of the face region in the input image; determining first coordinate mapping information according to the image size of an input image, a preset image fusion model and face detection information, wherein the first coordinate mapping information is used for representing a coordinate mapping relation between coordinate information of each sampling point of the output image and corresponding coordinate information in the input image, and the preset image fusion model comprises at least one of a perspective projection model, a cylindrical projection model and a spherical projection model; performing interpolation processing on coordinate information between sampling points of the output image according to the first coordinate mapping information to obtain second coordinate mapping information, wherein the second coordinate mapping information is used for expressing a coordinate mapping relation between the coordinate information of each pixel point of the output image and the corresponding coordinate information in the input image; and carrying out distortion correction on the input image according to the second coordinate mapping information to obtain an output image.
It can be seen that, in the embodiment of the present application, firstly, according to the image size of the input image, the preset image fusion model, whether the input image has a face region and/or the position information in the input image of the face region, the coordinate information of each sampling point of the output image is calculated and mapped to the corresponding coordinate information in the input image, so as to obtain first coordinate mapping information; then, performing coordinate interpolation on coordinate information between sampling points of the output image according to the first coordinate mapping information, and calculating that the coordinate information of each pixel point of the output image is mapped to corresponding coordinate information in the input image to obtain second coordinate mapping information; finally, distortion correction is carried out on the input image according to the second coordinate mapping information to obtain an output image, namely a distortion-removed image, so that the flexibility of image distortion correction is favorably ensured, the processing efficiency and accuracy of image distortion correction are improved, and the imaging quality and integrity of the distortion-removed image are improved.
It should be noted that, for specific implementation of each operation performed by the image distortion correction apparatus 1200, reference may be made to the corresponding description of the method embodiment shown in fig. 7, and details are not described here again.
Specifically, in determining the first coordinate mapping information according to the image size of the input image, the preset image fusion model and the face detection information, the processing unit 1220 is configured to: sampling on an image coordinate according to the image size of an input image to obtain M sampling points of an output image, wherein the value of M is an integer greater than 1; determining target coordinate information according to a preset image fusion model, wherein the target coordinate information is used for representing that the coordinate information of a first sampling point is mapped to corresponding coordinate information in an input image, and the first sampling point is one of M sampling points; and determining first coordinate mapping information according to the face detection information, the target coordinate information and second position information, wherein the second position information is used for representing the position information of the first sampling point in the output image.
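One possible way to obtain the M sampling points is to take them on a regular grid over the output image coordinates, matching the layout assumed by the interpolation sketch above; the grid density grid_points is an assumed parameter, not something specified in this passage.

import numpy as np

def sample_output_grid(image_size, grid_points=33):
    # image_size  : (height, width) of the output image
    # grid_points : assumed number of sampling points per axis (M = grid_points**2 > 1)
    h, w = image_size
    ys = np.linspace(0, h - 1, grid_points, dtype=np.float32)
    xs = np.linspace(0, w - 1, grid_points, dtype=np.float32)
    grid_y, grid_x = np.meshgrid(ys, xs, indexing="ij")
    # Each (x, y) pair is one sampling point of the output image.
    return np.stack([grid_x, grid_y], axis=-1)  # shape (grid_points, grid_points, 2)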
Specifically, the target coordinate information includes at least one of first target coordinate information, second target coordinate information, and third target coordinate information; in determining the target coordinate information according to the preset image fusion model, the processing unit 1220 is configured to: determining that the coordinate information of the first sampling point is mapped to the corresponding coordinate information in the input image according to the perspective projection model to obtain first target coordinate information; and/or determining that the coordinate information of the first sampling point is mapped to the corresponding coordinate information in the first image according to the cylindrical projection model, and determining that the corresponding coordinate information in the first image is mapped to the corresponding coordinate information in the input image according to the perspective projection model to obtain second target coordinate information, wherein the first image is used for representing the image obtained by performing projection calculation on the input image according to the perspective projection model; and/or determining that the coordinate information of the first sampling point is mapped to the corresponding coordinate information in the first image according to the spherical projection model, and determining that the corresponding coordinate information in the first image is mapped to the corresponding coordinate information in the input image according to the perspective projection model to obtain the third target coordinate information.
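A rough sketch of how the three kinds of target coordinate information could be computed for one sampling point follows, assuming a pinhole model with focal length f and principal point (cx, cy), and assuming a calibrated function perspective_to_input that maps first-image (perspective) coordinates to input-image coordinates. The cylindrical and spherical formulas below are illustrative textbook projections, not taken from this disclosure.

import numpy as np

def cylindrical_to_first_image(pt, f, cx, cy):
    # Map an output sampling point to first-image coordinates with a
    # cylindrical projection (illustrative formulas, assumed here).
    theta = (pt[0] - cx) / f
    x1 = f * np.tan(theta) + cx
    y1 = (pt[1] - cy) / np.cos(theta) + cy
    return np.array([x1, y1])

def spherical_to_first_image(pt, f, cx, cy):
    # Map an output sampling point to first-image coordinates with a
    # spherical projection (illustrative formulas, assumed here).
    theta = (pt[0] - cx) / f
    phi = (pt[1] - cy) / f
    x1 = f * np.tan(theta) + cx
    y1 = f * np.tan(phi) / np.cos(theta) + cy
    return np.array([x1, y1])

def target_coordinates(pt, f, cx, cy, perspective_to_input):
    # perspective_to_input: assumed calibrated mapping (e.g. the lens
    # distortion model) from first-image coordinates to input-image coordinates.
    p_a = perspective_to_input(pt)                                         # first target
    p_b = perspective_to_input(cylindrical_to_first_image(pt, f, cx, cy))  # second target
    p_c = perspective_to_input(spherical_to_first_image(pt, f, cx, cy))    # third target
    return p_a, p_b, p_c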
Specifically, if the face detection information includes face indication information and position indication information, the processing unit 1220 is configured to, in terms of determining the first coordinate mapping information according to the face detection information, the target coordinate information, and the second position information: if the face indication information specifically indicates that a face region exists in the input image, weighting and fusing at least two of the first target coordinate information, the second target coordinate information and the third target coordinate information according to the position indication information and the second position information to obtain first coordinate mapping information; or if the face indication information specifically indicates that no face region exists in the input image, the first target coordinate information is used as coordinate information in the first coordinate mapping information; or if the face indication information specifically indicates that no face region exists in the input image, the second target coordinate information is used as coordinate information in the first coordinate mapping information; or, if the face indication information specifically indicates that no face region exists in the input image, the third target coordinate information is used as the coordinate information in the first coordinate mapping information.
Specifically, in terms of performing weighted fusion on at least two of the first target coordinate information, the second target coordinate information, and the third target coordinate information according to the position indication information and the second position information to obtain the first coordinate mapping information, the processing unit 1220 is configured to: and if the position indication information specifically indicates that the first position information does not exist, performing linear weighted fusion on two of the first target coordinate information, the second target coordinate information and the third target coordinate information according to the second position information to obtain first coordinate mapping information.
Specifically, the second position information includes a first parameter value; each weight in the linear weighted fusion is determined by the first parameter value and a first preset parameter value; the first parameter value is used for representing the radial distance from the first sampling point to the optical center point of the output image; and the first preset parameter value is used for representing one of the radial distances from the optical center point of the output image to the four corner points of the output image.
Specifically, in terms of performing linear weighted fusion on two of the first target coordinate information, the second target coordinate information, and the third target coordinate information according to the second position information to obtain the first coordinate mapping information, the processing unit 1220 is configured to: calculating the first parameter value and a first preset parameter value according to a first preset formula to obtain first coordinate mapping information, wherein the first preset formula satisfies the following conditions:
P_s = (1-W)·P_m + W·P_n

wherein P_s represents coordinate information in the first coordinate mapping information, P_m represents one of the first target coordinate information, the second target coordinate information, and the third target coordinate information, P_n represents another one of the first target coordinate information, the second target coordinate information, and the third target coordinate information other than P_m, and W represents the ratio of the first parameter value to the first preset parameter value.
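A minimal numeric sketch of the first preset formula; the function name and argument names are illustrative assumptions only.

import numpy as np

def fuse_two(p_m, p_n, r_sample, r_corner):
    # p_m, p_n : two of the three kinds of target coordinate information
    # r_sample : first parameter value (sampling point to optical center distance)
    # r_corner : first preset parameter value (optical center to a corner distance)
    w = r_sample / r_corner
    return (1.0 - w) * np.asarray(p_m) + w * np.asarray(p_n)

For example, fuse_two((120.0, 80.0), (130.0, 90.0), r_sample=200.0, r_corner=400.0) gives (125.0, 85.0): with W = 0.5 the fused coordinate is the midpoint of P_m and P_n, while at the optical center (W = 0) P_m is used unchanged.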
Specifically, in terms of performing weighted fusion on at least two of the first target coordinate information, the second target coordinate information, and the third target coordinate information according to the position indication information and the second position information to obtain the first coordinate mapping information, the processing unit 1220 is configured to: and if the position indication information specifically indicates that the first position information exists, performing linear weighted fusion on the first target coordinate information, the second target coordinate information and the third target coordinate information according to the first position information and the second position information to obtain first coordinate mapping information.
Specifically, the first location information includes a second parameter value, and the second location information includes a third parameter value; each weight in the linear weighted fusion is determined by the second parameter value, the third parameter value, a second preset parameter value, a third preset parameter value and a fourth preset parameter value; the second parameter value is used for representing the radial distance from the geometric center point of the face region to the optical center point of the input image; the third parameter value is used for representing the radial distance from the first sampling point to the geometric center point of the face region in the output image; the second preset parameter value is used for representing one of the radial distances from the optical center point of the input image to the four corner points of the input image; the third preset parameter value is used for representing the radius of an inscribed circle of the face region or the radius of a circumscribed circle of the face region; and the fourth preset parameter value is used for representing the product of a preset coefficient and the third preset parameter value, wherein the value of the preset coefficient is greater than 1.
Specifically, in terms of performing linear weighted fusion on the first target coordinate information, the second target coordinate information, and the third target coordinate information according to the first position information and the second position information to obtain the first coordinate mapping information, the processing unit 1220 is configured to: calculating a second parameter value, a third parameter value, a second preset parameter value, a third preset parameter value and a fourth preset parameter value according to a second preset formula to obtain first coordinate mapping information, wherein the second preset formula satisfies the following conditions:
P_t = (1-W_f)·((1-W_g)·P_b + W_g·P_c) + W_f·P_a

wherein P_t represents coordinate information in the first coordinate mapping information, P_a represents the first target coordinate information, P_b represents the second target coordinate information, P_c represents the third target coordinate information, and W_g represents the ratio of the second parameter value to the second preset parameter value;

W_f satisfies the following: if r is less than r1, W_f is equal to 0; if r is greater than r2, W_f is equal to 1; otherwise, W_f is equal to (r-r1)/(r2-r1);

where r denotes the third parameter value, r1 denotes the third preset parameter value, and r2 denotes the fourth preset parameter value.
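The second preset formula can be sketched in the same spirit; again, the function name and argument names are assumptions for illustration.

import numpy as np

def face_aware_fusion(p_a, p_b, p_c, r, r1, r2, d_face, d_corner):
    # p_a, p_b, p_c : first / second / third target coordinate information
    # r  : third parameter value (sampling point to face-center distance)
    # r1 : third preset parameter value (inscribed/circumscribed radius of the face region)
    # r2 : fourth preset parameter value (r1 times a coefficient greater than 1)
    # d_face   : second parameter value (face center to optical center of the input image)
    # d_corner : second preset parameter value (optical center to a corner of the input image)
    w_g = d_face / d_corner
    if r < r1:
        w_f = 0.0
    elif r > r2:
        w_f = 1.0
    else:
        w_f = (r - r1) / (r2 - r1)
    p_a, p_b, p_c = (np.asarray(p) for p in (p_a, p_b, p_c))
    return (1.0 - w_f) * ((1.0 - w_g) * p_b + w_g * p_c) + w_f * p_a

Inside the face region (r < r1) the perspective term is suppressed entirely; far from the face (r > r2) the mapping falls back to the pure perspective result; in between, the blend varies linearly with r.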
Specifically, in terms of distortion correcting the input image according to the second coordinate mapping information to obtain the output image, the processing unit 1220 is configured to: performing coordinate mapping on the input image according to the second coordinate mapping information to obtain a target image; and carrying out interpolation processing on the pixel values of the pixel points of the target image to obtain an output image.
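A minimal sketch of this last stage, combining the coordinate mapping and the pixel-value interpolation into one remapping call, assuming a dense per-pixel map and bilinear interpolation; cv2.remap from OpenCV is shown only as one convenient realization and is an assumption, not part of the disclosure.

import numpy as np
import cv2  # assumed available; any bilinear remapping routine would do

def warp_image(input_image, pixel_map):
    # pixel_map : (H, W, 2) array; for every output pixel, the corresponding
    #             (x, y) coordinates in the input image
    map_x = pixel_map[..., 0].astype(np.float32)
    map_y = pixel_map[..., 1].astype(np.float32)
    # Bilinear interpolation of the pixel values of the target image.
    return cv2.remap(input_image, map_x, map_y, cv2.INTER_LINEAR)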
A schematic structural diagram of another electronic device provided in the embodiment of the present application is described below, as shown in fig. 13. Electronic device 1300 includes, among other things, processor 1310, memory 1320, communication interface 1330, and at least one communication bus connecting processor 1310, memory 1320, and communication interface 1330.
The processor 1310 may be one or more central processing units (CPUs). In the case where the processor 1310 is a CPU, the CPU may be a single-core CPU or a multi-core CPU. The memory 1320 includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a compact disc read-only memory (CD-ROM), and the memory 1320 is used to store related instructions and data. The communication interface 1330 is used to receive and transmit data.
The processor 1310 in the electronic device 1300 is configured to read one or more programs 1321 stored in the memory 1320 for performing the following steps: acquiring an input image and face detection information, wherein the face detection information comprises face indication information and/or position indication information, the face indication information is used for indicating whether a face region exists in the input image, the position indication information is used for indicating whether first position information exists, and the first position information is used for indicating the position information of the face region in the input image; determining first coordinate mapping information according to the image size of an input image, a preset image fusion model and face detection information, wherein the first coordinate mapping information is used for representing a coordinate mapping relation between coordinate information of each sampling point of the output image and corresponding coordinate information in the input image, and the preset image fusion model comprises at least one of a perspective projection model, a cylindrical projection model and a spherical projection model; performing interpolation processing on coordinate information between sampling points of the output image according to the first coordinate mapping information to obtain second coordinate mapping information, wherein the second coordinate mapping information is used for expressing a coordinate mapping relation between the coordinate information of each pixel point of the output image and the corresponding coordinate information in the input image; and carrying out distortion correction on the input image according to the second coordinate mapping information to obtain an output image.
It should be noted that, for specific implementation of each operation performed by the electronic device 1300, reference may be made to the corresponding description of the method embodiment shown in fig. 7, and details are not described here again.
Embodiments of the present application also provide a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, the computer program being operable to cause a computer to perform part or all of the steps of any of the methods as set forth in the above method embodiments.
Embodiments of the present application also provide a computer program product, where the computer program product includes a computer program operable to cause a computer to perform part or all of the steps of any one of the methods as described in the above method embodiments. The computer program product may be a software installation package.
For simplicity of description, the above embodiments are described as a series of combinations of operations. Those skilled in the art should appreciate that the present application is not limited by the order of acts described, as some steps in the embodiments of the present application may occur in other orders or concurrently. In addition, those skilled in the art should also appreciate that the embodiments described in the specification all belong to the preferred embodiments, and the related actions, steps, modules or units are not necessarily required by the embodiments of the present application.
In the foregoing embodiments, the descriptions of the embodiments of the present application have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It should be clear to a person skilled in the art that the methods, steps or functions of the related modules/units described in the embodiments of the present application can be implemented in whole or in part by software, hardware, firmware or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product or in the form of computer program instructions executed by a processor. The computer program product comprises at least one computer program instruction, which may consist of corresponding software modules, and the software modules may be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. The computer program instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer program instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that includes one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, tapes), optical media, or semiconductor media (e.g., SSDs), among others.
Each module/unit included in each apparatus or product described in the above embodiments may be a software module/unit, a hardware module/unit, or a part of the module/unit may be a software module/unit and another part may be a hardware module/unit. For example, for each device or product applied to or integrated on a chip, each module/unit included in the device or product may be implemented by using hardware such as a circuit; alternatively, a part of the modules/units included in the method may be implemented by using a software program running on a processor integrated inside a chip, and another part (if any) of the modules/units may be implemented by using hardware such as a circuit. The same applies to individual devices or products applied to or integrated in a chip module, or to individual devices or products applied to or integrated in a terminal.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the embodiments of the present application in further detail, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present application, and are not intended to limit the scope of the embodiments of the present application. Any modification, equivalent replacement, improvement and the like made on the basis of the technical solutions of the embodiments of the present application should be included in the protection scope of the embodiments of the present application.

Claims (14)

1. An image distortion correction method, comprising:
acquiring an input image and face detection information, wherein the face detection information comprises face indication information and/or position indication information, the face indication information is used for indicating whether a face region exists in the input image, the position indication information is used for indicating whether first position information exists, and the first position information is used for representing the position information of the face region in the input image;
determining first coordinate mapping information according to the image size of the input image, a preset image fusion model and the face detection information, wherein the first coordinate mapping information is used for representing a coordinate mapping relation between coordinate information of each sampling point of the output image and corresponding coordinate information in the input image, and the preset image fusion model comprises at least one of a perspective projection model, a cylindrical projection model and a spherical projection model;
performing interpolation processing on coordinate information between sampling points of the output image according to the first coordinate mapping information to obtain second coordinate mapping information, wherein the second coordinate mapping information is used for expressing a coordinate mapping relation between the coordinate information of each pixel point of the output image and the corresponding coordinate information in the input image;
and carrying out distortion correction on the input image according to the second coordinate mapping information to obtain the output image.
2. The method according to claim 1, wherein the determining first coordinate mapping information according to the image size of the input image, a preset image fusion model and the face detection information comprises:
sampling on an image coordinate according to the image size of the input image to obtain M sampling points of the output image, wherein the value of M is an integer greater than 1;
determining target coordinate information according to the preset image fusion model, wherein the target coordinate information is used for representing that coordinate information of a first sampling point is mapped to corresponding coordinate information in the input image, and the first sampling point is one of the M sampling points;
and determining the first coordinate mapping information according to the face detection information, the target coordinate information and second position information, wherein the second position information is used for representing the position information of the first sampling point in the output image.
3. The method of claim 2, wherein the target coordinate information comprises at least one of first target coordinate information, second target coordinate information, third target coordinate information;
the determining of the target coordinate information according to the preset image fusion model comprises the following steps:
determining that the coordinate information of the first sampling point is mapped to the corresponding coordinate information in the input image according to the perspective projection model to obtain the first target coordinate information; and/or,
determining that the coordinate information of the first sampling point is mapped to corresponding coordinate information in a first image according to the cylindrical projection model, and determining that the corresponding coordinate information in the first image is mapped to corresponding coordinate information in the input image according to the perspective projection model to obtain second target coordinate information, wherein the first image is used for representing an image obtained by performing projection calculation on the input image according to the perspective projection model; and/or,
and determining that the coordinate information of the first sampling point is mapped to the corresponding coordinate information in the first image according to the spherical projection model, and determining that the corresponding coordinate information in the first image is mapped to the corresponding coordinate information in the input image according to the perspective projection model to obtain the third target coordinate information.
4. The method of claim 3, wherein if the face detection information includes the face indication information and the location indication information, the determining the first coordinate mapping information according to the face detection information, the target coordinate information, and the second location information comprises:
if the face indication information specifically indicates that the face region exists in the input image, performing weighted fusion on at least two of the first target coordinate information, the second target coordinate information and the third target coordinate information according to the position indication information and the second position information to obtain first coordinate mapping information; or,
if the face indication information specifically indicates that the face area does not exist in the input image, the first target coordinate information is used as coordinate information in the first coordinate mapping information; or,
if the face indication information specifically indicates that the face area does not exist in the input image, the second target coordinate information is used as coordinate information in the first coordinate mapping information; or,
and if the face indication information specifically indicates that the face area does not exist in the input image, taking the third target coordinate information as coordinate information in the first coordinate mapping information.
5. The method of claim 4, wherein the weighted fusion of at least two of the first target coordinate information, the second target coordinate information, and the third target coordinate information according to the position indication information and the second position information to obtain the first coordinate mapping information comprises:
and if the position indication information specifically indicates that the first position information does not exist, performing linear weighted fusion on two of the first target coordinate information, the second target coordinate information and the third target coordinate information according to the second position information to obtain the first coordinate mapping information.
6. The method of claim 5, wherein the second location information comprises a first parameter value;
each weight in the linear weighted fusion is determined by the first parameter value and a first preset parameter value;
the first parameter value is used for representing the radial distance from the first sampling point to the optical center point of the output image;
the first preset parameter value is used for representing one of the radial distances from the optical center point of the output image to the four corner points of the output image.
7. The method of claim 6, wherein the linearly weighted fusing two of the first target coordinate information, the second target coordinate information, and the third target coordinate information according to the second position information to obtain the first coordinate mapping information comprises:
calculating the first parameter value and the first preset parameter value according to a first preset formula to obtain the first coordinate mapping information, wherein the first preset formula satisfies the following conditions:
P_s = (1-W)·P_m + W·P_n

wherein P_s represents coordinate information in the first coordinate mapping information, P_m represents one of the first target coordinate information, the second target coordinate information, and the third target coordinate information, P_n represents another one of the first target coordinate information, the second target coordinate information, and the third target coordinate information other than P_m, and W represents the ratio of the first parameter value to the first preset parameter value.
8. The method of claim 4, wherein the weighted fusion of at least two of the first target coordinate information, the second target coordinate information, and the third target coordinate information according to the position indication information and the second position information to obtain the first coordinate mapping information comprises:
and if the position indication information specifically indicates that the first position information exists, performing linear weighted fusion on the first target coordinate information, the second target coordinate information and the third target coordinate information according to the first position information and the second position information to obtain the first coordinate mapping information.
9. The method of claim 8, wherein the first location information comprises a second parameter value, and wherein the second location information comprises a third parameter value;
each weight in the linear weighted fusion is determined by the second parameter value, the third parameter value, a second preset parameter value, a third preset parameter value and a fourth preset parameter value;
the second parameter value is used for representing the radial distance from the geometric central point of the face region to the optical central point of the input image;
the third parameter value is used for representing the radial distance from the first sampling point to the geometric center point of the face region in the output image;
the second preset parameter value is used for representing one of the radial distances from the optical center point of the input image to the four corner points of the input image;
the third preset parameter value is used for representing the radius of an inscribed circle of the face region or the radius of a circumscribed circle of the face region;
and the fourth preset parameter value is used for representing the product of a preset coefficient and the third preset parameter value, wherein the value of the preset coefficient is greater than 1.
10. The method of claim 9, wherein the linear weighted fusion of the first target coordinate information, the second target coordinate information, and the third target coordinate information according to the first position information and the second position information to obtain the first coordinate mapping information comprises:
calculating the second parameter value, the third parameter value, the second preset parameter value, the third preset parameter value and the fourth preset parameter value according to a second preset formula to obtain the first coordinate mapping information, wherein the second preset formula satisfies the following conditions:
P_t = (1-W_f)·((1-W_g)·P_b + W_g·P_c) + W_f·P_a

wherein P_t represents coordinate information in the first coordinate mapping information, P_a represents the first target coordinate information, P_b represents the second target coordinate information, P_c represents the third target coordinate information, and W_g represents the ratio of the second parameter value to the second preset parameter value;

W_f satisfies the following: if r is less than r1, W_f is equal to 0; if r is greater than r2, W_f is equal to 1; otherwise, W_f is equal to (r-r1)/(r2-r1);

wherein r represents the third parameter value, r1 represents the third preset parameter value, and r2 represents the fourth preset parameter value.
11. The method according to any one of claims 1-10, wherein said distortion correcting said input image according to said second coordinate mapping information to obtain said output image comprises:
performing coordinate mapping on the input image according to the second coordinate mapping information to obtain a target image;
and carrying out interpolation processing on the pixel values of the pixel points of the target image to obtain the output image.
12. An image distortion correction apparatus, characterized in that the apparatus comprises a processing unit for:
acquiring an input image and face detection information, wherein the face detection information comprises face indication information and/or position indication information, the face indication information is used for indicating whether a face region exists in the input image, the position indication information is used for indicating whether first position information exists, and the first position information is used for representing the position information of the face region in the input image;
determining first coordinate mapping information according to the image size of the input image, a preset image fusion model and the face detection information, wherein the first coordinate mapping information is used for representing a coordinate mapping relation between coordinate information of each sampling point of the output image and corresponding coordinate information in the input image, and the preset image fusion model comprises at least one of a perspective projection model, a cylindrical projection model and a spherical projection model;
performing interpolation processing on coordinate information between sampling points of the output image according to the first coordinate mapping information to obtain second coordinate mapping information, wherein the second coordinate mapping information is used for expressing a coordinate mapping relation between the coordinate information of each pixel point of the output image and the corresponding coordinate information in the input image;
and carrying out distortion correction on the input image according to the second coordinate mapping information to obtain the output image.
13. An electronic device comprising a processor, a memory and a communication interface, the memory storing one or more programs, and the one or more programs being executable by the processor, the one or more programs including instructions for performing the steps in the method of any of claims 1-11.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for electronic data exchange, wherein the computer program is operable to cause a computer to perform the method according to any one of claims 1-11.