CN115797228B - Image processing device, method, chip, electronic equipment and storage medium - Google Patents

Image processing device, method, chip, electronic equipment and storage medium

Info

Publication number
CN115797228B
CN115797228B (application CN202310044515.0A)
Authority
CN
China
Prior art keywords
image data
data
memory
image
memory computing
Prior art date
Legal status
Active
Application number
CN202310044515.0A
Other languages
Chinese (zh)
Other versions
CN115797228A (en)
Inventor
姜宇奇
Current Assignee
Shenzhen Jiutian Ruixin Technology Co ltd
Original Assignee
Shenzhen Jiutian Ruixin Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Jiutian Ruixin Technology Co ltd filed Critical Shenzhen Jiutian Ruixin Technology Co ltd
Priority to CN202310044515.0A priority Critical patent/CN115797228B/en
Publication of CN115797228A publication Critical patent/CN115797228A/en
Application granted granted Critical
Publication of CN115797228B publication Critical patent/CN115797228B/en

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image processing device, an image processing method, a chip, an electronic device and a storage medium. The image processing device comprises: a preprocessing unit, configured to perform image preprocessing on original image data so as to convert it into image data in a preset format and send the converted data to an in-memory computing module; the in-memory computing module, which comprises at least one in-memory computing unit, each in-memory computing unit comprising a neural network and being configured to receive the image data in the preset format sent by the preprocessing unit, perform accelerated computation on the image data distributed into the neural network, and send the accelerated image data to a post-processing unit; and the post-processing unit, configured to process the received accelerated image data to obtain target image data. The device and method address the problems of high power consumption and low image restoration efficiency that arise in the prior art when image restoration is performed with a neural network.

Description

Image processing device, method, chip, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computers and image processing technologies, and in particular, to an image processing apparatus, an image processing method, a chip, an electronic device, and a storage medium.
Background
Currently available image signal processors (Image Signal Processor, ISP) provide a series of image processing algorithms to process the original image data output by an image sensor (Image Sensor), including black level compensation, lens shading correction, demosaicing, noise reduction, and other algorithms, so as to convert the sensor output from the RAW domain into image data in the RGB or YUV domain; the processed image data is then handed to the back end for further processing.
However, in the course of researching and practicing the prior art, the inventor of the present application found that existing image signal processors based on traditional algorithms can restore images fairly completely under ordinary imaging conditions, but in specific imaging scenes, such as no-light, dim-light or high-exposure scenes, the images restored by traditional algorithms contain more noise and have poorer restoration quality, and therefore cannot meet the requirements of high-image-quality applications. Moreover, prior-art image processing also suffers from relatively high power consumption and relatively low image restoration efficiency.
The foregoing description is provided for general background information and does not necessarily constitute prior art.
Disclosure of Invention
In view of the above technical problems, the present application provides an image processing apparatus, an image processing method, a chip, an electronic device, and a storage medium, which can effectively solve the above technical problems.
In order to solve the technical problems, the application provides an image processing device which comprises a preprocessing unit, an in-memory computing module and a post-processing unit, wherein the in-memory computing module is respectively connected with the preprocessing unit and the post-processing unit;
the preprocessing unit is used for carrying out image preprocessing on the acquired original image data so as to convert the original image data into image data in a preset format, and sending the image data in the preset format to the in-memory computing module;
the in-memory computing module comprises at least one in-memory computing unit, wherein the in-memory computing unit comprises a neural network, and is used for receiving the image data in the preset format sent by the preprocessing unit, performing acceleration computation on the image data in the preset format distributed in the neural network, and sending the image data after the acceleration computation to the post-processing unit;
The post-processing unit is used for receiving the image data after the acceleration calculation sent by the in-memory calculation module, and performing image data processing on the image data after the acceleration calculation to obtain target image data.
Optionally, the in-memory computing module is further configured to distribute the received image data in the preset format, and distribute the image data in the preset format to a plurality of in-memory computing units for performing acceleration computation.
Optionally, the distributing the received image data in the preset format, distributing the image data in the preset format to a plurality of in-memory computing units for performing acceleration computation, including:
distributing the image data in the preset format to corresponding in-memory computing units according to a preset execution sequence;
and convolving, pooling, activating and/or scaling the image data distributed in the neural network in the preset format to obtain the image data after the acceleration calculation.
Optionally, the distributing the image data in the preset format to the corresponding in-memory computing unit according to the preset execution sequence includes:
acquiring a preset execution sequence of the neural network corresponding to each in-memory computing unit, wherein the preset execution sequence comprises an execution instruction corresponding to each layer of operator of the neural network, the arrangement of weights corresponding to each layer of operator in a memory and the configuration of data paths in an image processing device;
And distributing the image data in the preset format to the neural network corresponding to each in-memory computing unit based on the preset execution sequence.
Optionally, the convolving, pooling, activating and/or scaling the image data in the preset format allocated to the neural network further comprises:
and respectively transmitting the convolved image data, the pooled image data, the activated image data and the scaled image data to a post-processing unit and/or a memory.
Optionally, the image processing device further comprises a controller and a memory, wherein the controller is respectively connected with the preprocessing unit, the in-memory computing module, the post-processing unit and the memory, and the memory is respectively connected with the preprocessing unit, the in-memory computing module, the post-processing unit and the controller;
the controller is used for controlling the data flow direction of the image processing device through a control bus and carrying out parameter configuration on the preprocessing unit, the in-memory computing module, the post-processing unit and the memory;
the memory is configured to store the acquired original image data and configuration data of the image processing apparatus, where the configuration data includes a configuration parameter of the controller, a configuration parameter of the in-memory computing module, first intermediate data of the in-memory computing module, and second intermediate data of the post-processing unit.
Optionally, the preprocessing unit is further configured to obtain original statistical data corresponding to the original image data; the controller is further configured to dynamically adjust a configuration parameter of the in-memory computing module according to the obtained original statistical data.
Optionally, the dynamically adjusting the configuration parameters of the in-memory computing module according to the obtained raw statistics includes:
determining configuration parameters of the neural network in the in-memory calculation module during the previous frame based on the original statistical data of the original image data of the previous frame, wherein the configuration parameters comprise a weight value, a bias value, a quantization value and a gain value;
taking the configuration parameter of the previous frame as the initial configuration parameter of the current frame of the neural network;
comparing the original statistical data of the previous frame of original image data with the original statistical data of the current frame of original image data to obtain a comparison result;
and adjusting initial configuration parameters of the neural network in the current frame based on the comparison result.
Optionally, the performing image preprocessing on the obtained original image data to convert the original image data into image data in a preset format includes:
Performing brightness statistics and chromaticity statistics on a global or local region of interest of the original image data to obtain corresponding original statistics data; and/or
And performing black level compensation, nonlinear transformation and normalization processing on the original image data to obtain image data in a preset format.
Optionally, the processing the image data after the acceleration calculation to obtain target image data includes:
and carrying out inverse normalization, fixed point and data truncation processing on the image data after the acceleration calculation by the post-processing unit to obtain target image data.
Correspondingly, the application also provides an image processing method, which comprises the following steps:
performing image preprocessing on the obtained original image data to convert the original image data into image data in a preset format, and sending the image data in the preset format to the in-memory computing module;
receiving the image data in the preset format, performing acceleration calculation on the image data in the preset format distributed in the neural network, and sending the image data after the acceleration calculation to the post-processing unit;
and receiving the image data after the acceleration calculation, and performing image data processing on the image data after the acceleration calculation to obtain target image data.
Optionally, the image processing method further includes:
and distributing the received image data in the preset format, and distributing the image data in the preset format into a plurality of in-memory computing units for acceleration computation.
Optionally, the distributing the received image data in the preset format, distributing the image data in the preset format to a plurality of in-memory computing units for performing acceleration computation, including:
distributing the image data in the preset format to corresponding in-memory computing units according to a preset execution sequence;
and convolving, pooling, activating and/or scaling the image data distributed in the neural network in the preset format to obtain the image data after the acceleration calculation.
Optionally, the distributing the image data in the preset format to the corresponding in-memory computing unit according to the preset execution sequence includes:
acquiring a preset execution sequence of the neural network corresponding to each in-memory computing unit, wherein the preset execution sequence comprises an execution instruction corresponding to each layer of operator of the neural network, the arrangement of weights corresponding to each layer of operator in a memory and the configuration of data paths in an image processing device;
And distributing the image data in the preset format to the neural network corresponding to each in-memory computing unit based on the preset execution sequence.
Optionally, after the convolving, pooling, activating and/or scaling the image data in the preset format allocated to the neural network to obtain the image data after the acceleration calculation, the method further includes:
and respectively transmitting the image data obtained after convolution, the pooled image data, the activated image data and the zoomed image data to a target position for storage.
Optionally, the image processing method further includes:
controlling, by a controller, the data flow direction of the image processing device through a control bus, and performing parameter configuration on the pre-processing unit, the in-memory computing module, the post-processing unit and the memory;
and storing the acquired original image data and configuration data of the image processing device, wherein the configuration data comprises configuration parameters of a controller, configuration parameters of the in-memory computing module, first intermediate data of the in-memory computing module and second intermediate data of the post-processing unit.
Optionally, the image processing method further includes:
acquiring original statistical data corresponding to the original image data through the preprocessing unit;
And dynamically adjusting the configuration parameters of the in-memory computing module by the controller according to the acquired original statistical data.
Optionally, the dynamically adjusting the configuration parameters according to the obtained original statistics includes:
determining configuration parameters of the neural network in the in-memory calculation module during the previous frame based on the original statistical data of the original image data of the previous frame, wherein the configuration parameters comprise a weight value, a bias value, a quantization value and a gain value;
taking the configuration parameter of the previous frame as the initial configuration parameter of the current frame of the neural network;
comparing the original statistical data of the previous frame of original image data with the original statistical data of the current frame of original image data to obtain a comparison result;
and adjusting initial configuration parameters of the neural network in the current frame based on the comparison result.
Optionally, the performing image preprocessing on the obtained original image data to convert the original image data into image data in a preset format includes:
performing brightness statistics and chromaticity statistics on a global or local region of interest of the original image data to obtain corresponding original statistics data; and/or
And performing black level compensation, nonlinear transformation and normalization processing on the original image data to obtain image data in a preset format.
Optionally, the processing the image data after the acceleration calculation to obtain target image data includes:
and carrying out inverse normalization, fixed point and data truncation on the image data after the acceleration calculation to obtain target image data.
The application also provides a chip comprising the image processing device.
The application also provides electronic equipment comprising the image processing device.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image processing method as described above.
The embodiment of the invention has the following beneficial effects:
as described above, the image processing apparatus, method, chip, electronic device and storage medium provided in the present application include a preprocessing unit, an in-memory computing module and a post-processing unit, where the in-memory computing module is connected to the preprocessing unit and the post-processing unit, respectively. The preprocessing unit is used for performing image preprocessing on the acquired original image data so as to convert it into image data in a preset format, and for sending the image data in the preset format to the in-memory computing module. The in-memory computing module comprises at least one in-memory computing unit; each in-memory computing unit comprises a neural network and is used for receiving the image data in the preset format sent by the preprocessing unit, performing accelerated computation on the image data distributed into the neural network, and sending the accelerated image data to the post-processing unit. The post-processing unit is used for receiving the accelerated image data sent by the in-memory computing module and performing image data processing on it to obtain the target image data.
According to the embodiments of the present application, the original image data is first converted, through image preprocessing, into image data that conforms to the format required by the in-memory computing module, which provides better-quality image data for the subsequent stages and helps improve image restoration quality and efficiency; the image data is then accelerated by multiple in-memory computing units and finally processed to obtain the target image data. In this way, the image restoration quality is improved by the neural network while the problems of high power consumption and low image restoration efficiency encountered in prior-art neural-network-based image restoration are alleviated.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic structural view of an image processing apparatus provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a first implementation of an image processing method provided in the embodiment of the present application;
fig. 3 is a schematic flow chart of a first implementation of step S21 provided in the embodiment of the present application;
fig. 4 is a schematic flow chart of step S211 provided in the embodiment of the present application;
fig. 5 is a schematic flow chart of a second implementation of step S21 provided in the embodiment of the present application;
fig. 6 is a schematic flow chart of a second implementation of the image processing method provided in the embodiment of the present application;
fig. 7 is a schematic flow chart of step S5 provided in the embodiment of the present application;
Fig. 8 is a schematic flow chart of step S1 provided in the embodiment of the present application.
The realization, functional characteristics and advantages of the present application will be further described with reference to the embodiments, referring to the attached drawings. Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element. Furthermore, elements having the same name in different embodiments of the present application may have the same meaning or may have different meanings, the particular meaning being determined by its interpretation in the specific embodiment or by further combination with the context of that embodiment.
It will be understood that the terms "comprises," "comprising," "includes," and/or "including" specify the presence of stated features, steps, operations, elements, components, items, categories, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, categories, and/or groups. The terms "or," "and/or," "including at least one of," and the like, as used herein, may be construed as inclusive, meaning any one or any combination. For example, "including at least one of: A, B, C" means "any one of the following: A; B; C; A and B; A and C; B and C; A and B and C"; likewise, "A, B or C" or "A, B and/or C" means "any one of the following: A; B; C; A and B; A and C; B and C; A and B and C". An exception to this definition will occur only when a combination of elements, functions, steps or operations is in some way inherently mutually exclusive.
It should be noted that, in this document, step numbers such as S1 and S2 are used for the purpose of more clearly and briefly describing the corresponding contents, and not to constitute a substantial limitation on the sequence, and those skilled in the art may perform S2 first and then S1 when implementing the present invention, which are all within the scope of protection of the present application.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" for representing elements are used only for facilitating the description of the present application, and are not of specific significance per se. Thus, "module," "component," or "unit" may be used in combination.
First, application scenarios to which the present application can be applied are introduced, such as a driving recorder or driver monitoring system in the autonomous driving field, or an eye tracker and image acquisition device in the AR or VR field. The present application provides an image processing device, an image processing method, a chip, an electronic device and a storage medium, which can solve the problems of high power consumption and low image restoration efficiency during image restoration.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. The image processing device specifically may include a pre-processing unit 10, an in-memory computing module 20, and a post-processing unit 30, where the in-memory computing module 20 is connected to the pre-processing unit 10 and the post-processing unit 30, respectively;
the preprocessing unit 10 is configured to perform image preprocessing on the obtained original image data, so as to convert the original image data into image data in a preset format, and send the image data in the preset format to the in-memory computing module.
Specifically, the preprocessing unit 10 mainly obtains the original image data (for example, an image in RAW format) input by the image sensor or by another image processing chip, and performs image preprocessing on it so that the obtained original image data is converted into the data format required by computing-in-memory (CIM). After the original image data has been converted into image data in the preset format, the image data is sent to the in-memory computing module 20 for processing. Preprocessing the original image data provides better-quality image data for the subsequent stages and thus helps improve image restoration quality.
The in-memory computing module 20 includes at least one in-memory computing unit, where the in-memory computing unit includes a neural network, and the in-memory computing unit is configured to receive the image data in the preset format sent by the preprocessing unit, perform an acceleration computation on the image data in the preset format allocated to the neural network, and send the image data after the acceleration computation to the post-processing unit.
Specifically, the in-memory computing module 20 in this embodiment includes at least one in-memory computing unit, where each in-memory computing unit corresponds to a neural network. After receiving the image data, the in-memory computing module 20 distributes it to the corresponding in-memory computing units, according to a preset execution sequence, for accelerated computation; each in-memory computing unit receives all or part of the preprocessed image data sent by the preprocessing unit 10, and the multiple in-memory computing units perform accelerated computation on the image data received by their neural networks. This improves the efficiency and quality of image processing and significantly reduces its power consumption.
It should be noted that the neural network is essentially a model; an in-memory computing unit may contain a complete neural network model or only part of one.
And the post-processing unit 30 is configured to receive the image data after the acceleration calculation sent by the in-memory calculation module, and perform image data processing on the image data after the acceleration calculation to obtain target image data.
Specifically, the post-processing unit 30 receives the image data after accelerated computation sent by the in-memory computing module 20 and performs a series of image data processing on it, including, but not limited to, inverse normalization, fixed-point conversion and data truncation, finally obtaining the target image data and completing the image restoration.
It can be seen that, in the image processing apparatus provided in this embodiment, the original image data is first preprocessed by the preprocessing unit 10 and converted into image data conforming to the format required by the in-memory computing module, which provides better-quality image data for the subsequent stages and helps improve image restoration quality and efficiency; the image data is then accelerated through the neural networks of the multiple in-memory computing units in the in-memory computing module 20, and finally processed by the post-processing unit 30, so that the image restoration quality is improved by the neural network while the problems of high power consumption and low image restoration efficiency encountered in prior-art neural-network-based image restoration are alleviated.
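As an illustration only (not part of the patented implementation), the three-stage data flow described above can be summarized in a minimal Python/NumPy sketch; all function names, the assumed black level and the normalization constants are hypothetical stand-ins for the hardware units:

```python
import numpy as np

def preprocess(raw: np.ndarray) -> np.ndarray:
    """Pre-processing unit 10: convert RAW data into the preset format (here: normalized int8)."""
    x = raw.astype(np.int32) - 64                      # black level compensation (assumed level)
    x = np.clip(x, 0, None)
    return np.clip(x * 127 // max(int(x.max()), 1), 0, 127).astype(np.int8)  # normalization

def cim_accelerate(img: np.ndarray) -> np.ndarray:
    """In-memory computing module 20: placeholder for the neural-network accelerated computation."""
    return img.astype(np.float32) / 127.0              # stands in for conv / pool / activate / scale

def postprocess(acc: np.ndarray) -> np.ndarray:
    """Post-processing unit 30: inverse normalization, fixed-point conversion and truncation."""
    return np.clip(acc * 255.0, 0, 255).astype(np.uint8)

def process_frame(raw: np.ndarray) -> np.ndarray:
    return postprocess(cim_accelerate(preprocess(raw)))

target = process_frame(np.random.randint(0, 1024, (480, 640), dtype=np.uint16))  # 10-bit RAW frame
```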
Optionally, in some embodiments, the in-memory computing module 20 is further configured to distribute the received image data in the preset format, and distribute the image data in the preset format to a plurality of in-memory computing units for performing the acceleration computation.
Optionally, in some embodiments, the distributing the received image data in the preset format, and distributing the image data in the preset format to a plurality of in-memory computing units to perform acceleration computation may specifically include:
distributing image data in a preset format to corresponding in-memory computing units according to a preset execution sequence;
the image data in the preset format distributed into the neural network is convolved, pooled, activated and/or scaled to obtain the image data after the acceleration calculation.
Specifically, after the in-memory computing module 20 obtains image data meeting the format requirement, it distributes the image data in the preset format to the corresponding in-memory computing units according to the preset execution sequence. After each in-memory computing unit receives the image data distributed by the in-memory computing module 20, it performs computations such as convolution, pooling, activation and/or scaling on the image data in its corresponding neural network, so as to obtain the image data after accelerated computation. In this way the accelerated computation of the image data is realized, and performing the neural-network computation across multiple in-memory computing units further improves the speed of image data processing.
Optionally, in some embodiments, the distributing the image data in the preset format to the corresponding in-memory computing unit according to the preset execution sequence may specifically include:
acquiring a preset execution sequence of the neural network corresponding to each in-memory computing unit, wherein the preset execution sequence comprises an execution instruction corresponding to each layer of operators of the neural network, the arrangement of weights corresponding to each layer of operators in a memory and the configuration of data paths in an image processing device;
and distributing the image data in the preset format to the neural network corresponding to each in-memory computing unit based on the preset execution sequence.
Specifically, the in-memory computing module 20 first obtains the preset execution sequence of the neural network in each in-memory computing unit, where the preset execution sequence includes the execution instruction corresponding to each layer operator of each neural network, the arrangement of the weights corresponding to each layer operator in the memory of the in-memory computing unit, and the data path configuration of each component in the image processing apparatus; here the memory refers to the part of the memory used to temporarily store data and exchange data with an external memory. The in-memory computing module 20 distributes the acquired image data in the preset format to each in-memory computing unit according to the preset execution sequence, and the in-memory computing units distribute the image data to each layer operator of their corresponding neural networks, so that the image data requiring accelerated computation is rapidly distributed to every layer of the multiple neural networks.
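For illustration, a possible software representation of such a preset execution sequence and of the tile distribution step is sketched below; the LayerStep/CimUnit structures, the field names and the row-wise splitting policy are assumptions, not the patented encoding:

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class LayerStep:
    """One entry of the preset execution sequence (hypothetical representation)."""
    op: str           # execution instruction for this layer operator: "conv", "pool", ...
    weight_addr: int  # where this layer's weights are arranged in the CIM memory
    dst: str          # data-path configuration: "next_unit", "post_processing" or "memory"

@dataclass
class CimUnit:
    unit_id: int
    schedule: List[LayerStep] = field(default_factory=list)

def dispatch(image: np.ndarray, units: List[CimUnit]) -> dict:
    """Split the preset-format image into tiles and hand each tile to a CIM unit
    together with that unit's preset execution sequence (illustrative policy only)."""
    tiles = np.array_split(image, len(units), axis=0)
    return {u.unit_id: (tile, u.schedule) for u, tile in zip(units, tiles)}

units = [CimUnit(0, [LayerStep("conv", 0x0000, "next_unit"),
                     LayerStep("pool", 0x2000, "post_processing")]),
         CimUnit(1, [LayerStep("conv", 0x4000, "memory")])]
plan = dispatch(np.zeros((8, 8), dtype=np.int8), units)
```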
Optionally, the convolving, pooling, activating and/or scaling the image data in the preset format allocated to the neural network may specifically further include:
the convolved image data, the pooled image data, the activated image data and the scaled image data are sent to the post-processing unit 30 and/or the memory 50, respectively.
Specifically, after performing convolution, pooling, activation, scaling and/or other accelerated computations on the image data allocated to its corresponding neural network, the in-memory computing unit sends the image data obtained after each accelerated computation to the post-processing unit 30 and/or the memory 50. When sending the image data to the memory 50, the in-memory computing unit stores it in the memory 50 according to the preset execution sequence, so that when the image data in the memory 50 needs to be read later, the data read back is complete and ordered.
In a specific embodiment, for the multiple in-memory computing units 200 of the in-memory computing module 20, the in-memory computing module 20 is specifically configured to perform accelerated processing, based on the neural network, on the image data that has been preprocessed. After the in-memory computing module 20 receives the preprocessed image data sent by the preprocessing unit 10, it distributes the image data to the multiple in-memory computing units 200 for accelerated data computation, and the neural networks of the in-memory computing units perform accelerated computation on the image data, including but not limited to convolution, pooling, activation and/or scaling, so as to convert the RAW image into an RGB image of int8/int16/int32.
As shown in fig. 1, the in-memory computing module 20 further includes vector processing units 201; after the in-memory computing module 20 distributes the image data to the plurality of in-memory computing units 200, the image data is convolved, pooled, activated and/or scaled with the help of the plurality of vector processing units 201, finally yielding the image data after accelerated computation.
The image data processed by each in-memory computing unit may be of the same type or of different types. For example, a first in-memory computing unit may perform convolution acceleration on the image data, while a second in-memory computing unit may pool the image data convolved by the first unit, pool image data obtained from the memory, or itself perform convolution; that is, the processing performed by the individual in-memory computing units may be independent or may form a data-flow relation.
The image data processed by the plurality of in-memory computing units in the in-memory computing module 20 may be the image data transmitted from the preprocessing unit 10, or may be image data read from the memory 50 by the in-memory computing module 20. After the accelerated processing, the in-memory computing module 20 may transmit the accelerated image data to the post-processing unit 30, or store it as intermediate data in the memory 50. It should be noted that the in-memory computing module 20 in this embodiment may be a neural network accelerator based on in-memory computing.
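The following sketch, provided for illustration only, shows software stand-ins for the convolution, pooling, activation and scaling operators and one possible data-flow relation between two units; on the actual device these operations are carried out in the in-memory computing arrays and vector processing units 201, so the NumPy functions below are assumptions:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution (software stand-in for the in-memory MAC arrays)."""
    h, w = k.shape
    out = np.zeros((x.shape[0] - h + 1, x.shape[1] - w + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

def pool2(x):   # 2x2 max pooling
    h, w = (x.shape[0] // 2) * 2, (x.shape[1] // 2) * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def relu(x):    # activation
    return np.maximum(x, 0)

def scale(x, s):  # scaling
    return x * s

# One possible chaining: unit 0 convolves and activates a tile, unit 1 pools and scales
# the result (a data-flow relation); another unit could process its own tile independently.
tile = np.random.rand(16, 16).astype(np.float32)
unit0_out = relu(conv2d(tile, np.ones((3, 3), dtype=np.float32) / 9))
unit1_out = scale(pool2(unit0_out), 0.5)
```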
Optionally, as shown in fig. 1, the image processing apparatus may specifically further include a controller 40 and a memory 50, where the controller 40 is respectively connected to the pre-processing unit 10, the in-memory computing module 20, the post-processing unit 30, and the memory 50 is respectively connected to the pre-processing unit 10, the in-memory computing module 20, the post-processing unit 30, and the controller 40;
a controller 40 for controlling a data flow direction of the image processing apparatus through a control bus and performing parameter configuration on the pre-processing unit 10, the in-memory computing module 20, the post-processing unit 30, and the memory 50;
specifically, for the controller 40, the controller 40 is respectively connected to the pre-processing unit 10, the in-memory computing module 20, the post-processing unit 30 and the memory 50 of the image processing apparatus, and the controller 40 does not participate in any data flow processing, and only controls the data flow of the whole image processing apparatus through the control bus, for example, how the image data flows from the pre-processing unit 10 to the in-memory computing module 20 and then flows to the post-processing unit 30; how to distribute the image output from the preprocessing unit 10 to each in-memory computing unit 200 in the in-memory computing module 20, and so on; the controller 40 is also used to configure configuration parameters of various components of the image processing apparatus, including, but not limited to, physical configuration parameters of the pre-processing unit 10, the in-memory computing module 20, the post-processing unit 30, and the memory 50, and configuration parameters of the in-memory computing unit and the neural network in the in-memory computing module 20.
The memory 50 is used for storing the acquired original image data and configuration data of the image processing apparatus, wherein the configuration data includes configuration parameters of the controller 40, configuration parameters of the in-memory computing module 20, first intermediate data of the in-memory computing module 20 and second intermediate data of the post-processing unit 30.
Specifically, the memory 50 is connected to the preprocessing unit 10, the in-memory computing module 20, the post-processing unit 30 and the controller 40, respectively, and is configured to store the original image data (such as RAW-format image data input by an image sensor or another image processing chip), the configuration parameters of each component of the image processing device, the configuration information of the controller 40, the weight information of the in-memory computing module 20, the first intermediate data of the in-memory computing module 20, and the second intermediate data of the post-processing unit 30. The first intermediate data is the cache data produced when each in-memory computing unit in the in-memory computing module 20 performs accelerated neural-network computation on the image data, and the second intermediate data is the cache data produced when the post-processing unit 30 processes the image data after accelerated computation. The weight information of the in-memory computing module 20 is used to determine the configuration information for image processing under different brightness scenes.
Preferably, the memory 50 in this embodiment may be selected as an on-chip memory, which has the advantages of fast reading speed and low power consumption compared to an off-chip memory.
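Purely as an illustration of the kinds of contents enumerated above, the memory 50 could be modeled by a structure such as the following; every field name and type here is an assumption:

```python
from dataclasses import dataclass, field
from typing import Dict
import numpy as np

@dataclass
class OnChipMemory:
    """Illustrative layout of what the memory 50 holds (field names are assumptions)."""
    raw_frames: Dict[int, np.ndarray] = field(default_factory=dict)            # original image data
    controller_config: Dict[str, int] = field(default_factory=dict)            # configuration of controller 40
    cim_config: Dict[str, int] = field(default_factory=dict)                   # configuration / weight info of module 20
    first_intermediate: Dict[str, np.ndarray] = field(default_factory=dict)    # CIM cache data
    second_intermediate: Dict[str, np.ndarray] = field(default_factory=dict)   # post-processing cache data

mem = OnChipMemory()
mem.raw_frames[0] = np.zeros((480, 640), dtype=np.uint16)
mem.cim_config["weight_set"] = 1   # e.g. weight set selected for a given brightness scene
```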
Optionally, in some embodiments, the preprocessing unit 10 is further configured to obtain original statistics corresponding to the original image data; the controller 40 is further configured to dynamically adjust the configuration parameters of the in-memory computing module according to the obtained raw statistics.
Specifically, in this embodiment, the preprocessing unit 10 is further configured to obtain the original statistical data of the original image data, and the controller 40 is further configured to obtain the original statistical data sent by the preprocessing unit 10 and to dynamically adjust the configuration parameters of the in-memory computing module 20, including those of its neural networks, according to that data, so as to adapt to image restoration under different imaging scenarios.
Optionally, in some embodiments, the dynamically adjusting the configuration parameters of the in-memory computing module according to the obtained raw statistics may specifically include:
determining configuration parameters of the neural network in the in-memory calculation module during the previous frame based on the original statistical data of the original image data of the previous frame, wherein the configuration parameters comprise a weight value, a bias value, a quantization value and a gain value;
Taking the configuration parameter of the previous frame as the initial configuration parameter of the current frame of the neural network;
comparing the original statistical data of the original image data of the previous frame with the original statistical data of the original image data of the current frame to obtain a comparison result;
based on the comparison result, the initial configuration parameters of the neural network in the current frame are adjusted.
Specifically, the controller 40 dynamically adjusts the configuration parameters of the in-memory computing module 20 according to the original statistical data as follows. First, the original statistical data of the previous frame of original image data (for example, an image in RAW format) is acquired, including an original brightness statistic and an original chromaticity statistic, and the configuration parameters of the neural network of the in-memory computing module 20 for that frame are determined from this statistical data; the configuration parameters include a weight value, a bias value, a quantization value and a gain value, where the gain value depends on the image information of each frame. These configuration parameters are retained as the initial configuration parameters of the neural network for processing the current frame of image data. The original statistical data of the previous frame is then compared with the original statistical data of the current frame to obtain a comparison result, and if the comparison result exceeds a preset threshold, the initial configuration parameters of the neural network for the current frame are dynamically adjusted. In this way the adaptability of the neural network is exploited, and its configuration parameters are dynamically adjusted according to statistical parameters such as exposure and color under different scene conditions, so that image noise reduction and enhancement are achieved simultaneously, the problems of excessive image noise and poor restoration quality are avoided, and the requirements of high-image-quality applications are met.
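A minimal sketch of this per-frame adjustment logic is given below, assuming a single brightness statistic, a hypothetical relative-change threshold and a simple gain-rescaling policy; the patent does not specify these concrete values:

```python
THRESHOLD = 0.1   # assumed relative-change threshold; the patent only states "a preset threshold"

def adjust_config(prev_stats: dict, cur_stats: dict, prev_config: dict) -> dict:
    """Carry the previous frame's configuration over as the initial configuration of the
    current frame, and only adjust it when the statistics change by more than the threshold."""
    config = dict(prev_config)                                   # initial config of current frame
    prev_mean = max(prev_stats["luma_mean"], 1e-6)
    delta = abs(cur_stats["luma_mean"] - prev_stats["luma_mean"]) / prev_mean
    if delta > THRESHOLD:                                        # comparison result exceeds threshold
        config["gain"] = prev_config["gain"] * cur_stats["luma_mean"] / prev_mean
        # weight / bias / quantization values could likewise be re-selected here (policy assumed)
    return config

prev = {"luma_mean": 80.0, "chroma_mean": 12.0}
cur  = {"luma_mean": 45.0, "chroma_mean": 11.0}
cfg  = adjust_config(prev, cur, {"weights": "set_lowlight", "bias": 0, "quant": 8, "gain": 1.0})
```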
Optionally, in some embodiments, the performing image preprocessing on the acquired original image data to convert the original image data into image data in a preset format may specifically include:
performing brightness statistics and chromaticity statistics on a global or local region of interest of the original image data to obtain corresponding original statistics data; and/or
And performing black level compensation, nonlinear transformation and normalization processing on the original image data to obtain image data in a preset format.
Specifically, the preprocessing unit 10 first acquires the RAW image data input by the image sensor or by another image processing chip, and performs brightness statistics and chromaticity statistics on it: for example, the RAW image is counted at 10/12/14/16 bits, and the global mean and histogram of the RAW image, or a local 9×9 block and a region of interest (ROI), are computed, yielding original statistical data of the original image data, including but not limited to an original brightness statistic and an original chromaticity statistic. Then black level compensation is performed on the original image data (the RAW image) to obtain a RAW image with the black level removed. Next, a nonlinear transformation is applied to the original image data, mainly to decompress HDR 10-14 bit compressed data; the RAW image is processed point by point, for example decompressing 10-bit data into 16-bit data. Then the original image data is normalized, processing the RAW image globally, for example converting 10/12/14/16-bit image data into int8 or uint8 image data, so that it can be supplied to the in-memory computing module 20 for subsequent processing. After this image preprocessing, image data in the format required by the in-memory computing module 20 is obtained and sent to the in-memory computing module 20 by the preprocessing unit 10, so that the in-memory computing module 20 does not need to convert the data format again when it later performs accelerated computation, which improves the efficiency of the accelerated computation.
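For illustration, the preprocessing steps above can be sketched as follows for a 10-bit RAW frame; the assumed black level, the gamma-like curve used as the nonlinear decompression, and the statistics chosen are examples, not the patented parameters:

```python
import numpy as np

BLACK_LEVEL = 64          # assumed value; the actual level is sensor-dependent

def preprocess_raw(raw10: np.ndarray):
    """Pre-processing sketch for a 10-bit RAW frame: statistics, black level compensation,
    nonlinear (decompression) transform and normalization to uint8. Illustrative only."""
    stats = {
        "luma_mean": float(raw10.mean()),                       # global brightness statistic
        "hist": np.histogram(raw10, bins=64, range=(0, 1024))[0],
        "roi_mean": float(raw10[:9, :9].mean()),                # e.g. a local 9x9 region of interest
    }
    x = raw10.astype(np.int32) - BLACK_LEVEL                    # black level compensation
    x = np.clip(x, 0, None)
    x = (x.astype(np.float32) / 1023.0) ** (1 / 2.2) * 65535.0  # stand-in nonlinear transform (10 -> 16 bit)
    img_u8 = np.clip(x / 65535.0 * 255.0, 0, 255).astype(np.uint8)   # normalization to uint8
    return img_u8, stats

frame, raw_stats = preprocess_raw(np.random.randint(0, 1024, (480, 640), dtype=np.uint16))
```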
Optionally, in some embodiments, performing image data processing on the image data after the acceleration calculation to obtain target image data may specifically include:
the post-processing unit 30 performs inverse normalization, pointing, and data truncation processing on the image data after the acceleration calculation to obtain target image data.
Specifically, the post-processing unit 30 is configured to receive the image data after accelerated processing sent by the in-memory computing module 20 and to perform image data processing on it, including but not limited to: inverse normalization, reversing the normalization applied by the preprocessing unit 10 and restoring the wide-bit-width image data point by point; fixed-point conversion, converting FP16 image data into int16 image data; and data truncation, clipping pixel data that is oversaturated or negative, so that each int8/int16/int32 pixel in the image is converted into a uint8 pixel. After this image data processing, the target image data is obtained and the image restoration is completed.
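A minimal sketch of these three post-processing steps is shown below; the scale factor and the uint8 output range are assumptions that mirror the example normalization used in preprocessing:

```python
import numpy as np

def postprocess(acc: np.ndarray, scale: float = 255.0) -> np.ndarray:
    """Post-processing sketch: inverse normalization, fixed-point conversion and truncation.
    The scale factor mirrors the normalization done in pre-processing (assumed value)."""
    x = acc.astype(np.float32) * scale            # inverse normalization (point-by-point)
    x = np.rint(x).astype(np.int32)               # fixed-point conversion (e.g. FP16 -> integer)
    return np.clip(x, 0, 255).astype(np.uint8)    # truncate oversaturated / negative pixels to uint8

rgb_u8 = postprocess(np.random.rand(480, 640, 3).astype(np.float16))
```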
The embodiment of the application also provides a chip comprising the image processing device.
The embodiment of the application also provides electronic equipment, which comprises the image processing device.
As shown in fig. 2, the embodiment of the present application further provides an image processing method, which may be executed in an image processing apparatus, and may specifically include the following steps:
s1, performing image preprocessing on the obtained original image data to convert the original image data into image data in a preset format, and sending the image data in the preset format to an in-memory computing module.
Specifically, in step S1, the original image data (for example, an image in RAW format) input by the image sensor or by another image processing chip is obtained and subjected to image preprocessing, so that the obtained original image data is converted into the data format required by computing-in-memory (CIM). Preprocessing the original image data provides better-quality image data for the subsequent steps and thus helps improve image restoration quality.
S2, receiving image data in a preset format, performing acceleration calculation on the image data in the preset format distributed in the neural network, and sending the image data after the acceleration calculation to a post-processing unit.
Specifically, in step S2, after the image data is received, it is distributed to the corresponding neural network according to the preset execution sequence for accelerated computation; accelerating the computation on the image data received by the neural network improves the efficiency and quality of image processing and significantly reduces its power consumption.
S3, receiving the image data after the acceleration calculation, and performing image data processing on the image data after the acceleration calculation to obtain target image data.
Specifically, in step S3, a series of image data processing is performed on the image data after accelerated computation, including but not limited to inverse normalization, fixed-point conversion and data truncation, finally obtaining the target image data and completing the image restoration.
It can be seen that, in the image processing method provided by the embodiment of the application, the original image data is first converted, through image preprocessing, into image data conforming to the format required by the in-memory computing module, which provides better-quality image data for the subsequent steps and helps improve image restoration quality and efficiency; the image data is then accelerated by multiple in-memory computing units and finally processed to obtain the target image data, so that the image restoration quality is improved by the neural network while the problems of high power consumption and low image restoration efficiency encountered in prior-art neural-network-based image restoration are alleviated.
Optionally, in some embodiments, the image processing method further includes:
s21, distributing the received image data in the preset format, and distributing the image data in the preset format into a plurality of in-memory computing units for acceleration computation.
Optionally, as shown in fig. 3, in some embodiments, step S21 may specifically include:
s211, distributing image data in a preset format to corresponding in-memory computing units according to a preset execution sequence;
s212, convolving, pooling, activating and/or scaling the image data distributed to the neural network in the preset format to obtain the image data after the acceleration calculation.
Specifically, in step S21, after the image data meeting the format requirement is obtained, the image data in the preset format is distributed to the corresponding in-memory computing units according to the preset execution sequence. After each in-memory computing unit receives the image data distributed by the in-memory computing module 20, it performs computations such as convolution, pooling, activation and/or scaling on the image data in its corresponding neural network, so as to obtain the image data after accelerated computation. In this way the accelerated computation of the image data is realized, and performing the neural-network computation across multiple in-memory computing units further improves the speed of image data processing.
Optionally, as shown in fig. 4, in some embodiments, step S211 may specifically include:
s2111, acquiring a preset execution sequence of the neural network corresponding to each in-memory computing unit, wherein the preset execution sequence comprises execution instructions corresponding to each layer of operators of the neural network, arrangement of weights corresponding to each layer of operators in a memory and data path configuration in an image processing device;
s2112, distributing image data in a preset format to the neural network corresponding to each in-memory computing unit based on a preset execution sequence.
Specifically, in step S211, the preset execution sequence of the neural network in each in-memory computing unit is first obtained, where the preset execution sequence includes the execution instruction corresponding to each layer operator of each neural network, the arrangement of the weights corresponding to each layer operator in the memory of the in-memory computing unit, and the data path configuration of each component in the image processing device. The acquired image data in the preset format is then distributed to each in-memory computing unit according to the preset execution sequence, and the in-memory computing units distribute the image data to each layer operator of their corresponding neural networks, so that the image data requiring accelerated computation is rapidly distributed to the multiple neural networks.
Optionally, as shown in fig. 5, in some embodiments, step S212 further includes:
s213, the image data obtained after convolution, the pooled image data, the activated image data and the scaled image data are respectively sent to a target position to be stored.
Specifically, in step S213, the image data obtained after each accelerated computation is stored in the target location, for example the memory in the memory 50, according to the preset execution sequence, so that when the image data in the memory 50 needs to be read later, the data read back is complete and ordered.
Optionally, in some embodiments, the image processing method further includes:
the method comprises the steps that a controller controls the data flow direction of an image processing device according to a control bus, and parameter configuration is carried out on a pre-processing unit, an in-memory computing module, a post-processing unit and a memory;
the method comprises the steps of storing the acquired original image data and configuration data of the image processing device, wherein the configuration data comprises configuration parameters of a controller, configuration parameters of an in-memory computing module, first intermediate data of the in-memory computing module and second intermediate data of a post-processing unit.
Specifically, the controller 40 does not participate in any data-flow processing; it only controls the data flow of the whole image processing apparatus through the control bus, for example, how the image data flows from the preprocessing unit 10 to the in-memory computing module 20 and then to the post-processing unit 30, or how the image output by the preprocessing unit 10 is distributed to each in-memory computing unit 200 in the in-memory computing module 20, and so on. The configuration parameters of the various components of the image processing apparatus, including but not limited to the physical configuration parameters of the preprocessing unit 10, the in-memory computing module 20, the post-processing unit 30 and the memory 50, as well as the configuration parameters of the in-memory computing units and the neural networks in the in-memory computing module 20, are also configured by the controller 40.
Specifically, the memory 50 is used to store the original image data (such as RAW-format image data input by an image sensor or another image processing chip), the configuration parameters of each component of the image processing device, the configuration information of the controller 40, the weight information of the in-memory computing module 20, the first intermediate data of the in-memory computing module 20 and the second intermediate data of the post-processing unit 30. The first intermediate data is the cache data produced after each in-memory computing unit 200 performs acceleration calculation on the image data with its neural network, and the second intermediate data is the cache data produced after the post-processing unit 30 processes the image data obtained from the acceleration calculation. The weight information of the in-memory computing module 20 is used to determine the configuration information for image processing under different brightness scenes.
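For orientation, the sketch below lists the kinds of data the memory 50 holds as fields of one structure; the field names and types are illustrative assumptions, not the chip's actual buffer or register map.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DeviceMemory:
    raw_image: np.ndarray           # RAW-format input from an image sensor or another chip
    controller_cfg: dict            # configuration information of the controller
    component_cfg: dict             # configuration parameters of each component of the device
    weights: dict                   # weight information, selected per brightness scene
    first_intermediate: np.ndarray  # cache data produced by the in-memory computing units
    second_intermediate: np.ndarray # cache data produced by the post-processing unit
```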
Optionally, as shown in fig. 6, in some embodiments, the image processing method may specifically further include:
S4, acquiring, through the preprocessing unit, original statistical data corresponding to the original image data;
S5, dynamically adjusting, through the controller, the configuration parameters of the in-memory computing module according to the acquired original statistical data.
Optionally, as shown in fig. 7, in some embodiments, step S5 may specifically include:
S51, determining, based on the original statistical data of the original image data of the previous frame, the configuration parameters of the neural network in the in-memory computing module during the previous frame, the configuration parameters including a weight value, a bias value, a quantization value and a gain value;
S52, taking the configuration parameters of the previous frame as the initial configuration parameters of the neural network for the current frame;
S53, comparing the original statistical data of the original image data of the previous frame with the original statistical data of the original image data of the current frame to obtain a comparison result;
S54, adjusting the initial configuration parameters of the neural network for the current frame based on the comparison result.
Specifically, for step S5, the original statistical data of the original image data of the previous frame (for example, an image in RAW format) is first obtained, including an original luminance statistic and an original chrominance statistic, and the current configuration parameters of the neural network are determined from this statistical data; the configuration parameters include a weight value, a bias value, a quantization value and a gain value, with the gain value depending on the image information of each frame. These configuration parameters are retained as the initial configuration parameters of the neural network when processing the image data of the current frame. The original statistical data of the previous frame is then compared with that of the current frame; if the comparison result exceeds a preset threshold, the initial configuration parameters used for the current frame are dynamically adjusted. In this way the adaptability of the neural network is exploited: its configuration parameters are dynamically adjusted through statistics such as exposure and color under different scene conditions, so that image noise reduction and enhancement are achieved simultaneously, the problems of heavy image noise and poor restoration quality are avoided, and the requirements of high-image-quality applications are met.
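A minimal numerical sketch of steps S51–S54 follows, assuming the statistics are reduced to per-frame luminance and chrominance means and that only the gain value is touched when the scene changes; the dictionary keys, the threshold and the gain update rule are illustrative assumptions, not the device's disclosed adjustment policy.

```python
def update_config(prev_stats, curr_stats, prev_cfg, threshold=0.1):
    """prev_cfg holds the weight, bias, quantization and gain values of the previous frame."""
    cfg = dict(prev_cfg)  # S52: reuse the previous frame's parameters as the initial ones
    # S53: compare the previous and current frames' luminance/chrominance statistics
    delta = max(abs(curr_stats[k] - prev_stats[k]) / (abs(prev_stats[k]) + 1e-6)
                for k in ("luma_mean", "chroma_mean"))
    if delta > threshold:                 # S54: adjust only if the scene changed enough
        cfg["gain"] *= 1.0 + delta        # e.g. scale the gain with the brightness change
    return cfg

# Example: a noticeably brighter current frame nudges the gain value upward
prev = {"luma_mean": 0.40, "chroma_mean": 0.21}
curr = {"luma_mean": 0.55, "chroma_mean": 0.22}
cfg = update_config(prev, curr, {"weight": None, "bias": 0.0, "quant": 8, "gain": 1.0})
```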
Optionally, as shown in fig. 8, in some embodiments, step S1 may specifically include:
S11, performing brightness statistics and chromaticity statistics on a global or local region of interest of the original image data to obtain corresponding original statistical data; and/or
S12, performing black level compensation, nonlinear transformation and normalization processing on the original image data to obtain image data in a preset format.
Specifically, for step S1, RAW image data input by an image sensor or another image processing chip is first obtained, and luminance statistics and chromaticity statistics are performed on it; for example, the RAW image is counted at 10/12/14/16 bits, and the global mean and histogram of the RAW image, or local 9×9 blocks and ROI regions (regions of interest), are computed, yielding the original statistical data of the original image data, including but not limited to original luminance statistics and original chrominance statistics. Black level compensation is then applied to the original image data (the RAW image) to obtain a RAW image with the black level removed. Next, a nonlinear transformation is applied to the original image data, mainly to decompress HDR data compressed to 10–14 bits, processing the RAW image point by point, for example decompressing 10-bit data into 16-bit data. The original image data is then normalized, processing the RAW image globally, for example converting 10/12/14/16-bit image data into int8 or uint8 image data for the subsequent acceleration calculation. After this image preprocessing, the image data is already in the format required by the subsequent acceleration calculation, so no further format conversion is needed when the image data is accelerated later, which improves the efficiency of the acceleration calculation.
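A minimal numerical sketch of this preprocessing flow is given below, assuming a 10-bit RAW frame, a fixed black level, and a simple power-law expansion standing in for the device's actual nonlinear decompression; all constants and the final uint8 scaling are illustrative assumptions.

```python
import numpy as np

def preprocess(raw10: np.ndarray, black_level: int = 64):
    # Luminance statistics over the whole frame (a local ROI can be sliced the same way)
    luma_mean = float(raw10.mean())
    hist, _ = np.histogram(raw10, bins=256, range=(0, 1023))

    # Black level compensation: subtract the sensor pedestal and clamp at zero
    x = np.clip(raw10.astype(np.int32) - black_level, 0, None)

    # Nonlinear transformation: expand the 10-bit companded data into a 16-bit range
    x16 = np.round((x / (1023 - black_level)) ** 2.2 * 65535.0).astype(np.uint16)

    # Normalization: map to uint8 so the in-memory computing units receive a fixed format
    x8 = (x16 >> 8).astype(np.uint8)
    return x8, {"luma_mean": luma_mean, "hist": hist}

frame = np.random.randint(0, 1024, size=(1080, 1920), dtype=np.uint16)
preset_format, stats = preprocess(frame)
```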
Optionally, in some embodiments, step S3 may specifically include:
and performing inverse normalization, fixed-point conversion and data truncation on the image data after the acceleration calculation to obtain the target image data.
Specifically, step S3 mainly performs image data processing on the image data obtained after the acceleration calculation, the image data processing including but not limited to: inverse normalization, which reverses the normalization applied by the pre-processing unit 10 and restores the wide-bit-width image data point by point; fixed-point conversion, which quantizes FP16 image data into int16 image data; and data truncation, which clips over-saturated or negative pixel values so that each int8/int16/int32 pixel in the image is converted into a uint8 pixel. The target image data is obtained after this image data processing, completing the image restoration.
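The sketch below illustrates these three post-processing operations under the same assumptions as the preprocessing example; the scale and zero-point values are placeholders for the device's real inverse-normalization parameters.

```python
import numpy as np

def postprocess(accel_out: np.ndarray, scale: float = 255.0, zero_point: float = 0.0):
    # Inverse normalization: undo the point-wise normalization applied during preprocessing
    x = accel_out.astype(np.float32) * scale + zero_point

    # Fixed-point conversion: quantize the floating-point values (e.g. FP16) to int16
    x16 = np.round(x).astype(np.int16)

    # Data truncation: clip over-saturated or negative pixels and pack the result into uint8
    return np.clip(x16, 0, 255).astype(np.uint8)

target = postprocess(np.random.rand(1080, 1920).astype(np.float16))
```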
An embodiment of the present application further provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements an image processing method, including the steps of: performing image preprocessing on the obtained original image data to convert the original image data into image data in a preset format, and sending the image data in the preset format to an in-memory computing module; receiving image data in a preset format, performing acceleration calculation on the image data in the preset format distributed in the neural network, and sending the image data after the acceleration calculation to a post-processing unit; and receiving the image data after the acceleration calculation, and performing image data processing on the image data after the acceleration calculation to obtain target image data.
According to the image processing method, the original image data is first converted, through image preprocessing, into image data in the format required by the in-memory computing module, providing better-quality data for the subsequent stages and thereby improving image restoration quality and efficiency; the image data is then subjected to acceleration calculation by a plurality of in-memory computing units and finally to image data processing. In this way the neural network improves image restoration quality while alleviating the problems of high power consumption and low image restoration efficiency found in the prior art when image restoration is performed with a neural network.
It can be understood that the above scenarios are merely examples and do not limit the application scenarios of the technical solutions provided in the embodiments of the present application; the technical solutions of the present application may also be applied to other scenarios. For example, as those of ordinary skill in the art will appreciate, with the evolution of system architectures and the emergence of new service scenarios, the technical solutions provided in the embodiments of the present application remain applicable to similar technical problems.
The foregoing embodiment numbers of the present application are for description only and do not represent the advantages or disadvantages of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device of the embodiment of the application can be combined, divided and pruned according to actual needs.
In this application, the same or similar terms, concepts, technical solutions and/or application scenario descriptions are generally described in detail only where they first appear; when they appear again later, they are generally not repeated for the sake of brevity, and the earlier detailed descriptions may be consulted when interpreting the technical solutions of the present application.
In this application, each embodiment is described with its own emphasis; for parts of an embodiment that are not detailed, reference may be made to the related descriptions of other embodiments.
The technical features of the technical solutions of the present application may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of the present application.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware alone, but in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present application, or the part of it that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium as described above (e.g. ROM/RAM, magnetic disk, optical disk), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a controlled terminal, a network device, or the like) to perform the method of each embodiment of the present application.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., a solid state disk (SSD)), among others.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the claims, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application, or direct or indirect application in other related technical fields are included in the scope of the claims of the present application.

Claims (15)

1. The image processing device is characterized by comprising a pre-processing unit, an in-memory computing module, a post-processing unit, a controller and a memory, wherein the in-memory computing module is respectively connected with the pre-processing unit and the post-processing unit; the controller is respectively connected with the preprocessing unit, the in-memory computing module, the post-processing unit and the memory, and the memory is respectively connected with the preprocessing unit, the in-memory computing module, the post-processing unit and the controller;
the preprocessing unit is used for carrying out image preprocessing on the acquired original image data so as to convert the original image data into image data in a preset format, and sending the image data in the preset format to the in-memory computing module;
the in-memory computing module comprises at least one in-memory computing unit, wherein the in-memory computing unit comprises a neural network, and is used for receiving the image data in the preset format sent by the preprocessing unit, performing acceleration computation on the image data in the preset format distributed in the neural network, and sending the image data after the acceleration computation to the post-processing unit; the in-memory computing module is further configured to distribute the received image data in the preset format, distribute the image data in the preset format to a plurality of in-memory computing units, and perform acceleration computation, where the method includes: distributing the image data in the preset format to corresponding in-memory computing units according to a preset execution sequence; convolving, pooling, activating and/or scaling the image data of the preset format distributed in the neural network to obtain image data after acceleration calculation; the preset execution sequence comprises an execution instruction corresponding to each layer of operators of the neural network, arrangement of weights corresponding to each layer of operators in a memory and data path configuration in an image processing device;
The post-processing unit is used for receiving the image data after the acceleration calculation sent by the in-memory calculation module, and performing image data processing on the image data after the acceleration calculation to obtain target image data;
the controller is used for controlling the data flow direction of the image processing device through a control bus and carrying out parameter configuration on the preprocessing unit, the in-memory computing module, the post-processing unit and the memory;
the memory is used for storing the acquired original image data and the configuration data of the image processing device, wherein the configuration data comprises the configuration parameters of the controller, the configuration parameters of the in-memory computing module, the first intermediate data of the in-memory computing module and the second intermediate data of the post-processing unit;
the preprocessing unit is also used for acquiring original statistical data corresponding to the original image data; the controller is further configured to dynamically adjust a configuration parameter of the in-memory computing module according to the obtained original statistical data.
2. The image processing apparatus according to claim 1, wherein the distributing the image data of the preset format to the corresponding in-memory computing unit in a preset execution order includes:
Acquiring a preset execution sequence of the neural network corresponding to each in-memory computing unit, wherein the preset execution sequence comprises an execution instruction corresponding to each layer of operator of the neural network, the arrangement of weights corresponding to each layer of operator in a memory and the configuration of data paths in an image processing device;
and distributing the image data in the preset format to the neural network corresponding to each in-memory computing unit based on the preset execution sequence.
3. The image processing apparatus according to claim 1, wherein the convoluting, pooling, activating and/or scaling of the image data of the preset format allocated into a neural network, further comprises:
and respectively transmitting the convolved image data, the pooled image data, the activated image data and the scaled image data to a post-processing unit and/or a memory.
4. The image processing apparatus according to claim 1, wherein the dynamically adjusting the configuration parameters of the in-memory computing module according to the acquired raw statistical data comprises:
determining configuration parameters of the neural network in the in-memory calculation module during the previous frame based on the original statistical data of the original image data of the previous frame, wherein the configuration parameters comprise a weight value, a bias value, a quantization value and a gain value;
Taking the configuration parameter of the previous frame as the initial configuration parameter of the current frame of the neural network;
comparing the original statistical data of the previous frame of original image data with the original statistical data of the current frame of original image data to obtain a comparison result;
and adjusting initial configuration parameters of the neural network in the current frame based on the comparison result.
5. The image processing apparatus according to claim 1, wherein the image preprocessing of the acquired original image data to convert the original image data into image data of a preset format, comprises:
performing brightness statistics and chromaticity statistics on a global or local region of interest of the original image data to obtain corresponding original statistics data; and/or
And performing black level compensation, nonlinear transformation and normalization processing on the original image data to obtain image data in a preset format.
6. The image processing apparatus according to claim 1, wherein the performing image data processing on the image data after the acceleration calculation to obtain target image data includes:
and carrying out inverse normalization, fixed point and data truncation processing on the image data after the acceleration calculation by the post-processing unit to obtain target image data.
7. An image processing method, characterized by comprising the steps of:
performing image preprocessing on the obtained original image data to convert the original image data into image data in a preset format, and sending the image data in the preset format to an in-memory computing module;
receiving the image data in the preset format, performing acceleration calculation on the image data in the preset format distributed in the neural network, and sending the image data after the acceleration calculation to a post-processing unit; distributing the received image data in the preset format, distributing the image data in the preset format into a plurality of in-memory computing units for acceleration computing, and comprising the following steps: distributing the image data in the preset format to corresponding in-memory computing units according to a preset execution sequence; convolving, pooling, activating and/or scaling the image data of the preset format distributed in the neural network to obtain image data after acceleration calculation; the preset execution sequence comprises an execution instruction corresponding to each layer of operators of the neural network, arrangement of weights corresponding to each layer of operators in a memory and data path configuration in an image processing device;
Receiving the image data after the acceleration calculation, and performing image data processing on the image data after the acceleration calculation to obtain target image data;
the method comprises the steps that a controller controls the data flow direction of an image processing device according to a control bus, and parameter configuration is carried out on a pre-processing unit, an in-memory computing module, a post-processing unit and a memory;
storing the acquired original image data and configuration data of an image processing device, wherein the configuration data comprises configuration parameters of a controller, configuration parameters of the in-memory computing module, first intermediate data of the in-memory computing module and second intermediate data of the post-processing unit;
acquiring original statistical data corresponding to the original image data through the preprocessing unit;
and dynamically adjusting the configuration parameters of the in-memory computing module by the controller according to the acquired original statistical data.
8. The image processing method according to claim 7, wherein the distributing the image data of the preset format to the corresponding in-memory computing unit in a preset execution order includes:
acquiring a preset execution sequence of the neural network corresponding to each in-memory computing unit, wherein the preset execution sequence comprises an execution instruction corresponding to each layer of operator of the neural network, the arrangement of weights corresponding to each layer of operator in a memory and the configuration of data paths in an image processing device;
And distributing the image data in the preset format to the neural network corresponding to each in-memory computing unit based on the preset execution sequence.
9. The image processing method according to claim 7, wherein after the convolving, pooling, activating and/or scaling the image data of the preset format allocated to the neural network to obtain the image data after the acceleration calculation, further comprising:
and respectively transmitting the image data obtained after convolution, the pooled image data, the activated image data and the zoomed image data to a target position for storage.
10. The image processing method according to claim 7, wherein the dynamically adjusting the configuration parameters according to the acquired raw statistical data includes:
determining configuration parameters of the neural network in the in-memory calculation module during the previous frame based on the original statistical data of the original image data of the previous frame, wherein the configuration parameters comprise a weight value, a bias value, a quantization value and a gain value;
taking the configuration parameter of the previous frame as the initial configuration parameter of the current frame of the neural network;
comparing the original statistical data of the previous frame of original image data with the original statistical data of the current frame of original image data to obtain a comparison result;
And adjusting initial configuration parameters of the neural network in the current frame based on the comparison result.
11. The image processing method according to claim 7, wherein the image preprocessing of the acquired original image data to convert the original image data into image data of a preset format, comprises:
performing brightness statistics and chromaticity statistics on a global or local region of interest of the original image data to obtain corresponding original statistics data; and/or
And performing black level compensation, nonlinear transformation and normalization processing on the original image data to obtain image data in a preset format.
12. The image processing method according to claim 7, wherein the performing image data processing on the image data after the acceleration calculation to obtain target image data includes:
and carrying out inverse normalization, fixed point and data truncation on the image data after the acceleration calculation to obtain target image data.
13. A chip comprising the image processing apparatus according to any one of claims 1 to 6.
14. An electronic device comprising an image processing apparatus according to any one of claims 1-6.
15. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the image processing method according to any one of claims 7-12.
CN202310044515.0A 2023-01-30 2023-01-30 Image processing device, method, chip, electronic equipment and storage medium Active CN115797228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310044515.0A CN115797228B (en) 2023-01-30 2023-01-30 Image processing device, method, chip, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115797228A CN115797228A (en) 2023-03-14
CN115797228B true CN115797228B (en) 2023-06-23

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant