WO2024098873A1 - Processing method, processing device and storage medium - Google Patents

Processing method, processing device and storage medium

Info

Publication number
WO2024098873A1
WO2024098873A1 (PCT/CN2023/113174)
Authority
WO
WIPO (PCT)
Prior art keywords
color component
component block
neural network
information
block
Prior art date
Application number
PCT/CN2023/113174
Other languages
English (en)
French (fr)
Inventor
Liu Yutian (刘雨田)
Original Assignee
Shenzhen Transsion Holdings Co., Ltd. (深圳传音控股股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Transsion Holdings Co., Ltd. (深圳传音控股股份有限公司)
Publication of WO2024098873A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/08 Learning methods

Definitions

  • the present application relates to the technical field of signal data processing, and in particular to a processing method, a processing device and a storage medium.
  • in the related art, a parameterized mathematical prediction model is constructed by manual design, and the optimal parameters of the mathematical prediction model are then calculated.
  • the mathematical prediction models designed in H.266/VVC are basically linear prediction models, and their linear characteristics limit the expressive power and prediction accuracy of the model; and/or, when predicting the chroma signal through a neural network prediction model, using a single neural network prediction model leads to low accuracy in predicting the color component signal.
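The linear cross-component models mentioned above can be sketched as follows. This is an illustrative Python sketch only: the function names are ours, and VVC's actual CCLM parameter derivation (which uses extreme luma sample pairs) differs from the least-squares fit used here.

```python
# Sketch of a linear cross-component predictor of the kind used in
# H.266/VVC: chroma is predicted from co-located reconstructed luma as
# pred_C = a * rec_L + b. The least-squares derivation below is
# illustrative; VVC's CCLM derives (a, b) differently.

def derive_linear_params(neigh_luma, neigh_chroma):
    """Least-squares fit of (a, b) from neighboring (luma, chroma) pairs."""
    n = len(neigh_luma)
    mean_l = sum(neigh_luma) / n
    mean_c = sum(neigh_chroma) / n
    cov = sum((l - mean_l) * (c - mean_c)
              for l, c in zip(neigh_luma, neigh_chroma))
    var = sum((l - mean_l) ** 2 for l in neigh_luma)
    a = cov / var if var else 0.0
    b = mean_c - a * mean_l
    return a, b

def predict_chroma(rec_luma_block, a, b):
    """Apply the linear model to each co-located luma sample."""
    return [[a * l + b for l in row] for row in rec_luma_block]
```

Because the model is a single affine map, it cannot capture nonlinear luma-chroma relationships — which is the limitation the neural network approach addresses.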
  • the present application provides a processing method, a processing device and a storage medium, aiming to solve the technical problem of how to improve the accuracy of color component signal prediction.
  • the present application provides a processing method, which can be applied to a processing device (such as an intelligent terminal or a server), comprising the following steps:
  • Step S1: Obtain or determine second color component information;
  • Step S2: Predict or obtain the corresponding first color component block according to the second color component information and/or the target neural network.
  • optionally, step S1 includes at least one of the following:
  • obtaining all data subsets, and training the neural network corresponding to each data subset to obtain the target neural network.
  • optionally, the method further includes:
  • taking the first color component block as a label, and taking at least one of the second color component information corresponding to the first color component block, the neighbor information corresponding to the first color component block, and the encoding parameters as a data element;
  • determining a data subset corresponding to the data element according to the mode selection module and the data element.
  • optionally, before predicting or obtaining the corresponding first color component block according to the target neural network, the method further includes:
  • Step S22: Input at least one of the neighbor information, the second color component information and the encoding parameters corresponding to the first color component block to be predicted into the mode selection module, so that the mode selection module determines the target neural network corresponding to the first color component block.
  • optionally, the method further includes:
  • Step S21: If a block adjacent to the first color component block does not contain first color component information, fill that adjacent block with first color component information according to a preset first color component filling rule, to obtain the neighbor information corresponding to the first color component block.
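One way to read the "preset first color component filling rule" is sketched below. The concrete rule chosen here (mid-level default when nothing is available, nearest-sample propagation otherwise) is our assumption; the patent does not fix the rule.

```python
def fill_neighbors(samples, bit_depth=8):
    """Fill unavailable neighbor samples (marked None).

    If no neighbor is available, use the mid-level value 1 << (bit_depth - 1);
    otherwise propagate the nearest previously seen available sample.
    This is one plausible 'preset filling rule', not the patent's.
    """
    mid = 1 << (bit_depth - 1)
    if all(s is None for s in samples):
        return [mid] * len(samples)
    out = list(samples)
    last = next(s for s in samples if s is not None)  # seed for leading gaps
    for i, s in enumerate(out):
        if s is None:
            out[i] = last
        else:
            last = s
    return out
```

After filling, every neighbor position carries a value, so the neighbor information can be fed to the mode selection module or the network without special-casing missing samples.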
  • optionally, predicting or obtaining the corresponding first color component block according to the target neural network includes at least one of the following:
  • using the prediction result of the target neural network as the predicted first color component block;
  • when there is more than one target neural network, obtaining or determining a prediction result of each target neural network, and performing at least one of the following:
  • determining the first color component block based on a function of all of the prediction results.
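"A function of all of the prediction results" could, for instance, be a (weighted) average over the per-network outputs. The following is an illustrative sketch, not the patent's prescribed combination:

```python
def combine_predictions(pred_blocks, weights=None):
    """Combine per-network predicted blocks into one block.

    A (weighted) average is used here as one concrete example of
    'a function of all of the prediction results'; the patent does not
    restrict the combining function to averaging.
    """
    n = len(pred_blocks)
    if weights is None:
        weights = [1.0 / n] * n  # plain average by default
    rows, cols = len(pred_blocks[0]), len(pred_blocks[0][0])
    return [[sum(w * blk[r][c] for w, blk in zip(weights, pred_blocks))
             for c in range(cols)]
            for r in range(rows)]
```

Weights could, for example, reflect each network's confidence or past rate-distortion performance; here they are simply caller-supplied.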
  • optionally, the method further comprises:
  • Step S4: Acquire or determine first color component information corresponding to the first color component block;
  • Step S5: Predict according to the first color component information and the target neural network, or according to the first color component information alone, to obtain the corresponding first color component block.
  • the present application also provides a processing method, which can be applied to a processing device (such as an intelligent terminal or a server), comprising the following steps:
  • Step S10: Obtain or determine at least one of second color component information, neighbor information, and encoding parameters corresponding to the first color component block;
  • Step S20: Take the first color component block as a label, and take at least one of the second color component information, the neighbor information and the encoding parameters as a data element;
  • Step S30: Determine a data subset corresponding to the data element according to the mode selection module and the data element, so as to be used for training a target neural network for color component signal prediction.
  • optionally, step S30 includes at least one of the following:
  • the mode selection module determines a data subset corresponding to the data element using at least one of the second color component information, the neighbor information, and the encoding parameter corresponding to the data element;
  • the data elements are classified into the corresponding data subsets according to preset data rules.
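A "preset data rule" for routing data elements to subsets might look like the following toy classifier. The thresholds and the choice of features (quantization parameter and block size) are illustrative assumptions, not taken from the patent:

```python
def select_subset(qp, block_w, block_h, qp_thresh=32, size_thresh=16):
    """Toy mode-selection rule: route a data element to one of four data
    subsets (each backing its own neural network) based on its quantization
    parameter and block size. Thresholds are illustrative assumptions.
    """
    large = max(block_w, block_h) >= size_thresh
    low_qp = qp < qp_thresh
    # Subsets 0..3: (low QP, large), (low QP, small),
    #               (high QP, large), (high QP, small).
    return (0 if large else 1) if low_qp else (2 if large else 3)
```

At training time the same rule partitions the data set; at prediction time it selects which trained network handles a given block, so training and inference stay consistent.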
  • optionally, after step S30, the method further includes:
  • Step S40: Acquire or determine all data subsets, and train the neural network corresponding to each data subset to obtain a target neural network.
  • optionally, the method further comprises:
  • predicting or obtaining the corresponding first color component block according to the trained target neural network.
  • the present application also provides a processing device, comprising:
  • an acquisition module, used for acquiring or determining second color component information;
  • a prediction module is used to predict or obtain the corresponding first color component block according to the second color component information and/or the target neural network.
  • the present application also provides a processing device, comprising:
  • a determination module, used to obtain or determine at least one of second color component information, neighbor information, and encoding parameters corresponding to the first color component block;
  • a data element module configured to use the first color component block as a label and at least one of the second color component information, the neighbor information and the encoding parameter as a data element;
  • a training module is used to determine a data subset corresponding to the data element according to the mode selection module and the data element, so as to train a target neural network for color component signal prediction.
  • the present application also provides a processing device, comprising: a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the steps of any of the above processing methods are implemented.
  • the present application also provides a storage medium, wherein the storage medium stores a computer program, and when the computer program is executed by a processor, the steps of any of the processing methods described above are implemented.
  • the processing method of the present application can be applied to a processing device, by acquiring or determining the second color component information to be predicted, and then predicting according to the second color component information and/or the target neural network to obtain a predicted first color component block.
  • the first color component block can be accurately predicted based on the second color component information and/or the target neural network corresponding to the first color component block to be predicted, so that the color component signal in the first color component block can be obtained, the accuracy of the color component signal prediction is improved, and the complexity of the color component signal prediction is reduced.
  • FIG1 is a schematic diagram of the hardware structure of a mobile terminal for implementing various embodiments of the present application.
  • FIG2 is a diagram of a communication network system architecture provided in an embodiment of the present application.
  • FIG3 is a schematic flow chart of a processing method according to the first embodiment
  • FIG4 is a schematic diagram of a YUV image in the processing method of the present application.
  • FIG5 is a schematic diagram of an image of a luma component in the processing method of the present application.
  • FIG6 is a schematic diagram of an image of a chroma blue component in the processing method of the present application.
  • FIG7 is a schematic diagram of an image of a chroma red component in the processing method of the present application.
  • FIG8 is a schematic diagram of an image after luma component segmentation in the processing method of the present application.
  • FIG9 is a schematic diagram of an image after chroma blue component segmentation in the processing method of the present application.
  • FIG10 is a schematic diagram of pixel data based on the luma component block of FIG8 in the processing method of the present application;
  • FIG11 is a schematic diagram of pixel data of a first color component block to be predicted in the processing method of the present application.
  • FIG12 is a schematic diagram of neighbor information of a first color component block to be predicted in the processing method of the present application.
  • FIG13 is a schematic flow chart of a processing method according to a second embodiment
  • FIG14 is a schematic diagram of the workflow of the mode selection module in the processing method of the present application.
  • FIG15 is a schematic diagram of the workflow of chroma component prediction in the processing method of the present application.
  • FIG16 is a schematic diagram of a process of selecting neural network 1 for prediction when predicting chroma components in the processing method of the present application;
  • FIG17 is a schematic diagram of a process of selecting a neural network 3 for prediction when predicting the chroma component in the processing method of the present application;
  • FIG18 is a schematic flow chart of a processing method according to a fourth embodiment.
  • FIG19 is a schematic flow chart of a processing method according to a fifth embodiment.
  • FIG20 is a schematic flow chart of a processing method according to a third embodiment
  • FIG21 is a schematic diagram of functional modules of a processing device provided in an embodiment of the present application.
  • FIG. 22 is a schematic diagram of functional modules of another processing device provided in an embodiment of the present application.
  • although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms; these terms are only used to distinguish information of the same type from each other.
  • for example, the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information.
  • the word “if” as used herein may be interpreted as “at the time of”, “when”, or “in response to determining”.
  • the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context indicates otherwise.
  • “comprising at least one of the following: A, B, C” means “any of the following: A; B; C; A and B; A and C; B and C; A and B and C”, and for another example, “A, B or C” or “A, B and/or C” means “any of the following: A; B; C; A and B; A and C; B and C; A and B and C”.
  • An exception to this definition will only occur when a combination of elements, functions, steps or operations are inherently mutually exclusive in some manner.
  • depending on the context, the word “if” may be interpreted as “at the time of”, “when”, “in response to determining” or “in response to detecting”.
  • the phrases “if it is determined” or “if (stated condition or event) is detected” may be interpreted as “when it is determined” or “in response to determining” or “when detecting (stated condition or event)” or “in response to detecting (stated condition or event)", depending on the context.
  • step codes such as S10 and S20 are used only to describe the corresponding content clearly and concisely, and do not constitute a substantial restriction on the execution order; for example, S20 may be executed before S10, and all such variations fall within the scope of protection of this application.
  • the suffixes “module”, “component” or “unit” used to denote elements are only used to facilitate the description of the present application and have no specific meaning in themselves; therefore, “module”, “component” and “unit” may be used interchangeably.
  • the processing device in this application may be a smart terminal or a server, etc.; which one is meant needs to be determined from the context.
  • the smart terminal can be implemented in various forms; for example, it can include mobile terminals such as mobile phones, tablet computers, laptops, PDAs, portable media players (PMPs), navigation devices, wearable devices, smart bracelets and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
  • the subsequent description will be explained by taking the mobile terminal as an example. It will be understood by those skilled in the art that, in addition to components specifically used for mobile purposes, the structure according to the implementation of the present application can also be applied to fixed-type terminals.
  • FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal for implementing various embodiments of the present application.
  • the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, A/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111.
  • the radio frequency unit 101 can be used for receiving and sending signals during information transmission or calls. Specifically, after receiving the downlink information of the base station, it is sent to the processor 110 for processing; in addition, the uplink data is sent to the base station.
  • the radio frequency unit 101 includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, etc.
  • the radio frequency unit 101 can also communicate with the network and other devices through wireless communication.
  • the above-mentioned wireless communications may use any communication standard or protocol, including but not limited to GSM (Global System of Mobile communication), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), TDD-LTE (Time Division Duplexing-Long Term Evolution) and 5G, etc.
  • WiFi is a short-range wireless transmission technology.
  • the mobile terminal can help users send and receive emails, browse web pages, and access streaming media through the WiFi module 102, which provides users with wireless broadband Internet access.
  • although FIG1 shows the WiFi module 102, it is understandable that it is not a necessary component of the mobile terminal and can be omitted as needed without changing the essence of the invention.
  • the audio output unit 103 can convert the audio data received by the RF unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output it as sound when the mobile terminal 100 is in a call signal reception mode, a talk mode, a recording mode, a voice recognition mode, a broadcast reception mode, etc. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, etc.
  • the A/V input unit 104 is used to receive audio or video signals.
  • the A/V input unit 104 may include a graphics processor (GPU) 1041 and a microphone 1042, and the graphics processor 1041 processes the image data of a static picture or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the processed image frame can be displayed on the display unit 106.
  • the image frame processed by the graphics processor 1041 can be stored in the memory 109 (or other storage medium) or sent via the radio frequency unit 101 or the WiFi module 102.
  • the microphone 1042 can receive sound (audio data) in operation modes such as the telephone call mode, the recording mode and the voice recognition mode, and can process such sound into audio data.
  • in the case of the telephone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 for output.
  • the microphone 1042 can implement various types of noise elimination (or suppression) algorithms to eliminate (or suppress) noise or interference generated in the process of receiving and sending audio signals.
  • the mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light
  • the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear.
  • as one kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when stationary.
  • the mobile terminal can also be configured with other sensors such as fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers and infrared sensors, which will not be described in detail here.
  • the display unit 106 is used to display information input by the user or information provided to the user.
  • the display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • the user input unit 107 can be used to receive input digital or character information, and to generate key signal input related to the user settings and function control of the mobile terminal.
  • the user input unit 107 may include a touch panel 1071 and other input devices 1072.
  • the touch panel 1071 also known as a touch screen, can collect the user's touch operation on or near it (such as the user's operation on the touch panel 1071 or near the touch panel 1071 using any suitable object or accessory such as a finger, stylus, etc.), and drive the corresponding connection device according to a pre-set program.
  • the touch panel 1071 may include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into the touch point coordinates, and then sends it to the processor 110, and can receive and execute the command sent by the processor 110.
  • the touch panel 1071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the user input unit 107 may further include other input devices 1072.
  • the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, a function key (such as a volume control key, a switch key, etc.), a trackball, a mouse, a joystick, etc., which are not specifically limited here.
  • the touch panel 1071 may cover the display panel 1061.
  • the touch panel 1071 detects a touch operation on or near it, it is transmitted to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event.
  • although in FIG1 the touch panel 1071 and the display panel 1061 are shown as two independent components to implement the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 can be integrated to implement the input and output functions, which is not limited here.
  • the interface unit 108 serves as an interface through which at least one external device can be connected to the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, a headphone port, etc.
  • the interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and an external device.
  • the memory 109 can be used to store software programs and various data.
  • the memory 109 can mainly include a program storage area and a data storage area.
  • the program storage area can store an operating system, an application required for at least one function (such as a sound playback function, an image playback function, etc.), etc.;
  • the data storage area can store data created according to the use of the mobile phone (such as audio data, a phone book, etc.), etc.
  • the memory 109 can include a high-speed random access memory, and can also include a non-volatile memory, such as at least one disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • the processor 110 is the control center of the mobile terminal. It uses various interfaces and lines to connect various parts of the entire mobile terminal. It executes various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 109, and calling data stored in the memory 109, so as to monitor the mobile terminal as a whole.
  • the processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor.
  • the application processor mainly processes the operating system, user interface, and application programs
  • the modem processor mainly processes wireless communications. It is understandable that the above-mentioned modem processor may not be integrated into the processor 110.
  • the mobile terminal 100 may also include a power supply 111 (such as a battery) for supplying power to various components.
  • a power supply 111 may be logically connected to the processor 110 through a power management system, so that the power management system can manage charging, discharging, and power consumption.
  • the mobile terminal 100 may also include a Bluetooth module, etc., which will not be described in detail herein.
  • the communication network system is an LTE system of universal mobile communication technology.
  • the LTE system includes UE (User Equipment) 201, E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, EPC (Evolved Packet Core) 203 and the operator's IP service 204, which are sequentially connected in communication.
  • UE 201 can be the above-mentioned terminal 100, which will not be repeated here.
  • E-UTRAN 202 includes eNodeB 2021 and other eNodeBs 2022, etc.
  • eNodeB 2021 may be connected to other eNodeBs 2022 via a backhaul (e.g., an X2 interface); eNodeB 2021 is connected to EPC 203 and may provide UE 201 with access to EPC 203.
  • EPC 203 may include MME (Mobility Management Entity) 2031, HSS (Home Subscriber Server) 2032, other MMEs 2033, SGW (Serving Gateway) 2034, PGW (PDN Gateway) 2035 and PCRF (Policy and Charging Rules Function) 2036.
  • MME 2031 is a control node that processes signaling between UE 201 and EPC 203, providing bearer and connection management.
  • HSS 2032 is used to provide registers to manage functions such as the home location register (not shown in the figure), and to store user-specific information such as service features and data rates; all user data can be sent through SGW 2034.
  • PGW 2035 can provide IP address allocation and other functions for UE 201.
  • PCRF 2036 is the policy and charging control policy decision point for service data flows and IP bearer resources; it selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown in the figure).
  • IP service 204 may include the Internet, intranet, IMS (IP Multimedia Subsystem) or other IP services.
  • FIG. 3 is a flowchart of the first embodiment of the processing method of the present application.
  • the processing method of the present application can be applied to a processing device (such as a smart terminal or a server), including:
  • Step S1: Obtain or determine second color component information.
  • in this embodiment, the processing device first determines a frame of image to be predicted, obtains or determines a first color component block to be predicted in the frame of image, and obtains or determines the second color component information corresponding to the first color component block.
  • the processing device may be an intelligent terminal, such as a mobile phone, a computer, etc., or a server, or a cloud server.
  • the processing device may store various images and videos in advance, and may select an image to be predicted from among them as the frame of image, or extract a frame of image from the video sequence of a video.
  • the processing device receives an image or video input by a user, and extracts a frame of image from the image or video for prediction.
  • the processing device receives an image or video sent by other network devices, and extracts a frame of image from the image or video for prediction.
  • the processing device establishes a communication connection with a network device in the network side of the mobile communication system in advance, so that the network device can send the image or video to the terminal device through the communication connection, and the terminal device receives the image or video.
  • a frame of image can be in YUV format, and there are Y component image, U component image and V component image in the YUV image, that is, there is a brightness component image and two chrominance component images.
  • the Y component image is a luma component image.
  • the U component image is a chroma blue component image.
  • the V component image is a chroma red component image.
  • the sampling ratio between the Y component image, the U component image and the V component image can be 4:2:0, or another sampling format, which is not limited here. The YUV image is then segmented to obtain at least one component block, that is, at least one Y component block, at least one U component block and at least one V component block.
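The segmentation described above can be sketched as follows for 4:2:0 sampling, where the chroma planes are half-resolution in each dimension. The block size and the nested-list plane representation are illustrative choices, not the patent's:

```python
def split_plane(plane, block):
    """Split a 2-D plane into block x block sub-blocks.

    Plane dimensions are assumed to be exact multiples of the block size.
    """
    h, w = len(plane), len(plane[0])
    return [[[row[x:x + block] for row in plane[y:y + block]]
             for x in range(0, w, block)]
            for y in range(0, h, block)]

def split_yuv420(y, u, v, luma_block=16):
    """Segment a 4:2:0 YUV image into co-located component blocks.

    With 4:2:0 sampling the chroma planes are half-resolution in each
    dimension, so a luma_block x luma_block Y block co-locates with a
    half-size U/V block at the same grid position.
    """
    c = luma_block // 2
    return split_plane(y, luma_block), split_plane(u, c), split_plane(v, c)
```

Because the grids are aligned, the block at grid index (i, j) in each returned structure covers the same image region across Y, U and V, which is what makes "the second color component block corresponding to the first color component block" well defined.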
  • if the first color component block to be predicted is a U component block, the second color component information is luma information; if it is a V component block, the second color component information is likewise luma information.
  • the first color component block can be the first color component block to be predicted.
  • the first color component block to be predicted is a U component block, as shown in FIG4, there is a frame of image I, and image I is in YUV format, then the Y component image of image I is shown in FIG5, the U component image is shown in FIG6, and the V component image is shown in FIG7.
  • the first color component block to be predicted is a V component block
  • its operation method is the same as that of the aforementioned U component block.
  • the second color component information may be chrominance information, such as chrominance information corresponding to a V component block, or chrominance information corresponding to a U component block.
  • for example, the second color component block may be the U component block at the position corresponding to the first color component block to be predicted in the U component image, and the chrominance information in that U component block is taken as the second color component information.
  • similarly, the second color component block may be the V component block at the corresponding position in the V component image, and the chrominance information in that V component block is taken as the second color component information.
  • optionally, the processing method of the present application may further include at least one of the following:
  • Method 1: Obtain or determine second color component information in a second color component block corresponding to the first color component block;
  • the original YUV image can be determined first, and then the first color component block to be predicted in the YUV image can be obtained or determined, and then the second color component block can be determined.
  • the first color component block and the second color component block are on different component images, and the corresponding position of the first color component block in the original YUV image is consistent with the corresponding position of the second color component block.
  • the second color component block can be a luminance component block corresponding to the chrominance component block, such as a Y component block, and the luminance information in the corresponding luminance component block is used as the second color component information.
  • the second color component block can be a chrominance component block corresponding to the luminance component block, such as a U component block and/or a V component block, and the chrominance information in the corresponding chrominance component block is used as the second color component information.
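The co-location of the second color component block described above can be sketched as follows. This assumes YUV 4:2:0 subsampling (a factor of two per dimension); the function and variable names are illustrative, not from the patent:

```python
import numpy as np

def colocated_luma_block(y_plane, cx, cy, cw, ch, sub=2):
    # In 4:2:0, a chroma block at (cx, cy) of size cw x ch co-locates
    # with the luma block at (cx*sub, cy*sub) of size (cw*sub) x (ch*sub).
    x, y = cx * sub, cy * sub
    return y_plane[y:y + ch * sub, x:x + cw * sub]

# A 16x16 luma plane; the chroma block at (2, 2) of size 4x4
# co-locates with the 8x8 luma region whose top-left corner is (4, 4).
y_plane = np.arange(16 * 16).reshape(16, 16)
block = colocated_luma_block(y_plane, 2, 2, 4, 4)
```

For a 4:4:4 format the subsampling factor would simply be 1, so the two blocks would share the same coordinates and size.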
  • Method 2 Obtain all data subsets, and train the neural network corresponding to each data subset to obtain the target neural network.
  • the target neural network can be trained first, and then subsequent predictions can be performed based on the target neural network. Before training the target neural network, it is necessary to first construct a data subset corresponding to each neural network, and then train the neural network corresponding to each data subset to obtain the target neural network.
  • a training image to be trained for a neural network can be obtained in a network device or its own storage area, or a video sequence can be obtained, and each frame of the video sequence can be used as a training image.
• the luma (luminance) information corresponding to the chroma (chrominance) signal to be predicted, together with the neighbor information and encoding parameters of the chroma signal to be predicted, can be used as one piece of data, and the chroma block to be predicted is used as the target label of that data, forming one data element in the data set.
• the mode selection module divides each group of data elements in the data set into the data subset corresponding to its optimal neural network.
  • each group of data elements can be divided into multiple data subsets or into one data subset, which is not limited here.
  • high-detail data elements with smaller quantization parameters are divided into data subsets corresponding to a neural network with a larger receptive field.
  • each neural network can be trained to obtain a target neural network.
  • the number of target neural networks in this embodiment is at least one.
  • the data elements in the data subset are input into the neural network for training until the trained neural network converges or achieves the expected effect.
  • the training method of the neural network can be carried out by the gradient descent method. For example, the mean square error or cross entropy is used as the loss function. Each gradient descent training is to minimize the loss. After multiple trainings, the ideal accuracy is achieved and the training ends. Other methods can also be used for training, which are not limited here.
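As a minimal, illustrative sketch of this procedure (a single linear layer standing in for the patent's neural networks, with synthetic data), gradient descent on a mean-squared-error loss looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 64 flattened 4x4 "luma" inputs -> 2x2 "chroma" targets.
X = rng.normal(size=(64, 16))
W_true = rng.normal(size=(16, 4))
Y = X @ W_true

W = np.zeros((16, 4))          # predictor weights to be learned
lr = 0.01
for _ in range(2000):          # each step descends the MSE loss gradient
    grad = 2.0 / len(X) * X.T @ (X @ W - Y)
    W -= lr * grad

mse = float(np.mean((X @ W - Y) ** 2))   # training loss after descent
```

Cross-entropy would replace the squared error for classification-style outputs, and a validation check would stand in for "achieves the expected effect".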
  • the target neural network corresponding to the first color component block can be screened out from each target neural network through the mode selection module.
  • the target neural network corresponding to the first color component block can be screened according to the screening mode set in advance to obtain the target neural network corresponding to the first color component block, for example, the target neural network with the smallest index parameter is selected as the target neural network corresponding to the first color component block.
  • the index parameter can be rate-distortion, etc.
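Screening by the smallest index parameter can be sketched as a rate-distortion cost comparison; the λ value and the candidate tuples below are illustrative assumptions, not values from the patent:

```python
def select_by_rd_cost(candidates, lam=0.5):
    # candidates: (name, distortion D, rate R in bits); the network with
    # the smallest rate-distortion cost D + lambda * R is kept.
    return min(candidates, key=lambda c: c[1] + lam * c[2])[0]

candidates = [("net_a", 10.0, 40.0),
              ("net_b", 14.0, 20.0),
              ("net_c", 9.0, 60.0)]
best = select_by_rd_cost(candidates)
```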
  • Step S2 predict or obtain the corresponding first color component block according to the second color component information and/or the target neural network.
  • prediction can be performed directly based on the second color component information to obtain the predicted first color component block.
  • the second color component information is brightness information and the first color component block to be predicted is a chrominance component block (such as a U component block or a V component block)
  • the second color component information can be input into a pre-set model for training to obtain the predicted first color component block, and then the color component signal (such as color information) in the first color component block can be obtained.
• a comparison table can be set in advance, in which at least one piece of color component information and a corresponding color component block are set; the first color component block is obtained by querying the comparison table according to the second color component information, and the block obtained by the query is used as the predicted first color component block.
  • prediction can be performed directly according to the target neural network to obtain the predicted first color component block.
• prediction parameters input by the user or another terminal can be obtained and used for the prediction.
  • the first color component information and/or the second color component information are input into the target neural network for model training, and the predicted first color component block is output.
  • the prediction parameters may include parameter information related to the first color component block to be predicted, such as the first color component block adjacent to the first color component block to be predicted, and optionally, adjacent includes at least one of adjacent to the left, adjacent to the upper side, adjacent to the upper left, adjacent to the lower left, and adjacent to the upper right.
• the target neural network may be a nonlinear algorithm or module, such as matrix weighted intra prediction (Matrix Weighted Intra Prediction, MIP), and may include at least one of the following neural networks: convolutional neural network (CNN), residual neural network (ResNet), long short-term memory network (LSTM), recurrent neural network (RNN), three-dimensional convolutional neural network (3D-CNN), fully connected neural network (FCNN), etc.
  • the target neural network corresponding to the first color component block to be predicted can be determined in at least one target neural network through the mode selection module, and then the second color component information can be input into the target neural network for prediction to obtain the predicted first color component block.
  • the second color component information is brightness component information
  • the brightness component information is input into the trained target neural network for prediction, and the color component information is output and used as the color component signal of the predicted first color component block.
  • the second color component information is chrominance component information
  • the chrominance component information is input into the trained target neural network for prediction, and the color component information is output and used as the color component signal of the predicted first color component block.
  • the processing method of the present application may further include:
• before predicting the first color component block according to the target neural network, it is also necessary to construct data subsets, so that model training can be performed on the preset neural networks according to the data subsets to obtain the trained target neural network.
• at least one of the second color component information corresponding to the first color component block (such as the luma information corresponding to the chroma signal), the neighbor information (such as the neighbor information of the chroma signal), and the encoding parameters is obtained or determined. The first color component block is used as a label, and at least one of the second color component information, the neighbor information, and the encoding parameters corresponding to the first color component block is used as a data element. The data element is input into the mode selection module, which selects the data subset corresponding to the data element from the preset multiple data subsets and stores the data element in that subset.
• when determining the data elements, the luma component can be reconstructed through the intra-prediction module to obtain the luma components of all the adopted video sequences, and the data subsets can be constructed according to these luma components.
• the process is as follows: for any luma component L, it is divided into N luma blocks of size n×m, and the i-th luma block is denoted ℓi; then:
• the chroma component C corresponding to the luma component L is likewise divided into N chroma blocks of corresponding size, and the i-th chroma block is denoted ci, where 1 ≤ i ≤ N.
• the data element corresponding to ci may include at least one of ℓi and the neighbor information hi.
  • input encoding parameters such as bit rate and quantization parameters may also be obtained, and the encoding parameters may also be used as one of the data elements.
• in this way, N data elements (ℓi, hi) can be generated.
• taking (ℓi, hi) together with the encoding parameters as the data elements and ci as the data label, N data pairs (ℓi, hi, ci) can be generated; the data pairs are then aggregated into a data set, and each data element in the data set is divided into its corresponding data subset by the mode selection module.
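The construction of the N data pairs (ℓi, hi, ci) might be sketched as below, under simplifying assumptions: 4:2:0 planes, 2×2 chroma blocks, and the row of samples directly above each chroma block as its neighbor information hi (filled with 128 when unavailable):

```python
import numpy as np

def build_dataset(y_plane, c_plane, bs=2, sub=2):
    # Split the chroma plane into bs x bs blocks c_i and pair each with
    # its co-located luma block l_i and the neighbor row h_i above it.
    pairs = []
    h_dim, w_dim = c_plane.shape
    for by in range(0, h_dim, bs):
        for bx in range(0, w_dim, bs):
            c_i = c_plane[by:by + bs, bx:bx + bs]
            l_i = y_plane[by * sub:(by + bs) * sub, bx * sub:(bx + bs) * sub]
            if by > 0:
                h_i = c_plane[by - 1, bx:bx + bs].copy()
            else:
                h_i = np.full(bs, 128)   # no row above: fill with 128
            pairs.append((l_i, h_i, c_i))
    return pairs

y = np.zeros((8, 8))
c = np.arange(16).reshape(4, 4)
data = build_dataset(y, c)   # N = 4 pairs for a 4x4 chroma plane
```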
  • the predicted first color component block is obtained, thereby achieving the acquisition of the color component signal in the first color component block, improving the accuracy of the color component signal prediction, and reducing the complexity of the color component signal prediction.
  • FIG. 13 is a schematic diagram of a specific flow chart before step S2 in the first embodiment of the processing method of the present application.
  • an embodiment of step S2 of the processing method of the present application may include:
  • Predictions are made according to the target neural network to obtain or predict the corresponding first color component block.
• after the processing device obtains the trained target neural network, it can first screen out the target neural network corresponding to the first color component block from each target neural network through the mode selection module, and then perform prediction based on that target neural network to obtain the predicted first color component block.
  • the target neural network can be a nonlinear algorithm or module, such as matrix weighted intra-frame prediction technology
  • the target neural network can include at least one of the following neural networks, such as: convolutional neural network, residual network, long short-term memory artificial neural network, recurrent neural network, three-dimensional convolutional neural network, fully connected neural network, etc.
  • prediction can be performed by at least one target neural network, and the prediction results can be calculated by weighted average calculation or other calculation methods to obtain the first color component block after prediction.
  • the first color component block may be a brightness component block, such as a Y component block; or may be a chrominance component block, such as a U component block or a V component block.
  • the method may further include:
  • Step S22 input at least one of the neighbor information, the second color component information and the encoding parameters corresponding to the first color component block into the mode selection module, so that the mode selection module determines the target neural network corresponding to the first color component block.
  • the mode selection module determines the target neural network corresponding to the first color component block according to at least one of the neighbor information, second color component information and encoding parameters corresponding to the first color component block in at least one trained neural network.
• the mode selection module in this embodiment can perform data classification processing, classifying the received neighbor information, second color component information and encoding parameters corresponding to the first color component block to the corresponding target neural network, so that prediction is performed through that target neural network to obtain the predicted first color component block.
  • the neighbor information adjacent to the first color component block to be predicted in the chroma component C is determined and input into the mode selection module.
• the mode selection module makes a decision on the received encoding parameters, the neighbor information and the luma component block ℓi to output the signal category, determines the target neural network according to the signal category, and then performs prediction according to the target neural network to obtain the predicted first color component block.
  • the neighbor information includes: first color component information in a first color component block adjacent to the first color component block to be predicted, and optionally, the first color component block adjacent to the first color component block to be predicted includes at least one of a first color component block adjacent to the upper side of the first color component block to be predicted, a first color component block adjacent to the left side of the first color component block to be predicted, and a first color component block located on the upper left side of the first color component block to be predicted.
  • the first color component block to be predicted is a U component block to be predicted
  • its neighbor information may be a known U component block adjacent to the U component block to be predicted, and if the adjacent known U component block does not have chrominance information, filling processing may be performed.
  • the encoding parameters include quantization parameters, and the neural network corresponding to the quantization parameters is used as the target neural network.
  • the second color component information may be acquired in the manner described in the first embodiment.
  • the mode selection module can be a selector, and the selector can be a traditional algorithm or a specific neural network model, which is not limited here.
  • the selector reads information data from the encoded video data stream, and the data indicates the target neural network.
  • the selector uses the mean square error as the selector decision condition, assuming that the mean square error of the pixel values of the input luma block is very small (that is, the pixel values are very close), it can be considered that the mean square error of the chroma block to be predicted is also relatively small, so a neural network that is good at such prediction can be selected from at least one trained neural network as the target neural network for prediction.
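A toy version of such a selector (the threshold and the two-network routing are invented for illustration) measures how flat the input luma block is and routes accordingly:

```python
import numpy as np

def select_network(luma_block, threshold=25.0):
    # Mean squared deviation of the pixels about their mean: a flat
    # block (small spread) is routed to network 0, a detailed one to 1.
    spread = float(np.mean((luma_block - luma_block.mean()) ** 2))
    return 0 if spread < threshold else 1

flat = np.full((4, 4), 120.0)
detailed = np.array([[0.0, 255.0], [255.0, 0.0]])
```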
  • mode selection module selects at least one neural network
  • prediction can be performed by the at least one neural network, and relevant calculations can be performed on the predicted results to obtain the final predicted first color component block.
• in this embodiment, at least one mode selection module may be provided, and the mode selection modules may be run in parallel to improve prediction efficiency.
• a luma block ℓi of a luma component L, a chroma component C, a block to be predicted ci in the chroma component C, and the neighbor information hi of the block to be predicted ci are included.
• a luma block ℓi corresponding to the block to be predicted ci is selected in the luma component L, the neighbor information hi adjacent to the block to be predicted ci is selected in the chroma component C, and the selected luma block ℓi and neighbor information hi, as well as at least one of the encoding parameters, are input into the mode selection module.
• the mode selection module can select a target neural network F from the various neural networks according to the input luma block ℓi, the neighbor information hi, and at least one of the encoding parameters; the target neural network F is then used to obtain the predicted first color component block, that is, ĉi = F(ℓi, hi).
  • the mode selection module may follow the screening rules set in advance, such as selecting a neural network structure with a deeper depth and a larger receptive field as the target neural network for high-detail chroma prediction with a smaller quantization parameter.
• before step S22, the following steps may also be included:
  • Step S21 If the first color component block adjacent to the first color component block does not contain the first color component information, the first color component block adjacent to the first color component block is filled with the first color component information according to a preset first color component filling rule to obtain neighbor information corresponding to the first color component block.
  • the first color component block adjacent to the first color component block includes at least one of a first color component block adjacent to the upper side of the first color component block, a first color component block adjacent to the left side of the first color component block, and a first color component block located on the upper left side of the first color component block.
• it is determined whether the first color component block adjacent to the upper side of the first color component block has the first color component information. If it does not, the first color component block adjacent to the upper side can be filled with the first color component information according to the preset first color component filling rule, so as to obtain the neighbor information corresponding to the first color component block.
• the first color component filling rule can fill according to a fixed value set in advance, such as the value 128, or perform an average calculation based on the existing first color component information and fill with the result.
• the first color component filling rule can also operate on reference pixels: if no reference pixels are available, all reference pixels are filled with half of the maximum pixel value; if all reference pixels are available, the available reference pixels are copied for filling; when some reference pixels are available and the lower-left reference pixel is available, filling proceeds upward and rightward from the lower-left reference pixel using the nearest available reference pixel; when some reference pixels are available but the lower-left reference pixel is not, pixels are searched rightward from the lower-left position until the first available reference pixel is found, the preceding pixels are filled with the value of that reference pixel, and the subsequent pixels are then traversed and filled with the nearest available pixel.
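The per-pixel filling rule can be sketched over a one-dimensional array of reference samples scanned from the lower-left sample, with None marking unavailable positions (8-bit pixels assumed, so half of the maximum value is taken as 128):

```python
def fill_reference_pixels(refs, max_val=255):
    # refs: reference samples scanned from the lower-left position,
    # with None for unavailable samples.
    if all(r is None for r in refs):
        return [(max_val + 1) // 2] * len(refs)  # all unavailable: half max
    out = list(refs)
    first = next(i for i, r in enumerate(out) if r is not None)
    for i in range(first):           # gaps before the first available
        out[i] = out[first]          # sample take its value
    for i in range(first + 1, len(out)):
        if out[i] is None:           # later gaps take the nearest
            out[i] = out[i - 1]      # previously filled value
    return out

filled = fill_reference_pixels([None, None, 90, None, 100])
```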
  • the specific method is not limited here.
• for the first color component block adjacent to the left side of the first color component block, it is determined whether it has the first color component information. If it does not, it can be filled with the first color component information according to the preset first color component filling rule to obtain the neighbor information corresponding to the first color component block.
• similarly, the first color component information can be filled in the first color component block adjacent to the upper left side of the first color component block according to the preset first color component filling rule to obtain the neighbor information corresponding to the first color component block.
  • predicting or obtaining the corresponding first color component block according to the target neural network may include at least one of the following:
  • Method 1 performing prediction according to the target neural network to obtain a third color component signal, predicting the first color component signal according to the third color component signal, and determining the predicted first color component block according to the first color component signal;
  • the prediction result is used as the third color component signal.
  • the second color component signal is the brightness signal corresponding to the Y component block
  • the chrominance signal corresponding to the V component block can be predicted by the chrominance signal corresponding to the U component block
  • the predicted chrominance signal corresponding to the V component block is used as the predicted first color component signal
• the first color component block with the first color component signal is determined, and the position of the first color component block corresponds to the position of the second color component block corresponding to the second color component signal
  • the first color component block with the first color component signal can be used as the predicted first color component block.
  • the chrominance signal corresponding to the U component block can also be predicted by the target neural network, and the predicted chrominance signal corresponding to the U component block is used as the third color component signal, and then the chrominance signal corresponding to the V component block is predicted according to the chrominance signal corresponding to the U component block, and the predicted chrominance signal corresponding to the V component block is used as the predicted first color component block.
  • the luminance signal corresponding to the Y component can be predicted by the target neural network and used as the third color component signal, and then the chrominance signal corresponding to the U component block and/or the V component block can be predicted based on the luminance signal corresponding to the Y component, and the predicted chrominance signal corresponding to the U component block and/or the V component block can be used as the first color component signal.
  • a mapping table between the third color component signal and the first color component signal can be set in advance, and then prediction can be made based on the mapping table, or prediction can be made through a neural network model, which is not limited here.
  • Method 2 performing prediction according to the target neural network corresponding to the first color component signal to obtain the first color component signal, and determining the first color component block according to the first color component signal;
  • prediction can be directly performed based on the target neural network corresponding to the first color component signal to obtain the first color component signal, and then the predicted first color component block is determined based on the first color component signal. At this time, the predicted first color component block has the first color component signal.
• Method 3 inputting the second color component information and the neighbor information into the target neural network to obtain or predict the corresponding first color component block;
  • the second color component information and neighbor information corresponding to the first color component block to be predicted obtained in advance can be input into the target neural network for training prediction, and the predicted first color component block can be determined based on the output result.
• a loss function set in advance can be used for training and prediction. For example, when predicting a chroma block ci:
• the second color component information is the luma block ℓi;
• the neighbor information hi of the chroma block ci and the luma block ℓi are input into the target neural network F for prediction, and the output is the predicted chroma block ĉi. At this time, the chroma component predicted from the luma component can be expressed as: ĉi = F(ℓi, hi).
• Method 4 inputting the second color component information, the neighbor information and the encoding parameter into the target neural network to obtain or predict the corresponding first color component block;
  • the target neural network after determining the target neural network, the second color component information and neighbor information corresponding to the first color component block to be predicted, as well as the encoding parameters (such as bit rate) obtained in advance can be input into the target neural network for training prediction, and the predicted first color component block can be determined based on the output result.
  • the target neural network can use a pre-set loss function for training prediction.
• for example, when predicting a chroma block ci:
• the second color component information is the luma block ℓi;
• the neighbor information hi of the chroma block ci, the luma block ℓi, and the encoding parameter p are input into the target neural network F for prediction, and the output is the predicted chroma block ĉi. At this time, the chroma component predicted from the luma component can be expressed as: ĉi = F(ℓi, hi, p).
• the mode selection module may make a decision based on all the received information to output the signal category, that is, to select a neural network. If neural network 1 is selected from among neural networks 1 through k, prediction is performed based on function F1 of neural network 1 to obtain the predicted chroma component, and the prediction result is output, such as 110, 110, 108, 108.
  • Method 5 If there is only one target neural network, the prediction result of the target neural network is used as the first color component block;
  • prediction can be made directly based on the target neural network, such as inputting the neighbor information and the second color component information corresponding to the first color component block to be predicted into the target neural network for prediction, or inputting the neighbor information, the second color component information and the encoding parameters corresponding to the first color component block to be predicted into the target neural network for prediction, and directly determining the predicted first color component block based on the prediction result.
• when the block to be predicted is determined in chroma component C, the luma component block corresponding to the block to be predicted in luma component L, the neighbor information adjacent to the block to be predicted in chroma component C, and the encoding parameters can be input into the mode selection module.
  • QP is a quantization parameter, which is the sequence number of the quantization step.
  • the mode selection module can make a decision based on all the information received to output the signal category, that is, select a neural network.
• if neural network 3 is selected from among neural networks 1 through k, prediction is performed according to the function of neural network 3 to obtain the predicted chroma component, and the prediction result is output, such as the 8 pixels 110, 109, 109, 109, 108, 108, 108, 110.
  • Method 6 If there is at least one target neural network, obtaining or determining a prediction result of each target neural network, and including at least one of the following:
  • prediction can be performed through all target neural networks to obtain prediction results of each target neural network.
  • the neighbor information and the second color component information corresponding to the first color component block to be predicted are input into each target neural network for prediction to obtain corresponding prediction results.
  • the second color component information and the encoding parameters are input into each target neural network for prediction to obtain a corresponding prediction result.
  • all the prediction results are summarized to obtain a predicted first color component block.
  • corresponding mathematical operations may be performed, such as weighted average calculation, to obtain a predicted first color component block.
  • the best prediction result can be directly selected from all the prediction results as the predicted first color component block.
• when there are multiple best prediction results, one can be randomly selected as the predicted first color component block, or the best prediction results can be combined according to a certain function to obtain the predicted first color component block.
  • a function set in advance can be used to perform calculations to determine the predicted first color component block.
  • the function can be a neural network model or a traditional mathematical algorithm.
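Combining the prediction results of several target neural networks by weighted average, as described above, might look like the following (the weights are illustrative):

```python
import numpy as np

def combine_predictions(preds, weights=None):
    # preds: list of equally shaped predicted blocks; returns their
    # (optionally weighted) average as the final predicted block.
    preds = np.stack(preds)
    if weights is None:
        weights = np.full(len(preds), 1.0 / len(preds))
    weights = np.asarray(weights, dtype=float)
    return np.tensordot(weights, preds, axes=1)

p1 = np.array([[100.0, 102.0], [104.0, 106.0]])
p2 = np.array([[110.0, 112.0], [114.0, 116.0]])
blended = combine_predictions([p1, p2], weights=[0.75, 0.25])
```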
  • the target neural network is determined by inputting the obtained neighbor information, second color component information and at least one of the encoding parameters corresponding to the first color component block to be predicted into the mode selection module, and then prediction is performed based on the target neural network to obtain the predicted first color component block.
  • the target neural network can use a variety of methods to perform prediction, thereby improving the efficiency of predicting the first color component block and improving the accuracy of color component signal prediction.
  • FIG. 20 is a flowchart of the third embodiment of the processing method of the present application.
  • the processing method of the present application can be applied to a processing device (such as a server or a smart terminal), including the following steps:
  • Step S4 Acquire or determine first color component information corresponding to the first color component block
  • Step S5 Predict according to the first color component information and the target neural network, or predict according to the first color component information to obtain or predict the corresponding first color component block.
  • the processing device first determines a frame of image to be predicted, and obtains or determines a first color component block to be predicted in the frame of image, and first color component information corresponding to the first color component block.
  • the first color component block can be a first color component block to be predicted.
• if the first color component block is a chrominance component block, the first color component information is chrominance information.
• if the first color component block is a luminance component block, the first color component information is luminance information.
  • a target neural network corresponding to the first color component block is screened out from each target neural network through a mode selection module.
• the first color component information can be chrominance information and the first color component block to be predicted is a chrominance component block (such as a U component block or a V component block)
  • the first color component information can be input into a pre-set model for training to obtain the predicted first color component block, and then the color component signal (such as color information) in the first color component block can be obtained.
• a comparison table can be set in advance, in which at least one piece of color component information and a corresponding color component block are set; the first color component block is obtained by querying the comparison table according to the first color component information, and the block obtained by the query is used as the predicted first color component block.
  • the target neural network corresponding to the first color component block to be predicted can be determined in at least one target neural network through the mode selection module, and then the first color component information can be input into the target neural network for prediction to obtain the predicted first color component block.
  • a predicted first color component block is obtained, thereby achieving acquisition of the color component signal in the first color component block, improving the accuracy of the color component signal prediction, and reducing the complexity of the color component signal prediction.
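  • The prediction flow above can be sketched in simplified form. The following is an illustrative Python sketch, not the application's actual implementation: the "target networks" are stubbed as plain functions, and the mode selection module is reduced to a variance test on the input component information; all names and thresholds are assumptions.

```python
# Stub predictors standing in for trained target neural networks.

def predict_flat(info):
    # predictor suited to smooth blocks: repeat the mean value
    mean = sum(info) / len(info)
    return [mean] * len(info)

def predict_copy(info):
    # predictor suited to detailed blocks: pass samples through
    return list(info)

def select_predictor(info, predictors):
    # toy mode selection: low sample variance -> the "flat" predictor
    mean = sum(info) / len(info)
    var = sum((v - mean) ** 2 for v in info) / len(info)
    return predictors["flat"] if var < 10.0 else predictors["detail"]

predictors = {"flat": predict_flat, "detail": predict_copy}

info = [100, 101, 99, 100]          # near-constant first color component information
block = select_predictor(info, predictors)(info)  # predicted first color component block
```

In a real codec the selector would see neighbor information and encoding parameters as well, as the later embodiments describe; this sketch only shows the select-then-predict shape of the flow.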
  • FIG. 18 is a flowchart of the fourth embodiment of the processing method of the present application.
  • the processing method of the present application can be applied to a processing device (such as a server or a smart terminal), including the following steps:
  • S10: Obtain or determine at least one of second color component information, neighbor information, and encoding parameters corresponding to the first color component block;
  • the processing device first determines a frame of image in the video sequence, and obtains or determines a first color component block in the frame of image, and at least one of second color component information corresponding to the first component block, neighbor information corresponding to the first component block, and encoding parameters corresponding to the first component block, so as to perform subsequent model training.
  • a frame of image can be used as a training image, and the training images may include multiple frames of image.
  • the processing device may store various images and videos in advance, and may select a frame of image from these images, or extract a frame of image from the video sequence of a video.
  • the processing device receives an image or video input by a user, and extracts a frame of image from the image or video.
  • the processing device receives an image or video sent by other network devices, and extracts a frame of image from the image or video.
  • the processing device establishes a communication connection in advance with a network device on the network side of the mobile communication system, so that the network device can send the image or video to the terminal device through the communication connection, and the terminal device receives the image or video.
  • a frame of image is, for example, a YUV image, which contains a Y component image, a U component image and a V component image, that is, one luminance component image and two chrominance component images.
  • the Y component image is a luma component image.
  • the U component image is a chroma blue component image.
  • the V component image is a chroma red component image.
  • the sampling ratio between the Y component image, the U component image and the V component image can be 4:2:0 or another ratio mode, which is not limited here. The YUV image is then segmented to obtain at least one component block, that is, at least one Y component block, at least one U component block and at least one V component block.
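  • As an illustrative sketch (not part of the application text), the segmentation of a YUV 4:2:0 frame into component blocks can be modeled as follows; the helper `split_plane` and the block sizes are assumptions. With 4:2:0 subsampling, each chroma plane has half the width and height of the luma plane, so the chroma block co-located with an n×m luma block is (n/2)×(m/2).

```python
def split_plane(plane, bh, bw):
    """Split a 2D plane (list of rows) into bh x bw blocks, row-major."""
    h, w = len(plane), len(plane[0])
    blocks = []
    for y in range(0, h, bh):
        for x in range(0, w, bw):
            blocks.append([row[x:x + bw] for row in plane[y:y + bh]])
    return blocks

# 4x4 luma plane and the corresponding 2x2 chroma (U) plane for 4:2:0
y_plane = [[i * 4 + j for j in range(4)] for i in range(4)]
u_plane = [[0, 1], [2, 3]]

y_blocks = split_plane(y_plane, 2, 2)   # four 2x2 luma blocks
u_blocks = split_plane(u_plane, 1, 1)   # four co-located 1x1 chroma blocks
```

Each luma block and the chroma block at the same index are co-located, which is the pairing the later data-element construction relies on.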
  • when the first color component block is a U component block, the second color component information is luminance information; and when it is a V component block, the second color component information is also luminance information.
  • the second color component information may be chrominance information, such as chrominance information corresponding to a V component block, or chrominance information corresponding to a U component block.
  • the second color component information may be obtained by determining the U component block at the position corresponding to the first color component block in the U component image, and taking the chrominance information in that U component block.
  • the second color component information may be obtained by determining the V component block at the position corresponding to the first color component block in the V component image, and taking the chrominance information in that V component block.
  • the neighbor information includes: first color component information in a first color component block adjacent to the first color component block. Optionally, the adjacent first color component blocks include at least one of a first color component block adjacent to the upper side of the first color component block, a first color component block adjacent to the left side of the first color component block, and a first color component block located on the upper left side of the first color component block.
  • the first color component block is a U component block
  • its neighbor information may be a known U component block adjacent to the U component block, and if the adjacent known U component block has no chrominance information, filling processing may be performed.
  • for the first color component block adjacent to the upper side of the first color component block to be predicted, it is determined whether that adjacent block has first color component information. If it does not, the adjacent block can be filled with first color component information according to a preset first color component filling rule, so as to obtain the neighbor information corresponding to the first color component block to be predicted.
  • the first color component filling rule can fill with a fixed value set in advance, such as the value 128, or can average the existing first color component information and fill with the averaged result. The specific method is not limited here.
  • the first color component block adjacent to the left side of the first color component block to be predicted can be filled with first color component information according to the preset first color component filling rule, to obtain the neighbor information corresponding to the first color component block to be predicted.
  • the first color component block adjacent to the upper left side of the first color component block to be predicted can be filled with first color component information according to the preset first color component filling rule, to obtain the neighbor information corresponding to the first color component block to be predicted.
  • the encoding parameters include quantization parameters, bit rate, etc.
  • when the processing device obtains at least one of the second color component information, the neighbor information and the encoding parameters, it can use the first color component block as a label, and use at least one of the second color component information, the neighbor information and the encoding parameters corresponding to the first color component block as data elements.
  • S30: Determine a data subset corresponding to the data element according to the mode selection module and the data element, so as to be used for training a target neural network for color component signal prediction.
  • the data element can be input into the mode selection module; the mode selection module then selects the corresponding data subset among the data subsets according to the label in the data element, and adds the data element to that subset, so that the neural network associated with the data subset can later be trained on it. That is, in this embodiment, the data subset is used to train the target neural network for color component signal prediction; after the target neural network training is completed, the first color component block to be predicted can be predicted by the target neural network to obtain the predicted first color component block.
  • the number of data subsets can be multiple or one.
  • each target neural network corresponds to at least one data subset, so that training can be performed according to the data subset to obtain a trained target neural network.
  • when determining the data elements, the luma components can be reconstructed through the intra-prediction module to obtain the luma components of all the adopted video sequences, and the data subsets can be constructed from these luma components.
  • the process is as follows: for any luma component L, divide it into N luma blocks of size n*m, and denote the i-th luma block as ℓ_i; then:
  • the chroma component C corresponding to the luma component L is divided into N chroma blocks accordingly;
  • the chroma block corresponding to ℓ_i is denoted c_i, where 1 ≤ i ≤ N;
  • the data element corresponding to c_i may include at least one of ℓ_i and h_i, where h_i is the neighbor information of c_i.
  • input encoding parameters such as bit rate and quantization parameters may also be obtained, and the encoding parameters may also be used as one of the data elements.
  • in this way, N data pairs can be generated.
  • that is, N data pairs (ℓ_i, h_i, c_i) can be generated; the data pairs are then aggregated into a data set, and each data element in the data set is divided into its corresponding data subset through the mode selection module.
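  • A minimal sketch of assembling the N data pairs (ℓ_i, h_i, c_i) described above, with assumed container names; the encoding parameter (here a quantization parameter `qp`) is attached as an optional member, as the text allows:

```python
def build_data_pairs(luma_blocks, neighbor_infos, chroma_blocks, qp=None):
    """Couple each luma block l_i and neighbor info h_i with the
    co-located chroma block c_i, used as the training label."""
    pairs = []
    for l_i, h_i, c_i in zip(luma_blocks, neighbor_infos, chroma_blocks):
        element = {"luma": l_i, "neighbors": h_i, "label": c_i}
        if qp is not None:
            element["qp"] = qp   # encoding parameter as an optional data member
        pairs.append(element)
    return pairs

pairs = build_data_pairs(
    luma_blocks=[[10, 12], [200, 210]],
    neighbor_infos=[[11], [205]],
    chroma_blocks=[[64], [90]],
    qp=32,
)
```

The resulting list plays the role of the data set; the mode selection module would then route each element into its data subset.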
  • step S30 in the processing method of the present application may include at least one of the following:
  • Method 1: input the data element into the mode selection module, so that the mode selection module determines the data subset corresponding to the data element by using at least one of the second color component information, the neighbor information and the encoding parameter corresponding to the data element;
  • the mode selection module determines the data subset corresponding to the data element in at least one data subset according to at least one of the neighbor information, the second color component information and the encoding parameters corresponding to the first color component block.
  • the mode selection module can be a selector, and the selector can be a traditional algorithm or a specific neural network model, which is not limited here.
  • for example, the selector uses the mean square error as its decision condition: if the mean square error of the pixel values of the input luma block is very small (that is, the pixel values are very close), it can be assumed that the mean square error of the chroma block to be predicted is also relatively small, so a neural network that is good at this kind of prediction, together with its corresponding data subset, can be selected from the at least one neural network, and that subset is used as the data subset corresponding to the data element.
  • when screening the data subsets, the screening can be performed according to a preset screening mode to obtain the data subset corresponding to the data element, for example, selecting the data subset corresponding to the target neural network with the smallest indicator parameter.
  • the indicator parameter can be rate-distortion, etc.
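  • The mean-square-error decision condition above can be sketched as follows; the threshold value and the two-subset split are illustrative assumptions, not values from the application:

```python
def luma_mse(block):
    # mean squared deviation of the block's pixel values from their mean
    mean = sum(block) / len(block)
    return sum((p - mean) ** 2 for p in block) / len(block)

def select_subset(block, threshold=25.0):
    # small MSE -> pixel values are close -> subset 0 ("smooth" network);
    # otherwise subset 1 (e.g. a network with a larger receptive field)
    return 0 if luma_mse(block) < threshold else 1

smooth = [100, 101, 100, 99]   # near-constant luma block
detail = [10, 200, 30, 250]    # high-variation luma block
```

A learned selector (a small neural network) could replace `select_subset` without changing the surrounding flow.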
  • the first color component block of each frame of image in a preset video sequence, together with at least one of its corresponding second color component information and neighbor information, is obtained or determined; then, for each first color component block, the block and at least one of its corresponding second color component information and neighbor information are used to create a data subset.
  • corresponding encoding parameters are obtained, and the encoding parameters are also used as a member of the created data subset.
  • Method 2: classify the data elements using preset data rules, so as to classify the data elements into the corresponding data subsets.
  • each data subset is associated with at least one target neural network to be used for color component signal prediction.
  • the data rules can be rules set in advance by the user, such as setting the data rules according to the principle of the highest prediction accuracy of the target neural network corresponding to the data subset.
  • the first color component block is a U component block; as shown in FIG. 4, there is a frame of image I, and image I is in YUV format; the Y component image of image I is shown in FIG. 5, the U component image in FIG. 6, and the V component image in FIG. 7.
  • the first color component block to be predicted is a V component block
  • its operation mode is the same as that of the aforementioned U component block.
  • the U component block in FIG. 10 can be used as the first color component block. When there is no signal in the first color component block adjacent to the upper left side of the first color component block, it can be filled according to the preset filling rule; for example, FIG. 12 shows the upper-left neighbor information after filling. The first color component block is then used as a label, and its corresponding neighbor information and its corresponding Y component block are stored in the data subset as data elements.
  • after step S30, the processing method of the present application may further include the following step:
  • Step S40: Acquire or determine all data subsets, and train the neural network corresponding to each data subset to obtain a target neural network.
  • when the processing device completes the construction of the data subsets corresponding to the neural networks, and there is at least one data element in each data subset, each corresponding neural network can be trained according to the data elements in its data subset to obtain the target neural networks for color component signal prediction.
  • the color component signal predicted by the target neural network can be made more accurate.
  • FIG. 19 is a flowchart of the fifth embodiment of the processing method of the present application.
  • the processing method of the present application may further include:
  • S200: Predict or obtain a corresponding first color component block according to the second color component information and/or the target neural network.
  • the processing device first determines a frame of image to be predicted, and obtains or determines a first color component block to be predicted in the frame of image, and second color component information corresponding to the first color component block to be predicted.
  • the first color component block in this embodiment can be the first color component block to be predicted.
  • the processing device may store various images and videos in advance, and may select an image to be predicted from these images as a frame of image, or extract a frame of image from the video sequence of a video.
  • the processing device receives an image or video input by a user, and extracts a frame of image from the image or video for prediction.
  • the processing device receives an image or video sent by other network devices, and extracts a frame of image from the image or video for prediction.
  • the processing device establishes a communication connection in advance with a network device on the network side of the mobile communication system, so that the network device can send the image or video to the terminal device through the communication connection, and the terminal device receives the image or video.
  • a frame of image is, for example, a YUV image, which contains a Y component image, a U component image and a V component image, that is, one luminance component image and two chrominance component images.
  • the Y component image is a luma component image.
  • the U component image is a chroma blue component image.
  • the V component image is a chroma red component image.
  • the sampling ratio between the Y component image, the U component image and the V component image can be 4:2:0 or another ratio mode, which is not limited here. The YUV image is then segmented to obtain at least one component block, that is, at least one Y component block, at least one U component block and at least one V component block.
  • if the first color component block to be predicted is a U component block, the second color component information is luminance information; if it is a V component block, the second color component information is also luminance information.
  • the first color component block to be predicted is a U component block; as shown in FIG. 4, there is a frame of image I, and image I is in YUV format; the Y component image of image I is shown in FIG. 5, the U component image in FIG. 6, and the V component image in FIG. 7.
  • the Y component data is shown in FIG. 10
  • the U component data is shown in FIG. 11.
  • the operation method is the same as that of the aforementioned U component block.
  • the second color component information may be chrominance information, such as chrominance information corresponding to a V component block, or chrominance information corresponding to a U component block.
  • the second color component information may be a U component block at a position corresponding to the first color component block to be predicted in a U component image, and the chrominance information in the U component block is obtained.
  • the second color component information may be a V component block at a position corresponding to the first color component block to be predicted in a V component image, and the chrominance information in the V component block is obtained.
  • the original YUV image can be determined first, and then the first color component block to be predicted in the YUV image can be obtained or determined, and then the second color component block can be determined.
  • the first color component block to be predicted and the second color component block are on different component images, and the corresponding position of the first color component block to be predicted in the original YUV image is consistent with the corresponding position of the second color component block.
  • the second color component block can be a luminance component block corresponding to the chrominance component block, such as a Y component block, and the luminance information in the corresponding luminance component block is used as the second color component information.
  • the second color component block can be a chrominance component block corresponding to the luminance component block, such as a U component block and/or a V component block, and the chrominance information in the corresponding chrominance component block is used as the second color component information.
  • all data subsets are obtained, and the neural network corresponding to each data subset is trained to obtain a target neural network.
  • the target neural network can be trained first, and then subsequent predictions can be performed based on the target neural network. Before training the target neural network, it is necessary to first construct a data subset corresponding to each neural network, and then train the neural network corresponding to each data subset to obtain the target neural network.
  • a training image to be trained for a neural network can be obtained in a network device or its own storage area, or a video sequence can be obtained, and each frame of the video sequence can be used as a training image.
  • the luma (brightness) information corresponding to the chroma (chrominance) signal to be predicted, the neighbor information and encoding parameters of the chroma signal to be predicted can be used as a piece of data, and the chroma block to be predicted is used as the target label of the data, forming a data element in the data set.
  • the mode selection module divides each group of data elements in the data set into the data subset corresponding to its optimal neural network.
  • high-detail data elements with smaller quantization parameters are divided into data subsets corresponding to a neural network with a larger receptive field.
  • each neural network can be trained to obtain a target neural network.
  • the number of target neural networks in this embodiment is at least one.
  • the data elements in the data subset are input into the neural network for training until the trained neural network converges or achieves the expected effect.
  • the neural network can be trained by gradient descent, for example using mean square error or cross entropy as the loss function; each gradient descent step minimizes the loss, and after multiple iterations the desired accuracy is reached and training ends. Other methods can also be used for training, which are not limited here.
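  • As a hedged illustration of the gradient descent training described above, the following toy example fits a one-parameter linear "network" c = w·l to luma/chroma pairs with mean square error as the loss; a real target neural network would have many parameters, but the update rule has the same shape.

```python
def train(pairs, lr=2e-5, steps=200):
    """Fit c = w * l by gradient descent on the mean square error."""
    w = 0.0
    n = len(pairs)
    for _ in range(steps):
        # gradient of the MSE loss (1/n) * sum((w*l - c)^2) with respect to w
        grad = sum(2 * ((w * l) - c) * l for l, c in pairs) / n
        w -= lr * grad
    return w

# synthetic data subset where chroma is exactly half of luma
data = [(50, 25), (100, 50), (150, 75), (200, 100)]
w = train(data)   # converges toward w = 0.5
```

The learning rate is chosen small enough for this data's scale; in practice the stopping condition would be a loss threshold ("the expected effect") rather than a fixed step count.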
  • the target neural network corresponding to the first color component block can be screened out from each target neural network through the mode selection module.
  • the target neural network corresponding to the first color component block can be screened according to a preset screening mode; for example, the target neural network with the smallest index parameter is selected as the target neural network corresponding to the first color component block.
  • the index parameter can be rate-distortion, etc.
  • the prediction can be directly performed according to the second color component information to obtain the predicted first color component block.
  • when the second color component information is luminance information and the first color component block to be predicted is a chrominance component block (such as a U component block or a V component block), the second color component information can be input into a preset model for training to obtain the predicted first color component block, and then the color component signal (such as color information) in the first color component block can be obtained.
  • a comparison table is set in advance, and at least one color component information and a corresponding color component block are set in the comparison table. The first color component block is obtained by querying in the comparison table according to the second color component information, and the first color component block obtained by querying is used as the predicted first color component block.
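  • The comparison-table alternative above might look like the following sketch; the quantization step and the table entries are invented for illustration, and `predict_from_table` is an assumed name:

```python
def quantize(info, step=32):
    # coarse key so that similar color component information shares an entry
    return tuple(v // step for v in info)

# comparison table: quantized color component information -> stored block
comparison_table = {
    (3, 3): [64, 64],     # e.g. mid-range luma maps to this chroma block
    (6, 6): [110, 112],
}

def predict_from_table(info, table, default=None):
    # prediction reduces to a table query on the quantized key
    return table.get(quantize(info), default)

pred = predict_from_table([100, 101], comparison_table)  # key (3, 3)
```

Quantizing the key keeps the table small at the cost of prediction accuracy, which is why the neural-network path is the main embodiment.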
  • prediction can be performed directly according to the target neural network to obtain the predicted first color component block.
  • prediction parameters input by the user or other terminal can be obtained, and the prediction parameters and/or the second color component information can be input into the target neural network for model training, and the predicted first color component block can be output.
  • the prediction parameters may include parameter information related to the first color component block to be predicted, such as the first color component block adjacent to the first color component block to be predicted, and optionally, adjacent includes at least one of left adjacent, upper adjacent, upper left adjacent, lower left adjacent and upper right adjacent.
  • the target neural network may be a nonlinear algorithm or module, such as matrix weighted intra prediction (MIP) technology
  • the target neural network may include at least one of the following neural networks, such as: convolutional neural network, residual network, long short-term memory artificial neural network, recurrent neural network, three-dimensional convolutional neural network, fully connected neural network, etc.
  • the target neural network corresponding to the first color component block to be predicted can be first determined in at least one target neural network, and then the second color component information can be input into the target neural network for prediction to obtain the predicted first color component block.
  • the second color component information is brightness component information
  • the brightness component information is input into the trained target neural network for prediction, and the color component information is output and used as the color component signal of the predicted first color component block.
  • the second color component information is chrominance component information
  • the chrominance component information is input into the trained target neural network for prediction, and the color component information is output and used as the color component signal of the predicted first color component block.
  • the predicted first color component block is obtained, thereby achieving the acquisition of the color component signal in the first color component block, improving the accuracy of the color component signal prediction, and reducing the complexity of the color component signal prediction.
  • the present application also provides a processing device, please refer to FIG. 21, which is a schematic diagram of the functional modules of the processing device of the present application.
  • the processing device of the present application is applied to a processing device, and the processing device of the present application includes:
  • An acquisition module used for acquiring or determining second color component information
  • a prediction module is used to predict or obtain a corresponding first color component block according to the second color component information and/or the target neural network.
  • the processing device further includes at least one of the following:
  • a first determination module used to obtain or determine second color component information in a second color component block corresponding to the first color component block to be predicted
  • the data network training module is used to obtain all data subsets, and train the neural network corresponding to each data subset to obtain the target neural network.
  • the processing device further includes:
  • a second determination module used to obtain or determine at least one of second color component information, neighbor information, and encoding parameters corresponding to the first color component block
  • a construction module configured to use the first color component block as a label and to use at least one of the second color component information corresponding to the first color component block, the neighbor information corresponding to the first color component block, and the encoding parameter as a data element;
  • a third determination module is used to determine a data subset corresponding to the data element according to the mode selection module and the data element.
  • the prediction module comprises:
  • the prediction unit is used to predict or obtain the corresponding first color component block according to the target neural network.
  • before the prediction unit, the processing device further includes:
  • An input unit is used to input at least one of the neighbor information, second color component information and encoding parameters corresponding to the first color component block to be predicted into the mode selection module, so that the mode selection module determines the target neural network corresponding to the first color component block.
  • before the input unit, the processing device further includes:
  • a filling unit is used to fill, according to a preset first color component filling rule, the first color component block adjacent to the first color component block with first color component information if that adjacent block does not contain first color component information, so as to obtain the neighbor information corresponding to the first color component block.
  • the prediction unit is configured to perform at least one of the following:
  • the prediction result of the target neural network is used as the predicted first color component block
  • the prediction result of each target neural network is obtained or determined, and at least one of the following is included:
  • a first color component block is determined based on a function of all of the prediction results.
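  • Determining the first color component block from a function of all prediction results can be sketched as follows; the element-wise average used here is one possible combining function, chosen for illustration (the application does not fix which function is used):

```python
def combine_predictions(results):
    """results: list of predicted blocks (equal-length sample lists).
    Returns the element-wise mean of all target networks' outputs."""
    n = len(results)
    return [sum(samples) / n for samples in zip(*results)]

preds = [[60, 64], [68, 72], [64, 68]]   # outputs of three target networks
final = combine_predictions(preds)       # combined first color component block
```

A weighted mean, a median, or a rate-distortion-based choice among the candidates would slot into the same place.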
  • processing device is further configured to execute:
  • Predictions are made based on the first color component information and the target neural network, or predictions are made based on the first color component information to obtain or predict the corresponding first color component block.
  • FIG. 22 is a schematic diagram of the functional modules of the processing device of the present application.
  • the processing device of the present application is applied to a processing device, and the processing device of the present application includes:
  • a determination module used to obtain or determine at least one of second color component information, neighbor information, and encoding parameters corresponding to the first color component block
  • a data element module configured to use the first color component block as a label and at least one of the second color component information, the neighbor information and the encoding parameter as a data element;
  • a training module is used to determine a data subset corresponding to the data element according to the mode selection module and the data element, so as to train a target neural network for color component signal prediction.
  • the training module is used to perform at least one of the following:
  • the mode selection module determines a data subset corresponding to the data element using at least one of the second color component information, the neighbor information, and the encoding parameter corresponding to the data element;
  • the data elements are classified according to preset data rules to classify the data elements into the corresponding data subsets.
  • after the training module, the processing device further includes:
  • the data subset training module is used to obtain or determine all data subsets, and train the neural network corresponding to each data subset to obtain the target neural network.
  • the processing device further includes:
  • the color component prediction module is used to obtain or determine the second color component information; and predict or obtain the corresponding first color component block based on the second color component information and/or the target neural network.
  • each module in the above-mentioned processing device corresponds to a step in the above-mentioned processing method embodiments; their functions and implementation processes are not described again here.
  • An embodiment of the present application also provides a processing device, which includes a memory and a processor.
  • the memory stores a processing program, and when the processing program is executed by the processor, the steps of the processing method in any of the above embodiments are implemented.
  • An embodiment of the present application further provides a storage medium having a processing program stored thereon.
  • when the processing program is executed by a processor, the steps of the processing method in any of the above embodiments are implemented.
  • An embodiment of the present application further provides a computer program product, which includes a computer program code.
  • when the computer program code runs on a computer, the computer executes the methods in the above various possible implementation modes.
  • An embodiment of the present application also provides a chip, including a memory and a processor, wherein the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device equipped with the chip executes the methods in various possible implementation modes as described above.
  • The serial numbers of the embodiments of the present application are for description only and do not indicate the relative merit of the embodiments.
  • The steps in the methods of the embodiments of the present application can be reordered, combined, and deleted according to actual needs.
  • The units in the devices of the embodiments of the present application can be combined, divided, and deleted according to actual needs.
  • The computer software product is stored in a storage medium as mentioned above (such as ROM/RAM, a magnetic disk, or a CD-ROM) and includes a number of instructions for enabling a terminal device (which may be a mobile phone, computer, server, controlled terminal, or network device, etc.) to execute the method of each embodiment of the present application.
  • The computer program product includes one or more computer instructions. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
  • Computer instructions may be stored in a storage medium or transmitted from one storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means.
  • The storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media. Available media may be magnetic media (e.g., floppy disks, storage disks, tapes), optical media (e.g., DVDs), or semiconductor media (e.g., Solid State Disks (SSDs)), etc.


Abstract

The present application provides a processing method, a processing device, and a storage medium. The processing method can be applied to a processing device and includes the following steps: obtaining or determining second color component information; and predicting or obtaining a corresponding first color component block according to the second color component information and/or a target neural network. The technical solution of the present application can improve the accuracy of color component signal prediction.

Description

Processing Method, Processing Device, and Storage Medium
This application claims priority to the Chinese patent application No. 202211382700.2, entitled "Processing Method, Processing Device and Storage Medium" and filed with the Chinese Patent Office on November 7, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of signal data processing, and in particular to a processing method, a processing device, and a storage medium.
Background
In some implementations, when a chroma signal is predicted using one or both of the luma and chroma components, a parameterized mathematical prediction model is constructed by manual design and the optimal parameters of the model are then computed.
In conceiving and implementing the present application, the inventors found at least the following problems: the mathematical prediction models designed in H.266/VVC are essentially linear, and this linearity limits the expressive power and prediction accuracy of the model; and/or, when a chroma signal is predicted with a neural network prediction model, using a single neural network prediction model yields low accuracy in predicting the color component signal.
The foregoing is intended to provide general background information and does not necessarily constitute prior art.
Technical Solutions
In view of the above technical problems, the present application provides a processing method, a processing device, and a storage medium, aiming to solve the technical problem of how to improve the accuracy of color component signal prediction.
The present application provides a processing method, applicable to a processing device (such as a smart terminal or a server), including the following steps:
S1: obtaining or determining second color component information;
S2: predicting or obtaining a corresponding first color component block according to the second color component information and/or a target neural network.
Optionally, before step S1, the method includes at least one of the following:
obtaining or determining the second color component information in a second color component block corresponding to the first color component block to be predicted;
obtaining all data subsets, and training the neural network corresponding to each data subset according to that data subset to obtain the target neural network.
Optionally, before obtaining all the data subsets, the method further includes:
obtaining or determining at least one of the second color component information, neighbor information, and coding parameters corresponding to the first color component block;
taking the first color component block as a label, and taking at least one of the second color component information corresponding to the first color component block, the neighbor information corresponding to the first color component block, and the coding parameters as a data element;
determining, according to a mode selection module and the data element, the data subset corresponding to the data element.
Optionally, before predicting or obtaining the corresponding first color component block according to the target neural network, the method further includes:
step S22: inputting at least one of the neighbor information, the second color component information, and the coding parameters corresponding to the first color component block to be predicted into the mode selection module, so that the mode selection module determines the target neural network corresponding to the first color component block.
Optionally, before step S22, the method further includes:
step S21: if no first color component information exists in a first color component block adjacent to the first color component block, filling the adjacent first color component block with first color component information according to a preset first color component filling rule, so as to obtain the neighbor information corresponding to the first color component block.
Optionally, predicting or obtaining the corresponding first color component block according to the target neural network includes at least one of the following:
predicting according to the target neural network to obtain a third color component signal, predicting a first color component signal according to the third color component signal, and determining the first color component block according to the first color component signal;
predicting according to a target neural network corresponding to the first color component signal to obtain the first color component signal, and determining the predicted first color component block according to the first color component signal;
inputting the second color component information and the neighbor information into the target neural network to obtain or predict the corresponding first color component block;
inputting the second color component information, the neighbor information, and the coding parameters into the target neural network to obtain or predict the corresponding first color component block;
if only one target neural network exists, taking the prediction result of that target neural network as the predicted first color component block;
if at least one target neural network exists, obtaining or determining the prediction result of each target neural network, and performing at least one of the following:
aggregating all the prediction results to obtain or predict the corresponding first color component block;
selecting one of all the prediction results as the first color component block;
determining the first color component block according to a function of all the prediction results.
Optionally, the method further includes:
step S4: obtaining or determining first color component information corresponding to the first color component block;
step S5: performing prediction according to the first color component information and the target neural network, or performing prediction according to the first color component information, to obtain or predict the corresponding first color component block.
The present application further provides a processing method, applicable to a processing device (such as a smart terminal or a server), including the following steps:
S10: obtaining or determining at least one of second color component information, neighbor information, and coding parameters corresponding to a first color component block;
S20: taking the first color component block as a label, and taking at least one of the second color component information, the neighbor information, and the coding parameters as a data element;
S30: determining, according to a mode selection module and the data element, the data subset corresponding to the data element, for training a target neural network that performs color component signal prediction.
Optionally, step S30 includes at least one of the following:
inputting the data element into the mode selection module, so that the mode selection module determines the data subset corresponding to the data element using at least one of the second color component information, the neighbor information, and the coding parameters corresponding to the data element;
classifying the data element using preset data rules, so as to classify the data element into the corresponding data subset.
Optionally, after step S30, the method further includes:
step S40: obtaining or determining all the data subsets, and training the neural network corresponding to each data subset according to that data subset to obtain the target neural network.
Optionally, the method further includes:
obtaining or determining second color component information;
determining, based on the mode selection module, the target neural network corresponding to the first color component block;
predicting or obtaining the corresponding first color component block according to the second color component information and/or the target neural network.
The present application further provides a processing apparatus, including:
an obtaining module, configured to obtain or determine second color component information; and
a prediction module, configured to predict or obtain the corresponding first color component block according to the second color component information and/or a target neural network.
The present application further provides a processing apparatus, including:
a determining module, configured to obtain or determine at least one of second color component information, neighbor information, and coding parameters corresponding to a first color component block;
a data element module, configured to take the first color component block as a label and take at least one of the second color component information, the neighbor information, and the coding parameters as a data element; and
a training module, configured to determine, according to a mode selection module and the data element, the data subset corresponding to the data element, for training a target neural network that performs color component signal prediction.
The present application further provides a processing device, including a memory and a processor, wherein the memory stores a computer program that, when executed by the processor, implements the steps of any of the processing methods above.
The present application further provides a storage medium storing a computer program that, when executed by a processor, implements the steps of any of the processing methods above.
As described above, the processing method of the present application is applicable to a processing device: the second color component information to be used for prediction is obtained or determined, and prediction is performed according to the second color component information and/or a target neural network to obtain the predicted first color component block. With this technical solution, the first color component block can be predicted accurately from the second color component information corresponding to the first color component block to be predicted and/or the target neural network, so that the color component signal in the first color component block is obtained; this improves the accuracy of color component signal prediction and reduces its complexity.
Brief Description of the Drawings
The accompanying drawings are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present application and, together with the description, serve to explain its principles. To describe the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below; obviously, a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing the various embodiments of the present application;
Fig. 2 is an architecture diagram of a communication network system provided by an embodiment of the present application;
Fig. 3 is a schematic flowchart of the processing method according to the first embodiment;
Fig. 4 is a schematic image of the YUV image in the processing method of the present application;
Fig. 5 is a schematic image of the luma component in the processing method of the present application;
Fig. 6 is a schematic image of the chroma blue component in the processing method of the present application;
Fig. 7 is a schematic image of the chroma red component in the processing method of the present application;
Fig. 8 is a schematic image of the luma component after partitioning in the processing method of the present application;
Fig. 9 is a schematic image of the chroma blue component after partitioning in the processing method of the present application;
Fig. 10 is a schematic diagram of the pixel data of the luma component block based on Fig. 8 in the processing method of the present application;
Fig. 11 is a schematic diagram of the pixel data of the first color component block to be predicted in the processing method of the present application;
Fig. 12 is a schematic diagram of the neighbor information of the first color component block to be predicted in the processing method of the present application;
Fig. 13 is a schematic flowchart of the processing method according to the second embodiment;
Fig. 14 is a schematic workflow diagram of the mode selection module in the processing method of the present application;
Fig. 15 is a schematic workflow diagram of chroma component prediction in the processing method of the present application;
Fig. 16 is a schematic flowchart of selecting neural network 1 for prediction during chroma component prediction in the processing method of the present application;
Fig. 17 is a schematic flowchart of selecting neural network 3 for prediction during chroma component prediction in the processing method of the present application;
Fig. 18 is a schematic flowchart of the processing method according to the fourth embodiment;
Fig. 19 is a schematic flowchart of the processing method according to the fifth embodiment;
Fig. 20 is a schematic flowchart of the processing method according to the third embodiment;
Fig. 21 is a schematic diagram of the functional modules of a processing apparatus provided by an embodiment of the present application;
Fig. 22 is a schematic diagram of the functional modules of another processing apparatus provided by an embodiment of the present application.
The realization of the objectives, functional features, and advantages of the present application will be further described with reference to the embodiments and the accompanying drawings. The drawings above show specific embodiments of the present application, which are described in more detail below; these drawings and the textual description are not intended to limit the scope of the inventive concept in any way, but rather to explain the concept of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description of the Embodiments
Exemplary embodiments are described in detail here, with examples shown in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; they are merely examples of apparatuses and methods consistent with some aspects of the application as detailed in the appended claims.
It should be noted that, herein, the terms "comprise", "include", and their variants are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes it. Components, features, and elements with the same name in different embodiments of the present application may have the same meaning or different meanings; their specific meaning is determined by their explanation in the specific embodiment or further by the context of that embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various kinds of information, the information should not be limited by these terms; the terms are only used to distinguish information of the same type from one another. For example, without departing from the scope hereof, first information may also be called second information, and likewise second information may be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "while", or "in response to determining". Furthermore, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It should be further understood that the terms "comprising" and "including" indicate the presence of the stated features, steps, operations, elements, components, items, categories, and/or groups, but do not exclude the presence, occurrence, or addition of one or more other features, steps, operations, elements, components, items, categories, and/or groups. The terms "or", "and/or", and "including at least one of the following" as used in the present application may be interpreted as inclusive, or as meaning any one or any combination: for example, "including at least one of the following: A, B, C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C", and likewise "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C". An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
It should be understood that although the steps in the flowcharts of the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, their execution is not strictly ordered and they may be executed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages, which are not necessarily completed at the same moment and may be executed at different times; their order of execution is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Depending on the context, the words "if" and "in case" may be interpreted as "when", "while", "in response to determining", or "in response to detecting"; similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)".
It should be noted that step designations such as S10 and S20 are used herein to express the corresponding content clearly and concisely and do not constitute a substantive limitation on order; in specific implementation, those skilled in the art may execute S20 before S10, all of which falls within the protection scope of the present application. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are only intended to facilitate the description of the present application and have no specific meaning in themselves; therefore "module", "component", and "unit" may be used interchangeably.
The processing device in the present application may be a smart terminal or a server, as determined by context. Smart terminals may be implemented in various forms, for example processing devices such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDA), portable media players (PMP), navigation devices, wearable devices, smart bracelets, and pedometers, as well as fixed terminals such as digital TVs and desktop computers. The following description takes a mobile terminal as an example; those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the configurations according to the embodiments of the present application can also be applied to fixed-type terminals.
Referring to Fig. 1, a schematic diagram of the hardware structure of a mobile terminal implementing the various embodiments of the present application: the mobile terminal 100 may include an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111, among other components. Those skilled in the art will understand that the structure shown in Fig. 1 does not limit the mobile terminal, which may include more or fewer components than shown, combine certain components, or arrange components differently. The components of the mobile terminal are introduced below with reference to Fig. 1:
The radio frequency unit 101 may be used for receiving and sending signals while transceiving information or during a call; specifically, downlink information from a base station is received and passed to the processor 110 for processing, and uplink data is sent to the base station. The RF unit 101 typically includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, and a duplexer. The RF unit 101 may also communicate with networks and other devices via wireless communication, using any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), TDD-LTE (Time Division Duplexing-Long Term Evolution), and 5G.
WiFi is a short-range wireless transmission technology. Through the WiFi module 102 the mobile terminal can help the user send and receive e-mail, browse web pages, and access streaming media, providing wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it is not an essential part of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the RF unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound when the mobile terminal 100 is in a call-signal reception mode, call mode, recording mode, speech recognition mode, broadcast reception mode, or the like. The audio output unit 103 may also provide audio output related to specific functions performed by the mobile terminal 100 (e.g., call-signal reception sound, message reception sound) and may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals and may include a graphics processing unit (GPU) 1041 and a microphone 1042. The GPU 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode; the processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the RF unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in telephone-call, recording, speech-recognition, and similar operating modes and process it into audio data; in telephone-call mode the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the RF unit 101. The microphone 1042 may implement various noise cancellation (or suppression) algorithms to remove noise or interference generated while receiving and sending audio signals.
The mobile terminal 100 further includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 1061 according to ambient light, and a proximity sensor, which can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for posture-recognition applications (such as landscape/portrait switching, related games, magnetometer posture calibration) and vibration-recognition functions (such as pedometer, tapping). Other sensors that may be configured on the phone, such as fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, and infrared sensors, are not described further here.
The display unit 106 is used to display information input by the user or provided to the user, and may include a display panel 1061 configured as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 may receive input numeric or character information and generate key signal input related to user settings and function control of the mobile terminal. Optionally, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects the user's touch operations on or near it (such as operations with a finger, stylus, or any suitable object or accessory on or near the touch panel 1071) and drives the corresponding connection device according to a preset program. The touch panel 1071 may include a touch detection device and a touch controller: optionally, the touch detection device detects the user's touch position and the signal brought by the touch operation and passes the signal to the touch controller, which receives the touch information, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. The touch panel 1071 may be implemented in resistive, capacitive, infrared, surface-acoustic-wave, and other types. Besides the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control and power keys), a trackball, a mouse, and a joystick; no limitation is imposed here.
Optionally, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, the operation is passed to the processor 110 to determine the type of touch event, and the processor 110 then provides the corresponding visual output on the display panel 1061 according to the type of touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 are two independent components implementing the input and output functions of the mobile terminal, in some embodiments they may be integrated to implement those functions; no limitation is imposed here.
The interface unit 108 serves as an interface through which at least one external device can connect to the mobile terminal 100. For example, external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The interface unit 108 may receive input (e.g., data, power) from an external device and transmit it to one or more elements within the mobile terminal 100, or may transmit data between the mobile terminal 100 and external devices.
The memory 109 may store software programs and various data. It may mainly include a program storage area, which may store an operating system and applications needed for at least one function (such as sound playback and image playback), and a data storage area, which may store data created during use of the phone (such as audio data and a phone book). In addition, the memory 109 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other solid-state storage device.
The processor 110 is the control center of the mobile terminal. It connects the various parts of the whole terminal using various interfaces and lines, and performs the terminal's functions and processes data by running or executing the software programs and/or modules stored in the memory 109 and invoking the data stored there, thereby monitoring the terminal as a whole. The processor 110 may include one or more processing units; preferably, it may integrate an application processor, which mainly handles the operating system, user interface, and applications, and a modem processor, which mainly handles wireless communication. It should be understood that the modem processor may alternatively not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (such as a battery) supplying power to the components; preferably, the power supply 111 is logically connected to the processor 110 through a power management system, which implements functions such as managing charging, discharging, and power consumption. Although not shown in Fig. 1, the mobile terminal 100 may further include a Bluetooth module and the like, which are not described further here.
To facilitate understanding of the embodiments of the present application, the communication network system on which the mobile terminal of the present application is based is described below.
Referring to Fig. 2, an architecture diagram of a communication network system provided by an embodiment of the present application: the communication network system is an LTE system of universal mobile telecommunications technology, comprising, connected in communication in sequence, a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204. Optionally, the UE 201 may be the terminal 100 described above, which is not repeated here.
The E-UTRAN 202 includes eNodeB 2021, other eNodeBs 2022, and so on. Optionally, eNodeB 2021 may connect to other eNodeBs 2022 via backhaul (e.g., an X2 interface); eNodeB 2021 connects to the EPC 203 and can provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and so on. Optionally, the MME 2031 is the control node that handles signaling between the UE 201 and the EPC 203, providing bearer and connection management. The HSS 2032 provides registers to manage functions such as a home location register (not shown) and stores user-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW 2034; the PGW 2035 may provide IP address allocation and other functions for the UE 201; the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the Internet, intranets, IMS (IP Multimedia Subsystem), or other IP services.
Although the above description takes an LTE system as an example, those skilled in the art should know that the present application applies not only to LTE systems but also to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, 5G, and future new network systems (such as 6G); no limitation is imposed here.
Based on the above mobile terminal hardware structure and communication network system, the various embodiments of the present application are presented.
First Embodiment
Referring to Fig. 3, a schematic flowchart of the first embodiment of the processing method of the present application. In this embodiment, the processing method of the present application may be applied to a processing device (such as a smart terminal or a server) and includes:
Step S1: obtaining or determining second color component information;
In this embodiment, the processing device first determines a frame of image to be predicted, and obtains or determines the first color component block to be predicted in that frame and the second color component information corresponding to the first color component block. Optionally, the processing device may be a smart terminal, such as a mobile phone or computer, a server, or a cloud server.
Optionally, the processing device may store images and videos in advance and select one image from them as the frame to be predicted, or extract a frame from the video sequence of a video. Alternatively, the processing device receives an image or video input by the user and extracts a frame from it for prediction; or it receives an image or video sent by another network device and extracts a frame from it for prediction. In the latter case, the processing device establishes a communication connection in advance with the network device on the network side of its mobile communication system, through which the network device delivers the image or video to the terminal device, which thus receives it.
Optionally, the frame may be in YUV format, in which there are a Y component image, a U component image, and a V component image, i.e. one luma component image and two chroma component images. Optionally, the Y component image is the luma component image, the U component image is the chroma blue component image, and the V component image is the chroma red component image. The component-block ratio among the Y, U, and V component images may be 4:2:0 or another ratio mode; no limitation is imposed here. The YUV image is then partitioned to obtain at least one component block, i.e. at least one Y component block, at least one U component block, and at least one V component block.
Optionally, when the first color component block to be predicted is a chroma component block, e.g. a U component block or a V component block, the second color component information is luma information. Optionally, the luma component block at the position corresponding to the first color component block to be predicted is determined in the luma component image, and the luma information of that luma block is then obtained. Optionally, the first color component block may be the first color component block to be predicted.
For example, when the first color component block to be predicted is a U component block: as shown in Fig. 4 there is a frame of image I in YUV format; its Y component image is shown in Fig. 5, its U component image in Fig. 6, and its V component image in Fig. 7. When the resolution of image I is 832*480, one may set n=m=16 and, as shown in Fig. 8, partition the Y component image of image I into 52*30 blocks and, as shown in Fig. 9, partition the U component image of image I evenly into 52*30 blocks of 8*8. Taking the block in the second row and second column as an example, the Y component data is shown in Fig. 10 and the U component data in Fig. 11. When the first color component block to be predicted is a V component block, the operation is the same as for the U component block.
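The block-partitioning arithmetic in the example above can be sketched as follows (a minimal illustration, assuming the 16*16 luma block size and 4:2:0 subsampling from the example; the function name is ours):

```python
def block_grid(width, height, n=16, m=16, ratio=2):
    """Return (luma_grid, chroma_grid) as (cols, rows) tuples.

    The luma plane is split into n*m blocks; with 4:2:0 subsampling
    each chroma plane is half the luma resolution in both dimensions,
    so the co-located chroma blocks are (n//ratio)*(m//ratio)."""
    luma = (width // n, height // m)
    chroma = (width // ratio // (n // ratio),
              height // ratio // (m // ratio))
    return luma, chroma

# The 832*480 image I from the example: 52*30 luma blocks of 16*16,
# and 52*30 co-located chroma blocks of 8*8.
print(block_grid(832, 480))  # ((52, 30), (52, 30))
```

Each chroma block thus shares its grid index with the luma block covering the same spatial area, which is how the co-located luma block is found for a chroma block to be predicted.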
Optionally, when the first color component block to be predicted is a luma block, e.g. a Y component block, the second color component information may be chroma information, such as the chroma information corresponding to a V component block or to a U component block. Optionally, the second color component information may be taken from the U component block at the position in the U component image corresponding to the first color component block to be predicted, obtaining the chroma information in that U component block; or from the V component block at the corresponding position in the V component image, obtaining its chroma information.
Optionally, before step S1, the processing method of the present application may further include at least one of the following:
Method one: obtaining or determining the second color component information in the second color component block corresponding to the first color component block;
In this embodiment, the original YUV image may be determined first, then the first color component block to be predicted in the YUV image is obtained or determined, and the second color component block is then determined. Optionally, the first and second color component blocks lie in different component images, and the position of the first color component block in the original YUV image coincides with that of the second color component block.
Optionally, when the first color component block is a chroma component block, e.g. a U component block and/or a V component block, the second color component block may be the luma component block (e.g. a Y component block) corresponding to that chroma component block, and the luma information in that corresponding luma block is taken as the second color component information.
Optionally, when the first color component block is a luma component block, e.g. a Y component block, the second color component block may be the chroma component block (e.g. a U component block and/or a V component block) corresponding to that luma block, and the chroma information in that corresponding chroma block is taken as the second color component information.
Method two: obtaining all data subsets, and training the neural network corresponding to each data subset according to that data subset to obtain the target neural network.
In this embodiment, the target neural network may also be trained first, and subsequent prediction then performed according to it. Before training the target neural network, the data subset corresponding to each neural network must first be constructed, and each neural network is then trained on its corresponding data subset to obtain the target neural network.
Optionally, training images for neural network training may first be obtained from a network device or from the device's own storage area, or a video sequence may be obtained and each frame in it used as a training image. For each training image, the luma information corresponding to the chroma signal to be predicted, the neighbor information of the chroma signal to be predicted, and the coding parameters may be taken as one data record, with the chroma block to be predicted as the target label of that record, forming one data element of the data set. The mode selection module then assigns each data element in the data set to the data subset of its best-matching network; optionally, each data element may be assigned to several data subsets or to one, without limitation. For example, high-detail data elements with small quantization parameters are assigned to the data subset of a neural network with a larger receptive field. Once each network's data subset is obtained, each network can be trained to obtain the target neural network. Optionally, the number of target neural networks in this embodiment is at least one. During training, the data elements of a data subset are fed into the network until the trained network converges or reaches the desired performance. Training may follow gradient descent, e.g. using mean squared error or cross-entropy as the loss function; each gradient-descent step minimizes the loss, and after many iterations the desired accuracy is reached and training ends. Other training methods may also be used, without limitation.
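The training loop described above (gradient descent minimizing a mean-squared-error loss until it converges) can be sketched with a toy linear model standing in for a per-subset neural network; the data, model, and learning rate here are synthetic illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data subset: 64 flattened 4-pixel "luma" inputs, scalar "chroma" targets.
X = rng.normal(size=(64, 4))
true_w = np.array([0.5, -1.0, 0.25, 2.0])
y = X @ true_w

w = np.zeros(4)   # model parameters, initialized to zero
lr = 0.1          # gradient-descent step size

def mse(w):
    """Mean-squared-error loss of the current parameters."""
    return np.mean((X @ w - y) ** 2)

loss_before = mse(w)
for _ in range(200):                      # repeated gradient steps
    grad = 2 * X.T @ (X @ w - y) / len(y) # gradient of the MSE loss
    w -= lr * grad

print(mse(w) < loss_before)  # the loss has decreased as training converged
```

A real per-subset network would replace the linear map with a CNN/FCNN and iterate over mini-batches, but the stopping logic is the same: train until the loss stops improving.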
Optionally, the mode selection module may be used to select, among the target neural networks, the one corresponding to the first color component block. Optionally, the target neural networks may be screened according to a preset screening mode to obtain the one corresponding to the first color component block, e.g. by selecting the target neural network that minimizes a given metric, such as rate-distortion.
Step S2: predicting or obtaining the corresponding first color component block according to the second color component information and/or the target neural network.
In this embodiment, once the second color component information is obtained, prediction can be performed directly from it to obtain the predicted first color component block. Optionally, when the second color component information is luma information and the first color component block to be predicted is a chroma component block (such as a U or V component block), the second color component information may be fed into a preset model to obtain the predicted first color component block, after which the color component signal (e.g. color information) in the first color component block is obtained. Alternatively, a look-up table containing at least one piece of color component information and its corresponding color component block may be set in advance; the first color component block is looked up from the table using the second color component information, and the block found is taken as the predicted first color component block.
Optionally, once the target neural network corresponding to the first color component block is obtained, prediction can be performed directly with it to obtain the predicted first color component block. Optionally, when predicting directly with the target neural network, prediction parameters input by a user or another terminal may be obtained, and the prediction parameters and/or the second color component information are fed into the target neural network, whose output is the predicted first color component block. Optionally, the prediction parameters may include parameter information related to the first color component block to be predicted, such as the first color component blocks adjacent to it; optionally, adjacency includes at least one of left, above, above-left, below-left, and above-right. Optionally, the target neural network may be a non-linear algorithm or module, such as Matrix Weighted Intra Prediction (MIP), and may include at least one of the following neural networks: a convolutional neural network (CNN), a residual network (ResNet), a long short-term memory network (LSTM), a recurrent neural network (RNN), a 3-D convolutional neural network (3D-CNN), a fully connected neural network (FCNN), etc.
Optionally, once the second color component information and the trained target neural networks are obtained, the mode selection module may first determine, among the at least one target neural network, the one corresponding to the first color component block to be predicted, and the second color component information is then fed into that network to obtain the predicted first color component block. For example, when the second color component information is luma component information, it is fed into the trained target neural network for prediction, and the output color component information is taken as the color component signal of the predicted first color component block; likewise when the second color component information is chroma component information.
Optionally, before obtaining all the data subsets, the processing method of the present application may further include:
obtaining or determining at least one of the second color component information, neighbor information, and coding parameters corresponding to the first color component block; taking the first color component block as a label, and taking at least one of its corresponding second color component information, neighbor information, and coding parameters as a data element; and determining, according to the mode selection module and the data element, the data subset corresponding to the data element.
In this embodiment, before the first color component block is predicted with the target neural network, data subsets must be constructed so that the preset neural networks can be trained on them to obtain the trained target neural networks. When constructing the data subsets, at least one training image may first be obtained; for each training image, at least one of the second color component information corresponding to the first color component block (e.g. the luma information corresponding to the chroma signal), the neighbor information (e.g. the neighbor information of the chroma signal), and the coding parameters is obtained or determined. With the first color component block as the label, at least one of these is taken as a data element and fed into the mode selection module, which selects from the preset data subsets the one corresponding to the data element and stores the element in that subset.
For example, when determining the data elements, the intra-prediction module may first reconstruct the luma component, obtaining the luma components of all adopted video sequences, from which the data subsets are built. The process is as follows: for any luma component L, partition it evenly into N luma blocks of size n*m, the i-th luma block denoted l_i, so that L = {l_1, ..., l_N}. Optionally, in the same way, the chroma component C1 corresponding to the luma component L is partitioned evenly into N chroma blocks of corresponding size, the i-th denoted C_i (1 ≤ i ≤ N). When the first color component block to be predicted is C_i and its neighbor information is h_i, the data element corresponding to C_i may include at least one of l_i and h_i. Optionally, input coding parameters such as bitrate and quantization parameters may also be obtained and used as part of the data element.
Optionally, when constructing the data subsets, with (l_i, h_i) as the data element and C_i as the data label, N data pairs (l_i, h_i, c_i) can be generated; or with (l_i, h_i) plus the coding parameters as the data element and C_i as the label, N data pairs (l_i, h_i, c_i) can likewise be generated. The pairs are then collected into the data set, and the mode selection module assigns each data element in the data set to its corresponding data subset.
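A minimal sketch of assembling the (l_i, h_i, c_i) data pairs and routing each one to a data subset; the QP-based routing rule and its threshold stand in for the mode selection module and are illustrative assumptions, not the normative rule:

```python
def build_pairs(luma_blocks, neighbors, chroma_blocks):
    """Zip co-located blocks into (l_i, h_i, c_i) training pairs,
    with the chroma block c_i serving as the label."""
    return list(zip(luma_blocks, neighbors, chroma_blocks))

def route_to_subset(qp, threshold=30):
    """Toy mode-selection rule: low-QP (high-detail) elements go to the
    subset of the larger-receptive-field network (index 1), others to 0."""
    return 1 if qp < threshold else 0

subsets = {0: [], 1: []}
pairs = build_pairs([[1, 2]], [[3]], [[4]])  # one tiny synthetic element
for pair in pairs:
    subsets[route_to_subset(qp=22)].append(pair)

print(subsets)  # the QP=22 element lands in subset 1
```

Each subset is then used to train its associated network, so that every network specializes in the kind of signal its subset contains.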
In this embodiment, the second color component information of the first color component block is obtained or determined, the target neural network is determined based on the mode selection module, and prediction is then performed according to the second color component information and/or the target neural network to obtain the predicted first color component block. The color component signal in the first color component block can thus be obtained, improving the accuracy of color component signal prediction and reducing its complexity.
Second Embodiment
Referring to Fig. 13, a schematic flowchart of the processing before step S2 in the first embodiment of the processing method of the present application. In this embodiment, one implementation of step S2 of the processing method may include:
predicting according to the target neural network to obtain or predict the corresponding first color component block.
After the processing device obtains the trained target neural networks, the mode selection module may first select among them the one corresponding to the first color component block, and prediction is then performed with the selected target network to obtain the predicted first color component block.
Optionally, the target neural network may be a non-linear algorithm or module, such as matrix weighted intra prediction, and may include at least one of the following neural networks: a convolutional neural network, a residual network, a long short-term memory network, a recurrent neural network, a 3-D convolutional neural network, a fully connected neural network, etc.
Optionally, when the mode selection module selects at least one target neural network, prediction may be performed with each of them and the results combined by weighted averaging or another calculation to obtain the predicted first color component block.
Optionally, the first color component block may be a luma component block, such as a Y component block, or a chroma component block, such as a U or V component block.
Optionally, before predicting or obtaining the corresponding first color component block according to the target neural network, the method may further include:
step S22: inputting at least one of the neighbor information, the second color component information, and the coding parameters corresponding to the first color component block to be predicted into the mode selection module, so that the mode selection module determines the target neural network corresponding to the first color component block.
In this embodiment, before determining the target neural network, at least one of the neighbor information, second color component information, and coding parameters corresponding to the first color component block to be predicted is first obtained from the already-reconstructed YUV component blocks and then fed into the mode selection module, so that the module determines, among the at least one trained neural network, the target network for the first color component block according to that input. Optionally, the mode selection module of this embodiment can perform data classification, routing the received neighbor information, second color component information, and coding parameters of the first color component block to the corresponding target neural network, so that prediction with that network yields the predicted first color component block.
For example, as shown in Fig. 14, given a luma component L and a chroma component C: when the first color component block to be predicted is a chroma component block, the neighbor information adjacent to the block to be predicted in the chroma component C is determined and input into the mode selection module. The luma block l_i corresponding to the block to be predicted is obtained from the luma component L and input into the module, and a coding parameter such as QP=35 is also input. The mode selection module makes a decision on the received coding parameters, neighbor information, and luma block l_i and outputs a signal class; the target neural network is determined from the signal class, and prediction with it yields the predicted first color component block.
Optionally, the neighbor information includes the first color component information in the first color component blocks adjacent to the first color component block to be predicted; optionally, the adjacent blocks include at least one of the block adjacent above, the block adjacent to the left, and the block at the upper left of the block to be predicted. For example, when the block to be predicted is a U component block, its neighbor information may be the known adjacent U component blocks; if an adjacent known U component block contains no chroma information, filling may be performed.
Optionally, the coding parameters include a quantization parameter, and the neural network corresponding to the quantization parameter is taken as the target neural network.
Optionally, the second color component information may be obtained in the manner described in the first embodiment.
Optionally, the mode selection module may be a selector, and the selector may be a conventional algorithm or a specific neural network model; no limitation is imposed here. For example, the selector may read information data from the coded video data stream that indicates the target neural network. As another example, when the selector uses mean square variance as its decision criterion: if the pixel-value variance of the input luma block is small (i.e. the pixel values are close to one another), the variance of the chroma block to be predicted can be assumed to be small as well, so among the trained networks the one that is good at this kind of prediction can be selected as the target neural network.
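The mean-square-variance decision just described can be sketched as a tiny selector; the two-network split, the network names, and the variance threshold are illustrative assumptions:

```python
from statistics import pvariance

def select_network(luma_block, threshold=25.0):
    """If the luma pixels are nearly uniform (small variance), assume the
    co-located chroma block is also smooth and pick the network that is
    good at smooth content; otherwise pick the high-detail network."""
    return "smooth_net" if pvariance(luma_block) < threshold else "detail_net"

print(select_network([100, 101, 100, 102]))  # near-uniform block -> smooth_net
print(select_network([10, 200, 35, 250]))    # high-contrast block -> detail_net
```

A production selector could equally be a small classifier network, or simply decode a network index signaled in the bitstream, as the text notes.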
Optionally, when the mode selection module selects at least one neural network, prediction may be performed with each and the results combined by a suitable calculation to obtain the final predicted first color component block.
Optionally, at least one mode selection module may be provided in this embodiment, and the modules can run in parallel to improve prediction efficiency.
For example, as shown in Fig. 15, given the luma block l_i of the luma component L, the chroma component C, the block C_i to be predicted in C, and its neighbor information h_i: the luma block l_i corresponding to C_i is first selected in the luma component L, the neighbor information h_i adjacent to C_i is selected in the chroma component C, and at least one of the selected l_i, h_i, and the coding parameters is input into the mode selection module. When the trained networks include neural network 1, neural network j, and neural network k, denoted F_1, F_j, and F_k respectively, the set of all networks is F = {F_1, ..., F_j, ..., F_k}. The mode selection module then selects a target neural network F* from F according to at least one of the input l_i, h_i, and coding parameters, and prediction with the target network yields the predicted first color component block, i.e. Ĉ_i = F*(l_i, h_i).
Optionally, when screening for the target neural network, the mode selection module may follow preset screening rules, e.g. for high-detail chroma prediction with small quantization parameters, selecting a deeper network structure with a larger receptive field as the target neural network.
Optionally, before step S22, the method may further include:
step S21: if no first color component information exists in a first color component block adjacent to the first color component block, filling the adjacent first color component block with first color component information according to a preset first color component filling rule, so as to obtain the neighbor information corresponding to the first color component block.
In this embodiment, when obtaining the neighbor information of the first color component block, the neighbor information includes the first color component information in the first color component blocks adjacent to it; optionally, the adjacent blocks include at least one of the block adjacent above, the block adjacent to the left, and the block at the upper left of the first color component block.
Optionally, it is determined whether the block adjacent above the first color component block contains first color component information; if not, it may be filled with first color component information according to the preset first color component filling rule, so as to obtain the neighbor information of the first color component block. Optionally, the filling rule may fill with a preset fixed value, such as the value 128, or fill with the average of the existing first color component information. Optionally, when the first color component information consists of pixels, the rule may also be: if no reference pixel is usable, fill all reference pixels with half of the pixel maximum; if all reference pixels are usable, copy the usable reference pixels for filling; if the reference pixels are partially usable and the bottom-left reference pixel is usable, fill upward and rightward from the bottom-left reference pixel with the nearest usable reference pixel; if the reference pixels are partially usable and the bottom-left reference pixel is not usable, search rightward from the bottom-left reference pixel until the first usable reference pixel is found, fill the preceding pixels with its value, then traverse the remaining pixels and fill each gap with the nearest usable pixel. The specific manner is not limited here.
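The fallback-filling rules above (half of the pixel maximum when nothing is available, otherwise back-fill from the first available sample and then propagate forward) can be sketched over a 1-D run of reference samples, with None marking unavailable positions; this is a simplified illustration, not the normative procedure:

```python
def fill_references(refs, bit_depth=8):
    """refs: list of pixel values or None (unavailable), ordered from the
    bottom-left reference upward/rightward. Returns a fully filled list."""
    if all(r is None for r in refs):
        # No reference available: fill everything with half the pixel maximum.
        return [(1 << bit_depth) // 2] * len(refs)
    out = list(refs)
    # Search from the bottom-left end for the first available sample
    # and back-fill every gap before it with that value.
    first = next(i for i, r in enumerate(out) if r is not None)
    for i in range(first):
        out[i] = out[first]
    # Then sweep forward, filling each gap with its nearest filled neighbor.
    for i in range(first + 1, len(out)):
        if out[i] is None:
            out[i] = out[i - 1]
    return out

print(fill_references([None, None, None]))    # [128, 128, 128]
print(fill_references([None, 50, None, 70]))  # [50, 50, 50, 70]
```

The same sweep generalizes to the 2-D case (above, left, and above-left neighbors) by ordering the reference samples from the bottom-left corner upward and then rightward.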
Optionally, it is likewise determined whether the block adjacent to the left of the first color component block contains first color component information; if not, it may be filled according to the preset first color component filling rule to obtain the neighbor information of the first color component block. The same determination and filling apply to the block adjacent at the upper left of the first color component block.
Optionally, predicting or obtaining the corresponding first color component block according to the target neural network in the processing method of the present application may include at least one of the following:
Method one: predicting according to the target neural network to obtain a third color component signal, predicting a first color component signal according to the third color component signal, and determining the predicted first color component block according to the first color component signal;
In this embodiment, after the target neural network is determined, at least one of the second color component information, the neighbor information, and the coding parameters may be fed into the target network for prediction, and the prediction result is taken as the third color component signal. For example, when the second color component signal is the luma signal corresponding to a Y component block and the third color component signal predicted by the target network is the chroma signal corresponding to a U component block, the U-block chroma signal can then be used to predict the chroma signal corresponding to the V component block; the predicted V-block chroma signal is taken as the predicted first color component signal, and when a first color component block carrying that signal is detected whose position corresponds to the position of the second color component block of the second color component signal, that block is taken as the predicted first color component block. Optionally, the target network may also predict the chroma signal of a U component block as the third color component signal, from which the chroma signal of the V component block is predicted and taken as the predicted first color component block. Optionally, the target network may also predict the luma signal of the Y component as the third color component signal, from which the chroma signal of the U and/or V component block is predicted and taken as the first color component signal.
Optionally, when predicting the first color component signal from the third color component signal, a mapping table between the two may be set in advance and prediction performed according to it, or a neural network model may be used; no limitation is imposed here.
Method two: predicting according to the target neural network corresponding to the first color component signal to obtain the first color component signal, and determining the first color component block according to it;
In this embodiment, once the mode selection module has selected the target neural network corresponding to the first color component signal, prediction can be performed directly with that network to obtain the first color component signal, and the predicted first color component block, which carries that signal, is then determined from it.
Method three: inputting the second color component information and the neighbor information into the target neural network to obtain or predict the corresponding first color component block;
In this embodiment, after the target network is determined, the previously obtained second color component information and neighbor information of the first color component block to be predicted are fed into the target network for training and prediction, and the predicted first color component block is determined from the output. Optionally, a preset loss function may be used for training and prediction. For example, when predicting a chroma component block with the second color component information being the luma block l_i, the neighbor information h_i of the chroma block and the luma block l_i are fed into the target network F, and the chroma component predicted from the luma component can be expressed as Ĉ_i = F(l_i, h_i).
Method four: inputting the second color component information, the neighbor information, and the coding parameters into the target neural network to obtain or predict the corresponding first color component block;
In this embodiment, after the target network is determined, the previously obtained second color component information, neighbor information, and coding parameters (such as bitrate) of the block to be predicted are fed into the target network for training and prediction, and the predicted block is determined from the output. Optionally, a preset loss function may be used. For example, when predicting a chroma component block with the second color component information being the luma block l_i, the neighbor information h_i, the luma block l_i, and the coding parameters p are fed into the target network F, and the chroma component predicted from the luma component can be expressed as Ĉ_i = F(l_i, h_i, p).
For example, as shown in Fig. 16, given a luma component L and a chroma component C: once the block to be predicted is determined in C, the luma block corresponding to it in L, the neighbor information adjacent to it in the chroma component C, and the coding parameters are input together into the mode selection module. Optionally, the coding parameters may include QP=32 and bitrate=1.5 Mbps. The mode selection module makes a decision on all the received information and outputs a signal class, i.e. a selected network; if neural network 1 is selected among neural networks 1 through j through k, prediction is performed with its function F_1 to obtain the predicted chroma component, and the prediction result is output, e.g. 110, 110, 108, 108.
Method five: if only one target neural network exists, taking its prediction result as the first color component block;
In this embodiment, when the mode selection module selects one and only one target neural network, prediction can be performed directly with it, e.g. by feeding the neighbor information and second color component information of the block to be predicted into the target network, or the neighbor information, second color component information, and coding parameters, and the predicted first color component block is determined directly from the prediction result.
For example, as shown in Fig. 17, given a luma component L and a chroma component C: once the block to be predicted is determined in C, the luma block corresponding to it in L, the neighbor information adjacent to it in the chroma component C, and the coding parameters are input together into the mode selection module. Optionally, the coding parameters may include QP=32; QP is the quantization parameter, the index of the quantization step size. The module makes a decision on all the received information and outputs a signal class, i.e. a selected network; if neural network 3 is selected among neural networks 1 through j through k, prediction is performed with its function to obtain the predicted chroma component, and the prediction result is output, e.g. the 8 pixels 110, 109, 109, 109, 108, 108, 108, 110.
Method six: if at least one target neural network exists, obtaining or determining the prediction result of each target neural network, and performing at least one of the following:
One: aggregating all the prediction results to obtain or predict the corresponding first color component block;
In this embodiment, when the mode selection module finds at least one target neural network, prediction may be performed with all of them to obtain each network's prediction result. For example, the neighbor information and second color component information of the block to be predicted are fed into each target network for prediction to obtain the corresponding results; or the neighbor information, second color component information, and coding parameters are fed into each target network. All the prediction results are then aggregated to obtain the predicted first color component block. Optionally, after aggregation a mathematical operation such as a weighted average may be applied to obtain the predicted first color component block.
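Aggregating several networks' predictions by a weighted average, as described above, can be sketched as follows (the weights are illustrative):

```python
def aggregate(predictions, weights=None):
    """Combine per-network predicted blocks (lists of pixel values)
    into one block by a per-pixel weighted average."""
    if weights is None:
        weights = [1.0 / len(predictions)] * len(predictions)
    total = sum(weights)
    return [round(sum(w * p[i] for w, p in zip(weights, predictions)) / total)
            for i in range(len(predictions[0]))]

# Two candidate chroma predictions combined 3:1.
print(aggregate([[110, 108], [100, 100]], weights=[0.75, 0.25]))  # [108, 106]
```

The weights could themselves be tuned, or replaced by the selection and function-based combination variants described next.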
Two: selecting one of all the prediction results as the predicted first color component block;
In this embodiment, after the prediction result of each target neural network is obtained, the best result among them can be selected directly as the predicted first color component block. Optionally, when there is more than one best result, one may be chosen among them at random, or the best results may be combined by some function calculation to obtain the predicted first color component block.
Three: determining the first color component block according to a function of all the prediction results.
In this embodiment, after the prediction result of each target neural network is obtained, a preset function is used to compute the predicted first color component block; the function may be a neural network model or a conventional mathematical algorithm.
In this embodiment, at least one of the neighbor information, second color component information, and coding parameters of the first color component block to be predicted is input into the mode selection module to determine the target neural network, and prediction is then performed with the target network to obtain the predicted first color component block; since the target network can predict in multiple ways, both the efficiency of predicting the first color component block and the accuracy of color component signal prediction are improved.
Third Embodiment
Referring to Fig. 20, a schematic flowchart of the third embodiment of the processing method of the present application. Based on the above embodiments, in this embodiment the processing method may be applied to a processing device (such as a server or smart terminal) and includes the following steps:
Step S4: obtaining or determining first color component information corresponding to the first color component block;
Step S5: performing prediction according to the first color component information and the target neural network, or according to the first color component information, to obtain or predict the corresponding first color component block.
The processing device first determines a frame of image to be predicted and obtains or determines the first color component block to be predicted in that frame and the first color component information corresponding to it. Optionally, the first color component block may be the block to be predicted. When the first color component block is a chroma component block, the first color component information is chroma information; when it is a luma component block, the first color component information is luma information.
Optionally, the mode selection module selects, among the target neural networks, the one corresponding to the first color component block.
Once the first color component information is obtained, prediction can be performed directly from it to obtain the predicted first color component block. Optionally, when the first color component information is chroma information and the block to be predicted is a chroma component block (such as a U or V component block), the first color component information may be fed into a preset model to obtain the predicted block, after which the color component signal (e.g. color information) in the block is obtained. Alternatively, a look-up table containing at least one piece of color component information and its corresponding color component block may be set in advance; the first color component block is looked up from the table using the first color component information, and the block found is taken as the predicted block.
Optionally, once the first color component information and the trained target neural networks are obtained, the mode selection module may first determine, among the at least one target network, the one corresponding to the block to be predicted, and the first color component information is then fed into that network to obtain the predicted first color component block.
In this embodiment, the first color component information of the first color component block is obtained or determined, and prediction is then performed according to the first color component information and/or the target neural network to obtain the predicted block, so that the color component signal in the first color component block can be obtained, improving the accuracy of color component signal prediction and reducing its complexity.
Fourth Embodiment
Referring to Fig. 18, a schematic flowchart of the fourth embodiment of the processing method of the present application. In this embodiment, the processing method may be applied to a processing device (such as a server or smart terminal) and includes the following steps:
S10: obtaining or determining at least one of second color component information, neighbor information, and coding parameters corresponding to a first color component block;
In this embodiment, the processing device first determines a frame of a video sequence and obtains or determines the first color component block in that frame and at least one of the second color component information, neighbor information, and coding parameters corresponding to it, for subsequent model training. Optionally, the frame may serve as a training image, and the training images may comprise multiple frames.
Optionally, the processing device may store images and videos in advance and select a frame from them, or extract a frame from a video sequence. Alternatively, it receives an image or video input by the user and extracts a frame from it; or it receives an image or video sent by another network device and extracts a frame from it, in which case the processing device establishes a communication connection in advance with the network device on the network side of its mobile communication system, through which the network device delivers the image or video to the terminal device, which thus receives it.
Optionally, the frame includes at least a YUV image, which contains a Y component image, a U component image, and a V component image, i.e. one luma component image and two chroma component images. Optionally, the Y component image is the luma component image, the U component image the chroma blue component image, and the V component image the chroma red component image; the component-block ratio among them may be 4:2:0 or another ratio mode, without limitation. The YUV image is then partitioned to obtain at least one component block, i.e. at least one Y component block, at least one U component block, and at least one V component block.
Optionally, when the first color component block is a chroma component block, e.g. a U component block or a V component block, the second color component information is luma information; optionally, the luma component block at the position corresponding to the first color component block is determined in the luma component image, and its luma information obtained.
Optionally, when the first color component block is a luma block, e.g. a Y component block, the second color component information may be chroma information, such as the chroma information corresponding to a V or U component block; it may be taken from the U component block at the corresponding position in the U component image, obtaining its chroma information, or from the V component block at the corresponding position in the V component image, obtaining its chroma information.
Optionally, the neighbor information includes the first color component information in the first color component blocks adjacent to the first color component block; optionally, these include at least one of the block adjacent above, the block adjacent to the left, and the block at the upper left. For example, when the first color component block is a U component block, its neighbor information may be the known adjacent U component blocks; if an adjacent known U component block contains no chroma information, filling may be performed.
Optionally, it is determined whether the block adjacent above the block to be predicted contains first color component information; if not, it may be filled according to the preset first color component filling rule to obtain the neighbor information of the block to be predicted. Optionally, the filling rule may fill with a preset fixed value, such as the value 128, or with the average of the existing first color component information; the specific manner is not limited here. The same determination and filling apply to the block adjacent to the left of the block to be predicted and to the block at its upper left.
Optionally, the coding parameters include the quantization parameter, the bitrate, and the like.
S20: taking the first color component block as a label, and taking at least one of the second color component information, the neighbor information, and the coding parameters as a data element;
In this embodiment, when the processing device has obtained at least one of the second color component information, neighbor information, and coding parameters, it can take the first color component block as the label and take at least one of the block's corresponding second color component information, neighbor information, and coding parameters as the data element.
S30: determining, according to a mode selection module and the data element, the data subset corresponding to the data element, for training a target neural network that performs color component signal prediction.
In this embodiment, once the data element is determined, it can be input into the mode selection module, which selects the matching data subset among the data subsets according to the element's label and adds the element to that subset, so that the neural network associated with each data subset can later be trained on it. That is, in this embodiment the data subsets are used to train the target neural networks for color component signal prediction; once training is complete, the first color component block to be predicted can be predicted with the target network to obtain the predicted block.
Optionally, there may be one or more data subsets; optionally, each target neural network corresponds to at least one data subset, so that training on the subset yields the trained target network.
For example, when determining the data elements, the intra-prediction module may first reconstruct the luma component, obtaining the luma components of all adopted video sequences, from which the data subsets are built. The process is as follows: for any luma component L, partition it evenly into N luma blocks of size n*m, the i-th denoted l_i, so that L = {l_1, ..., l_N}. Optionally, in the same way, the chroma component C1 corresponding to L is partitioned evenly into N chroma blocks of corresponding size, the i-th denoted C_i (1 ≤ i ≤ N). When the block to be predicted is C_i with neighbor information h_i, the data element corresponding to C_i may include at least one of l_i and h_i. Optionally, input coding parameters such as bitrate and quantization parameters may also be obtained and used as part of the data element.
Optionally, when constructing the data subsets, with (l_i, h_i) as the data element and C_i as the data label, N data pairs (l_i, h_i, c_i) can be generated; or with (l_i, h_i) plus the coding parameters as the data element and C_i as the label. The pairs are then collected into the data set, and the mode selection module assigns each data element in the data set to its corresponding data subset.
Optionally, step S30 of the processing method of the present application may include at least one of the following:
Method one: inputting the data element into the mode selection module, so that the module determines the data subset corresponding to the data element using at least one of the element's second color component information, neighbor information, and coding parameters;
In this embodiment, before the data subset is determined, at least one of the neighbor information, second color component information, and coding parameters corresponding to the first color component block is obtained from the already-reconstructed YUV component blocks as the data element, and that information is then input into the mode selection module, so that the module determines, among the at least one data subset, the one corresponding to the data element according to at least one of the block's neighbor information, second color component information, and coding parameters.
Optionally, the mode selection module may be a selector, which may be a conventional algorithm or a specific neural network model; no limitation is imposed here. For example, when the selector uses mean square variance as its decision criterion: if the pixel-value variance of the input luma block is small (i.e. the pixel values are close), the variance of the chroma block to be predicted can be assumed small as well, so the network that is good at such prediction, together with its corresponding data subset, can be selected among the at least one network, and that subset is taken as the subset for the data element. Optionally, the data subsets may be screened according to a preset screening mode to obtain the subset corresponding to the data element, e.g. selecting the data subset of the target network that minimizes a given metric, such as rate-distortion.
Optionally, for each frame of a preset video sequence, the first color component block and at least one of its corresponding second color component information and neighbor information are obtained or determined, and for each first color component block a data subset is created from the block and at least one of its second color component information and neighbor information; the corresponding coding parameters may also be obtained and included as a member of the created data subset.
Method two: classifying the data element using preset data rules, so as to classify the data element into the corresponding data subset.
In this embodiment, once the processing device obtains the data element, it may also classify it according to preset data rules and store it in the corresponding data subset; optionally, each data subset is associated with at least one target neural network for color component signal prediction. For example, high-detail data elements with small quantization parameters are assigned to the data subset of the network with the larger receptive field. Optionally, the data rules may be set by the user in advance, e.g. on the principle of maximizing the prediction accuracy of the target network corresponding to each data subset.
For example, when the first color component block is a U component block: as shown in Fig. 4 there is a frame of image I in YUV format; its Y component image is shown in Fig. 5, its U component image in Fig. 6, and its V component image in Fig. 7. When the resolution of image I is 832*480, one may set n=m=16 and, as shown in Fig. 8, partition the Y component image into 52*30 blocks and, as shown in Fig. 9, partition the U component image evenly into 52*30 blocks of 8*8; taking the block in the second row and second column as an example, the Y component data is shown in Fig. 10 and the U component data in Fig. 11. When the block to be predicted is a V component block, the operation is the same as for the U component block. In this embodiment the U component block of Fig. 11 may serve as the first color component block; when the block adjacent at its upper left contains no signal, it may be filled according to the preset filling rule, e.g. the filled upper-left neighbor information shown in Fig. 12. The first color component block is then used as the label, and its corresponding neighbor information and corresponding Y component block are stored as a data element in the data subset.
Optionally, after step S30, the processing method of the present application further includes:
step S40: obtaining or determining all the data subsets, and training the neural network corresponding to each data subset according to that subset to obtain the target neural network.
In this embodiment, once the processing device has constructed the data subset corresponding to each neural network, with at least one data element in each subset, it can train each network on the data elements of its subset to obtain the target neural networks for color component signal prediction.
In this embodiment, at least one of the second color component information, neighbor information, and coding parameters of the obtained or determined first color component block is taken as the data element, and the target neural network for color component signal prediction is trained on the data subset corresponding to the element, so that the color component signal predicted by the target network is more accurate.
第五实施例
请参照图19,图19为本申请处理方法第五实施例的流程示意图。在本实施例中,基于上述第四实施例,本申请处理方法还可以包括:
S100:获取或确定第一颜色分量块对应的第二颜色分量信息;
S200:根据所述第二颜色分量信息和/或目标神经网络,预测或者得到对应的第一颜色分量块。
在本实施例中,处理设备先确定待进行预测的一帧图像,并获取或确定一帧图像中待预测的第一颜色分量块,以及与待预测的第一颜色分量块对应的第二颜色分量信息。可选地,本实施例中的第一颜色分量块可以为待预测第一颜色分量块。
可选地,处理设备可以提前存储各个图像和视频,并可以在各个图像中选择一个待进行预测的图像作为一帧图像。或者在视频的视频序列中抽取一帧图像。或者,处理设备接收用户输入的图像或视频,并在图像或视频中抽取一帧图像进行预测。或者,处理设备接收由其它网络设备发送的图像或视频,并在图像或视频中抽取一帧图像进行预测,此时处理设备预先与所处移动通信***网络侧中的网络设备建立通信连接,从而,网络设备即可通过该通信连接向该终端设备下发图像或视频,该终端设备即接收得到图像或视频。
可选地,一帧图像至少包括YUV图像,YUV图像中存在Y分量图像、U分量图像和V分量图像,即存在一个亮度分量图像和两个色度分量图像。可选地,Y分量图像为luma分量图像。U分量图像为chroma blue分量图像。V分量图像为chroma red分量图像。并且Y分量图像、U分量图像和V分量图像三者之间的分量块比例可以为4:2:0,也可以是其他比例模式,在此不做限制。然后对YUV图像进行分割,得到至少一个分量块,即至少一个Y分量块、至少一个U分量块和至少一个V分量块。
可选地,在待预测第一颜色分量块为色度分量块时,如为U分量块时,第二颜色分量信息为亮度信息;如为V分量块时,第二颜色分量信息为亮度信息。可选地,需要确定在亮度分量图像中与待预测第一颜色分量块对应位置的亮度分量块,然后再获取该亮度分量块的亮度信息。
例如,当待预测第一颜色分量块为U分量块时,如图4所示,存在一帧图像I,且图像I是YUV格式,则图像I的Y分量图像如图5所示,U分量图像如图6所示,V分量图像如图7所示。当图像I的分辨率为832*480时,可以设定n=m=16,如图8所示,将图像I的Y分量图像分割为52*30个块,如图9所示,将图像I的U分量图像平均分为52*30 个8*8的块,再图10所示,以第二行第二列的块为例,则Y分量数据如图10所示,U分量数据如图11所示。此外,当待预测第一颜色分量块为V分量块时,其操作方式和前述U分量块的方式相同。
可选地,在待预测第一颜色分量块为亮度块时,如为Y分量块时,第二颜色分量信息可以为色度信息,如V分量块对应的色度信息,或者U分量块对应的色度信息。可选地,第二颜色分量信息可以是在U分量图像中与待预测第一颜色分量块对应位置的U分量块,并获取U分量块中的色度信息。可选地,第二颜色分量信息可以是在V分量图像中与待预测第一颜色分量块对应位置的V分量块,并获取V分量块中的色度信息。
可选地,获取或确定与待预测第一颜色分量块对应的第二颜色分量块中的第二颜色分量信息;
在本实施例中,可以先确定原始的YUV图像,然后再获取或确定YUV图像中的待预测第一颜色分量块,再确定第二颜色分量块,可选地,待预测第一颜色分量块和第二颜色分量块在不同的分量图像上,且待预测第一颜色分量块在原始的YUV图像中对应的位置和第二颜色分量块对应的位置一致。
可选地,当待预测第一颜色分量块为色度分量块时,如U分量块和/或V分量块,则第二颜色分量块可以为与该色度分量块对应的亮度分量块,如Y分量块,并将该对应的亮度分量块中的亮度信息作为第二颜色分量信息。
可选地,当待预测第一颜色分量块为亮度分量块时,如Y分量块,则第二颜色分量块可以为与该亮度分量块对应的色度分量块,如U分量块和/或V分量块,并将该对应的色度分量块中的色度信息作为第二颜色分量信息。
可选地,获取所有的数据子集,根据每个所述数据子集对其对应的神经网络进行训练,得到目标神经网络。
在本实施例中,还可以先训练好目标神经网络,然后再根据目标神经网络来进行后续的预测。而在训练目标神经网络之前,需要先构建好每个神经网络对应的数据子集,然后再根据每个数据子集对其对应的神经网络进行训练,得到目标神经网络。
可选地,可以先在网络设备或自身存储区域中获取待进行神经网络训练的训练图像,或者获取视频序列,将视频序列中的每一帧图像作为训练图像。并且针对训练图像,可以将待预测chroma(色度)信号对应的luma(亮度)信息、待预测chroma信号的邻居信息和编码参数作为一条数据,并以待预测chroma块为该条数据的目标标签,组成数据集中的一条数据元素。然后模式选择模块将数据集中的每一组数据元素划分到最优的数据网络对应的数据子集中。例如将量化参数较小的高细节数据元素划分到感受野较大的神经网络对应的数据子集中。当获取到每个神经网络对应的数据子集后,就可以进行每个神经网络的训练,以得到目标神经网络。可选地,本实施例中的目标神经网络的数量至少为一个。在训练时,将数据子集中的数据元素输入到神经网络中进行训练,直至训练后的神经网络收敛、或达到预期效果。训练神经网络的方式可以按照梯度下降法的方式进行训练,例如,用均方误差或者交叉熵作为损失函数,每次梯度下降法训练为了极小化损失,多次训练之后达到理想的精度,训练结束。还可以采用其他方式进行训练,在此不做限制。
可选地,可以通过模式选择模块在各个目标神经网络中筛选出与第一颜色分量块对应的目标神经网络。可选地,在对各个目标神经网络进行筛选时,可以按照提前设置的筛选模式进行筛选,以得到与第一颜色分量块对应的目标神经网络,例如,选择某一项指标参数最小的目标神经网络作为与第一颜色分量块对应的目标神经网络。指标参数可以是rate-distortion(率失真)等。可选地,当获取到第二颜色分量信息后,就可以直接根据第二颜色分量信息进行预测,得到预测后的第一颜色分量块。可选地,在第二颜色分量信息为亮度信息,待预测第一颜色分量块为色度分量块(如U分量块或者V分量块)时,可以通过将第二颜色分量信息输入到提前设置的模型中进行训练,得到预测后的第一颜色分量块,然后再获取第一颜色分量块中的颜色分量信号(如颜色信息)。还可以是提前设置一个对照表,对照表中设置有至少一个颜色分量信息和与之对应的颜色分量块,根据第二颜色分量信息在对照表中查询得到第一颜色分量块,并将查询得到的第一颜色分量块作为预测后的第一颜色分量块。
Optionally, once the target neural network corresponding to the first color component block is obtained, prediction may be performed directly with it to obtain the predicted first color component block. Optionally, when predicting directly with the target neural network, prediction parameters entered by a user or another terminal may be obtained, and the prediction parameters and/or the second color component information are fed into the target neural network, which outputs the predicted first color component block. Optionally, the prediction parameters may include parameter information related to the first color component block to be predicted, such as the first color component blocks adjacent to it; optionally, "adjacent" covers at least one of: adjacent to the left, above, above-left, below-left, and above-right. Optionally, the target neural network may be a non-linear algorithm or module, such as matrix-weighted intra prediction, and may include at least one of the following: a convolutional neural network, a residual network, a long short-term memory network, a recurrent neural network, a three-dimensional convolutional neural network, a fully connected neural network, and so on.
Optionally, once the second color component information and the trained target neural networks are obtained, the target neural network corresponding to the first color component block to be predicted may first be determined among the at least one target neural network, and the second color component information is then fed into that target neural network for prediction to obtain the predicted first color component block. For example, when the second color component information is luma component information, the luma component information is fed into the trained target neural network, which outputs color component information taken as the color component signal of the predicted first color component block; the same applies when the second color component information is chroma component information.
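A minimal sketch of this inference path, with the networks stubbed out as plain callables: select the target network for the block, then feed it the second color component information. The stub that halves each luma sample merely stands in for a real trained model; all names are assumptions for illustration:

```python
def predict_first_component(second_info, networks, select):
    """Pick the target network for the block, then run it on the
    second color component information to get the predicted block."""
    net = select(networks)
    return net(second_info)

# stub "network": predicts each chroma sample as half the luma sample
nets = [lambda luma: [v // 2 for v in luma]]
pred = predict_first_component([100, 120, 140], nets, lambda ns: ns[0])
# pred == [50, 60, 70]
```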
In this embodiment, the second color component information of the first color component block to be predicted is obtained or determined, and prediction is then performed from the second color component information and/or the target neural network to obtain the predicted first color component block. The color component signal in the first color component block can thus be obtained, improving the accuracy of color component signal prediction while reducing its complexity.
An embodiment of the present application further provides a processing apparatus. Referring to Fig. 21, a functional block diagram of the processing apparatus of the present application, the apparatus is applied to a processing device and includes:
an obtaining module, configured to obtain or determine second color component information;
a prediction module, configured to predict or obtain the corresponding first color component block according to the second color component information and/or a target neural network.
Optionally, the processing apparatus further includes at least one of the following:
a first determining module, configured to obtain or determine the second color component information in the second color component block corresponding to the first color component block to be predicted;
a data network training module, configured to obtain all data subsets and train, on each data subset, the neural network corresponding to it, to obtain the target neural network.
Optionally, the processing apparatus further includes:
a second determining module, configured to obtain or determine at least one of the second color component information, neighbor information, and coding parameters corresponding to the first color component block;
a construction module, configured to take the first color component block as a label and take at least one of the second color component information corresponding to the first color component block, the neighbor information corresponding to the first color component block, and the coding parameters as a data element;
a third determining module, configured to determine, according to a mode selection module and the data element, the data subset corresponding to the data element.
Optionally, the prediction module includes:
a prediction unit, configured to predict or obtain the corresponding first color component block according to the target neural network.
Optionally, before the prediction unit, the apparatus further includes:
an input unit, configured to input at least one of the neighbor information, the second color component information, and the coding parameters corresponding to the first color component block to be predicted into the mode selection module, so that the mode selection module determines the target neural network corresponding to the first color component block.
Optionally, before the input unit, the apparatus further includes:
a padding unit, configured to, if no first color component information exists in a first color component block adjacent to the first color component block, pad that adjacent first color component block with first color component information according to a preset first-color-component padding rule, to obtain the neighbor information corresponding to the first color component block.
Optionally, the prediction unit is configured to perform at least one of the following:
performing prediction according to the target neural network to obtain a third color component signal, predicting a first color component signal from the third color component signal, and determining the first color component block from the first color component signal;
performing prediction according to the target neural network corresponding to a first color component signal to obtain the first color component signal, and determining the first color component block from the first color component signal;
inputting the second color component information and the neighbor information into the target neural network to obtain or predict the corresponding first color component block;
inputting the second color component information, the neighbor information, and the coding parameters into the target neural network to obtain or predict the corresponding first color component block;
if only one target neural network exists, taking the prediction result of that target neural network as the predicted first color component block;
if multiple target neural networks exist, obtaining or determining the prediction result of each target neural network, and performing at least one of the following:
aggregating all the prediction results to obtain or predict the corresponding first color component block;
selecting one of all the prediction results as the first color component block;
determining the first color component block according to a function of all the prediction results.
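The three ways of combining multiple networks' prediction results listed above (aggregate all of them, select one, or apply a function of all) can be sketched as follows; the sample-wise mean merely stands in for whichever aggregation function an implementation actually chooses, and the names are illustrative:

```python
def fuse_predictions(preds, mode="mean"):
    """Combine per-network predictions of one block: keep a single
    result, or take a function (here the sample-wise mean) of all."""
    if mode == "first":
        return preds[0]
    n = len(preds)
    return [sum(samples) / n for samples in zip(*preds)]

# two networks each predict a 2-sample block
p = fuse_predictions([[10, 20], [14, 22]])
# p == [12.0, 21.0]
```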
Optionally, the processing apparatus is further configured to:
obtain or determine first color component information corresponding to the first color component block;
perform prediction according to the first color component information and the target neural network, or according to the first color component information alone, to obtain or predict the corresponding first color component block.
An embodiment of the present application further provides a processing apparatus. Referring to Fig. 22, a functional block diagram of the processing apparatus of the present application, the apparatus is applied to a processing device and includes:
a determining module, configured to obtain or determine at least one of the second color component information, neighbor information, and coding parameters corresponding to a first color component block;
a data element module, configured to take the first color component block as a label and take at least one of the second color component information, the neighbor information, and the coding parameters as a data element;
a training module, configured to determine, according to a mode selection module and the data element, the data subset corresponding to the data element, for training the target neural network used for color component signal prediction.
Optionally, the training module is configured to perform at least one of the following:
inputting the data element into the mode selection module, so that the mode selection module determines the data subset corresponding to the data element using at least one of the second color component information, neighbor information, and coding parameters corresponding to the data element;
classifying the data element using a preset data rule, so as to assign the data element to the corresponding data subset.
Optionally, after the training module, the apparatus further includes:
a data subset training module, configured to obtain or determine all data subsets and train, on each data subset, the neural network corresponding to it, to obtain the target neural network.
Optionally, the processing apparatus further includes:
a color component prediction module, configured to obtain or determine second color component information, and to predict or obtain the corresponding first color component block according to the second color component information and/or the target neural network.
Optionally, the function implementation of each module in the processing apparatus above corresponds to the steps of the processing method embodiments above; their functions and implementation processes are not repeated here one by one.
An embodiment of the present application further provides a processing device, including a memory and a processor, the memory storing a processing program which, when executed by the processor, implements the steps of the processing method in any of the embodiments above.
An embodiment of the present application further provides a storage medium storing a processing program which, when executed by a processor, implements the steps of the processing method in any of the embodiments above.
The embodiments of the processing device and storage medium provided in the present application may include all technical features of any of the processing method embodiments above; the expansions and explanations of the specification are essentially the same as those of the method embodiments above and are not repeated here.
An embodiment of the present application further provides a computer program product including computer program code which, when run on a computer, causes the computer to perform the methods in the various possible implementations above.
An embodiment of the present application further provides a chip, including a memory and a processor, the memory being configured to store a computer program and the processor being configured to call and run the computer program from the memory, so that a device equipped with the chip performs the methods in the various possible implementations above.
It can be understood that the above scenarios are only examples and do not limit the application scenarios of the technical solutions provided in the embodiments of the present application; the technical solutions of the present application may also be applied to other scenarios. For example, as persons of ordinary skill in the art know, with the evolution of system architectures and the emergence of new service scenarios, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
The serial numbers of the above embodiments of the present application are for description only and do not represent the merits of the embodiments. The steps of the methods in the embodiments of the present application may be reordered, combined, and deleted according to actual needs; the units of the devices in the embodiments of the present application may be combined, divided, and deleted according to actual needs.
In the present application, descriptions of the same or similar terms, technical solutions, and/or application scenarios are generally described in detail only on their first occurrence and, for brevity, are generally not repeated thereafter; when understanding the technical solutions of the present application, for the same or similar terms, technical solutions, and/or application scenarios not described in detail later, reference may be made to the earlier detailed descriptions. The description of each embodiment has its own emphasis; for parts not detailed or recorded in one embodiment, reference may be made to the relevant descriptions of other embodiments. The technical features of the technical solutions of the present application may be combined arbitrarily; for brevity, not all possible combinations of the technical features of the above embodiments are described, yet as long as such combinations are not contradictory, they should all be considered within the scope recorded in the present application. Through the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product: the computer software product is stored on a storage medium as above (such as ROM/RAM, magnetic disk, or optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, computer, server, controlled terminal, network device, etc.) to perform the method of each embodiment of the present application.
In the above embodiments, implementation may be wholly or partly by software, hardware, firmware, or any combination thereof. When implemented by software, it may be wholly or partly implemented in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a storage medium or transmitted from one storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired (e.g. coaxial cable, optical fiber, digital subscriber line) or wireless (e.g. infrared, radio, microwave) means. The storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g. floppy disk, storage disk, magnetic tape), an optical medium (e.g. DVD), or a semiconductor medium (e.g. solid state disk (SSD)), etc. The above are only preferred embodiments of the present application and do not thereby limit its patent scope; any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (13)

  1. A processing method, comprising the following steps:
    S1: obtaining or determining second color component information;
    S2: predicting or obtaining a corresponding first color component block according to the second color component information and/or a target neural network.
  2. The method of claim 1, wherein before step S1 the method comprises at least one of the following:
    obtaining or determining the second color component information in a second color component block corresponding to the first color component block;
    obtaining all data subsets, and training, on each data subset, the neural network corresponding to it, to obtain the target neural network.
  3. The method of claim 2, wherein before the obtaining all data subsets, the method further comprises:
    obtaining or determining at least one of second color component information, neighbor information, and coding parameters corresponding to the first color component block;
    taking the first color component block as a label, and taking at least one of the second color component information corresponding to the first color component block, the neighbor information corresponding to the first color component block, and the coding parameters as a data element;
    determining, according to a mode selection module and the data element, a data subset corresponding to the data element.
  4. The method of any one of claims 1 to 3, wherein before the predicting or obtaining the corresponding first color component block according to the target neural network, the method further comprises:
    step S22: inputting at least one of the neighbor information, the second color component information, and the coding parameters corresponding to the first color component block into the mode selection module, so that the mode selection module determines the target neural network corresponding to the first color component block.
  5. The method of claim 4, wherein before step S22 the method further comprises:
    step S21: if no first color component information exists in a first color component block adjacent to the first color component block, padding the adjacent first color component block with first color component information according to a preset first-color-component padding rule, to obtain the neighbor information corresponding to the first color component block.
  6. The method of any one of claims 1 to 3, wherein the predicting or obtaining the corresponding first color component block according to the target neural network comprises at least one of the following:
    performing prediction according to the target neural network to obtain a third color component signal, predicting a first color component signal from the third color component signal, and determining the first color component block from the first color component signal;
    performing prediction according to the target neural network corresponding to a first color component signal to obtain the first color component signal, and determining the first color component block from the first color component signal;
    inputting the second color component information and neighbor information into the target neural network to obtain or predict the corresponding first color component block;
    inputting the second color component information, neighbor information, and coding parameters into the target neural network to obtain or predict the corresponding first color component block;
    if only one target neural network exists, taking the prediction result of the target neural network as the first color component block;
    if at least one target neural network exists, obtaining or determining the prediction result of each target neural network, and performing at least one of the following:
    aggregating all the prediction results to obtain or predict the corresponding first color component block;
    selecting one of all the prediction results as the first color component block;
    determining the first color component block according to a function of all the prediction results.
  7. The method of any one of claims 1 to 3, further comprising:
    step S4: obtaining or determining first color component information corresponding to the first color component block;
    step S5: performing prediction according to the first color component information and the target neural network, or according to the first color component information, to obtain or predict the corresponding first color component block.
  8. A processing method, comprising the following steps:
    S10: obtaining or determining at least one of second color component information, neighbor information, and coding parameters corresponding to a first color component block;
    S20: taking the first color component block as a label, and taking at least one of the second color component information, the neighbor information, and the coding parameters as a data element;
    S30: determining, according to a mode selection module and the data element, a data subset corresponding to the data element, for training a target neural network used for color component signal prediction.
  9. The method of claim 8, wherein step S30 comprises at least one of the following:
    inputting the data element into the mode selection module, so that the mode selection module determines the data subset corresponding to the data element using at least one of the second color component information, neighbor information, and coding parameters corresponding to the data element;
    classifying the data element using a preset data rule, so as to assign the data element to the corresponding data subset.
  10. The method of claim 8, wherein after step S30 the method further comprises:
    step S40: obtaining or determining all data subsets, and training, on each data subset, the neural network corresponding to it, to obtain the target neural network.
  11. The method of any one of claims 8 to 10, further comprising:
    obtaining or determining second color component information;
    predicting or obtaining a corresponding first color component block according to the second color component information and/or a target neural network.
  12. A processing device, comprising a memory and a processor, wherein the memory stores a processing program which, when executed by the processor, implements the steps of the processing method of claim 1 or 8.
  13. A storage medium, wherein the storage medium stores a computer program which, when executed by a processor, implements the steps of the processing method of claim 1 or 8.
PCT/CN2023/113174 2022-11-07 2023-08-15 Processing method, processing device and storage medium WO2024098873A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211382700.2A CN115422986B (zh) 2022-11-07 2022-11-07 Processing method, processing device and storage medium
CN202211382700.2 2022-11-07

Publications (1)

Publication Number Publication Date
WO2024098873A1 true WO2024098873A1 (zh) 2024-05-16

Family

ID=84208102

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/113174 WO2024098873A1 (zh) 2022-11-07 2023-08-15 Processing method, processing device and storage medium

Country Status (2)

Country Link
CN (1) CN115422986B (zh)
WO (1) WO2024098873A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115422986B (zh) 2022-11-07 2023-08-22 深圳传音控股股份有限公司 Processing method, processing device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109842799A * 2017-11-29 2019-06-04 杭州海康威视数字技术股份有限公司 Intra prediction method and apparatus for color components
CN110602491A * 2019-08-30 2019-12-20 中国科学院深圳先进技术研究院 Intra chroma prediction method, apparatus, device, and video encoding/decoding system
US20200252654A1 * 2017-10-12 2020-08-06 Mediatek Inc. Method and Apparatus of Neural Network for Video Coding
CN115422986A * 2022-11-07 2022-12-02 深圳传音控股股份有限公司 Processing method, processing device and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11361506B2 (en) * 2018-04-09 2022-06-14 Dolby Laboratories Licensing Corporation HDR image representations using neural network mappings
EP4254955A3 (en) * 2018-08-09 2023-12-06 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Video image component prediction methods, decoder and encoder
US11322073B2 (en) * 2018-09-21 2022-05-03 Dell Products, Lp Method and apparatus for dynamically optimizing gamma correction for a high dynamic ratio image
CN109816663B * 2018-10-15 2021-04-20 华为技术有限公司 Image processing method, apparatus and device
CN115190312B * 2021-04-02 2024-06-07 西安电子科技大学 Neural-network-based cross-component chroma prediction method and apparatus


Also Published As

Publication number Publication date
CN115422986A (zh) 2022-12-02
CN115422986B (zh) 2023-08-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23887568

Country of ref document: EP

Kind code of ref document: A1