WO2023050423A1 - Image processing method, device and storage medium - Google Patents

Image processing method, device and storage medium

Info

Publication number
WO2023050423A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
image data
image
information
data stream
Prior art date
Application number
PCT/CN2021/122444
Other languages
English (en)
French (fr)
Inventor
王洪伟
Original Assignee
深圳传音控股股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳传音控股股份有限公司 filed Critical 深圳传音控股股份有限公司
Priority to CN202180102138.0A (publication CN117941342A)
Priority to PCT/CN2021/122444 (publication WO2023050423A1)
Publication of WO2023050423A1
Priority to US18/605,699 (publication US20240221127A1)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/76: Television signal recording
    • H04N5/91: Television signal processing therefor
    • H04N5/93: Regeneration of the television signal or of selected parts thereof
    • H04N5/935: Regeneration of digital synchronisation signals
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/21: Indexing scheme for image data processing or generation, in general involving computational photography
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning

Definitions

  • the present application relates to computational photography technology, in particular to an image processing method, device and storage medium.
  • most implementations of computational photography systems are built by modifying an existing camera system.
  • in conceiving and implementing the present application, the inventor found at least the following problems: the image processing efficiency is relatively low, and/or the compatibility and consistency of the computational photography system are poor.
  • embodiments of the present application provide an image processing method, device, and storage medium.
  • the present application provides an image processing method, the method comprising the following steps:
  • S1: Acquire first image data;
  • S2: Determine or generate a target data stream according to a data stream format;
  • S3: Perform image processing on the first image data according to the target data stream.
  • the S3 step includes:
  • Parse the target data stream and perform image processing on the first image data to obtain second image data and feature data of the second image data.
  • the S1 step includes at least one of the following:
  • the feature data of the first image data includes at least one of the following: basic image information, imaging information, and semantic information of the first image data;
  • the acquiring feature data of the first image data includes:
  • the imaging information of the first image data is acquired through the imaging module and/or the auxiliary imaging module of the photography system.
  • the S2 step includes at least one of the following:
  • Each of the data items includes at least one type of feature information
  • Each item of feature information in each data item is arranged in a second specific order.
  • the present application also provides an image processing method, the method comprising the following steps:
  • S10: Acquire first image data according to a preset rule;
  • S20: Determine or generate a target data stream according to the first image data;
  • S30 Perform image processing on the first image data based on the target data stream.
  • the S10 step includes at least one of the following:
  • if the preset rule indicates adding basic image information to the data stream, the basic image information of the first image data is acquired through an imaging module of the photography system;
  • if the preset rule indicates adding imaging information to the data stream, the imaging information of the first image data is acquired through the imaging module and/or an auxiliary imaging module of the photography system;
  • if the preset rule indicates adding at least one kind of semantic information to the data stream, the at least one kind of semantic information of the first image data is acquired.
  • the acquiring of the at least one kind of semantic information of the first image data includes at least one of the following:
  • if the preset rule indicates adding basic semantic information to the data stream, the basic semantic information of the first image data is acquired;
  • if the preset rule indicates adding optional semantic information to the data stream, at least one kind of optional semantic information of the first image data is acquired.
  • the S10 step includes:
  • the preset rule is obtained from an entry parameter of the first interface, and the first image data is obtained according to the preset rule.
  • the S20 step includes:
  • the obtaining of the data required by the algorithm module in response to the data request of the algorithm module to which the data stream flows includes:
  • the data required by the algorithm module is determined or obtained according to the input parameters of the second interface, and transmitted to the algorithm module.
  • the S30 step includes:
  • Parse the target data stream and perform image processing on the first image data to obtain second image data and/or characteristic data of the second image data.
  • the present application also provides an image processing device, including:
  • a data acquisition unit configured to perform step S1: acquire first image data
  • a data stream unit configured to perform step S2: determine or generate a target data stream according to the data stream format
  • An image processing unit configured to perform step S3: perform image processing on the first image data according to the target data stream.
  • the present application also provides an image processing device, including:
  • a data acquisition unit configured to perform step S10: acquire the first image data according to preset rules
  • a data stream unit configured to perform step S20: determine or generate a target data stream according to the first image data
  • the image processing unit is configured to S30: perform image processing on the first image data based on the target data stream.
  • the present application also provides an electronic device, including: a processor and a memory;
  • the memory stores computer-executable instructions
  • the present application also provides a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when the computer-executable instructions are executed by a processor, they are used to implement the image processing method described in any of the above aspects.
  • the present application further provides a computer program product, including a computer program, and when the computer program is executed by a processor, the image processing method described in the above aspect is implemented.
  • the data stream based on the data stream format may contain the first image data and/or the feature data of the first image data; according to the target data stream, image processing can be performed on the first image data, which improves the efficiency of image processing and, when applied to a computational photography system, can improve the compatibility and consistency of the computational photography system.
  • FIG. 1 is a flow chart of an image processing method provided in an embodiment of the present application
  • FIG. 2 is a flow chart of another image processing method provided by the embodiment of the present application.
  • FIG. 3 is a flow chart of another image processing method provided in the embodiment of the present application.
  • FIG. 4 is a flow chart of another image processing method provided in the embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an image processing device provided in an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • first, second, third, etc. may be used herein to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of this document, first information may also be called second information, and similarly, second information may also be called first information.
  • the word "if" as used herein may be interpreted as "when" or "at the time of" or "in response to a determination".
  • the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise.
  • "A, B, C", "A, B or C", or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C". Exceptions to this definition will only arise when combinations of elements, functions, steps or operations are inherently mutually exclusive in some way.
  • the word "if" as used herein may also be interpreted as "when" or "in response to determining" or "in response to detecting".
  • the phrases "if determined" or "if (a stated condition or event) is detected" could be interpreted as "when determined" or "in response to the determination" or "when (a stated condition or event) is detected" or "in response to detection of (a stated condition or event)".
  • step codes such as S10 and S20 are used herein to express the corresponding content more clearly and concisely, and do not constitute a substantive limitation on the order of execution; in specific implementations, S20 may be executed before S10, and so on, and such variations fall within the scope of protection of this application.
  • the image processing method specifically provided by this application is applied to a computational photography system.
  • Most implementations of the computational photography system are based on the transformation of the original camera system.
  • the image data obtained from the camera system includes the captured image data itself. However, it does not include detailed feature information such as sensor information corresponding to the image, whether it has undergone some preprocessing, and image semantic information.
  • during image processing, if multiple pieces of feature information corresponding to the image data need to be obtained, the query interface corresponding to each piece of feature information must be called separately, multiple times.
  • the calling module sets the output format of the image data captured by the underlying camera device through the void setParameters(Camera.Parameters params) interface.
  • the calling module obtains the captured image data itself by calling the application programming interface provided by the camera system (such as Camera.PictureCallback()) according to the imaging control request. To parse the image data, the corresponding query interfaces must be called to obtain the image encoding format, height, width, and other feature information from Camera.Parameters, and the captured image data is then parsed according to this feature information.
  • the calling module may be any functional module for computational photography image processing, and calls the corresponding algorithm module through the application programming interface to realize the function of computational photography image processing.
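  • For illustration only, the following sketch shows this query-interface pattern using the legacy android.hardware.Camera API; the setup code is an assumption of the sketch, and the point is that each piece of feature information requires its own query, while sensor metadata, preprocessing flags, and semantic information have no query interface at all.

```java
import android.graphics.ImageFormat;
import android.hardware.Camera;

public class LegacyCaptureSketch {
    public static void capture() {
        Camera camera = Camera.open();
        Camera.Parameters params = camera.getParameters();
        params.setPictureFormat(ImageFormat.JPEG);   // set output format of captured images
        camera.setParameters(params);
        camera.startPreview();                       // preview must run before takePicture

        camera.takePicture(null, null, new Camera.PictureCallback() {
            @Override
            public void onPictureTaken(byte[] data, Camera cam) {
                // The callback delivers only the image bytes; interpreting them
                // requires separate queries, one per piece of feature information.
                Camera.Parameters p = cam.getParameters();
                int format = p.getPictureFormat();       // image encoding format
                Camera.Size size = p.getPictureSize();   // image width and height
                // Sensor metadata, preprocessing state and semantic information
                // are not available through this interface.
            }
        });
    }
}
```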
  • the data stream based on the data stream format may include the first image data and/or the characteristic data of the first image data.
  • the first image data and/or the feature data of the first image data can be acquired; according to the data stream format, a data stream containing the first image data and/or the feature data of the first image data is generated, so that the subsequent
  • the first image data and/or feature data of the first image data can be obtained by analyzing the data stream, without calling the query interface of each feature data to obtain the feature data, which improves image processing efficiency, and reduces the dependence of the image processing algorithm module on other modules (such as query interface corresponding modules), when applied to the computational photography system, it can improve the compatibility and consistency of the computational photography system.
  • FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present application.
  • the execution subject of the method provided in this embodiment may be an electronic device with an image processing function (such as a computational photography function), for example, a mobile terminal such as a smart phone or a tablet computer, or a personal computer, a computational photography server, etc.; in other implementations, it may also be another electronic device, which is not specifically limited here.
  • FIG. 1 the specific steps of the method are as follows:
  • Step S1 acquiring first image data.
  • the first image data refers to an original image that requires image processing
  • the target image requested by the user can be obtained by performing image processing on the original image
  • the first image data may include: images taken by the imaging module, and/or images acquired from other modules.
  • Step S2. Determine or generate a data stream according to the format of the data stream.
  • the data stream contains first image data.
  • the format of the data stream is preset, and is a format of a data stream including the first image data and/or the feature data of the first image data.
  • each algorithm module and/or function module can assign the first image data to be processed and/or the characteristic data of the first image data to the data stream, so that the data stream can It includes first image data and feature data of the first image data.
  • the first image data and/or feature data of the first image data may be directly extracted from the data stream.
  • Step S3 performing image processing on the first image data according to the data stream.
  • the first image data and/or various characteristic information in the characteristic data of the first image data may be extracted from the data stream.
  • one or more pieces of feature information used in the current image processing can be selected from the feature data, and corresponding image processing is performed on the first image data.
  • the required feature data of the first image data can be obtained directly from the data stream, without calling the query interface of each feature data, which improves the efficiency of image processing, reduces the dependence of the image processing algorithm module on other modules, and improves the compatibility and consistency of the computational photography system.
  • the captured first image data and the feature data of the first image data are acquired; according to the data stream format, a data stream including the first image data and the feature data is generated, so that the data stream includes not only the captured first image data but also the feature data of the first image data; when any image processing is performed on the first image data, the first image data and the feature data of the first image data can be obtained by parsing the data stream, without calling the query interface of each kind of feature data, which improves the efficiency of image processing, reduces the dependence of the image processing algorithm module on other modules (such as the modules corresponding to the query interfaces), and improves the compatibility and consistency of the computational photography system.
  • FIG. 2 is a flow chart of another image processing method provided by the embodiment of the present application.
  • step S3 includes: analyzing the target data stream, and performing image processing on the first image data to obtain the second image data and the feature data of the second image data.
  • Step S201 Acquire first image data according to an imaging control instruction and/or an image acquisition instruction.
  • the first image data refers to an original image that requires image processing
  • the target image requested by the user can be obtained by performing image processing on the original image
  • the first image data may include: images taken by the imaging module, and/or images acquired from other modules.
  • acquiring the first image data includes: acquiring captured image data according to an imaging control instruction; and/or acquiring existing image data according to an image acquisition instruction.
  • the captured image data may be acquired from the camera service according to the imaging control instruction.
  • existing image data may be acquired through an image providing service according to the image acquisition instruction.
  • Step S202 Determine or generate a target data stream according to the format of the data stream.
  • the format of the data stream is preset and is a format of a data stream including the first image data and/or the feature data of the first image data.
  • a target data stream containing the first image data and/or the feature data of the first image data may be determined or generated according to the data stream format.
  • optionally, the target data stream may be determined or generated from at least two data items arranged in sequence in a first specific order.
  • each data item includes at least one type of characteristic information; each item of characteristic information in each data item is arranged in a second specific order.
  • the data stream includes first image data and feature data of the first image data, and the feature data of the first image data includes at least two data items sequentially arranged in a first specific order, Each data item includes one type of characteristic information in the characteristic data, and optionally each type of characteristic information includes at least one item of characteristic information; each item of characteristic information in each data item is arranged in sequence according to the second specific order.
  • the first specific order is a preset arrangement order of the data items, which can be flexibly set according to the needs of the actual application scenario and the number of data items included in the feature data.
  • the specific arrangement order of the first specific order is not limited here.
  • the second specific order refers to the arrangement order of the items of feature information within each data item, which can be flexibly set according to the needs of the actual application scenario and the number of items of feature information contained in the data item.
  • the specific arrangement order of the second specific order is not limited here.
  • the feature data may include basic image information data items, imaging information data items, and semantic information data items, and these three data items are closely arranged in a first specific order.
  • each data item includes multiple attribute values corresponding to feature information.
  • each piece of feature information may also have a corresponding attribute label, which is used to uniquely identify a piece of feature information.
  • the second specific order of each attribute tag can be determined by sorting the attribute tags corresponding to each feature information in the data item.
  • each data item may include a data item flag and a data item length in addition to one or more pieces of feature information.
  • the data item flag can be located at the head of the data item to distinguish different data items.
  • the format of the data item is exemplarily described.
  • the format of the image basic information data item of a 1080x1920 grayscale image can be shown in Table 1 below:
  • the image basic information data items may include: data item flag, data item length, image width, image height, image format, step size, fill width, image frame type, and image frame number.
  • the data item flag can uniquely mark a data item, which is used to distinguish different data items.
  • 0 can be used to represent the basic image information data item
  • 1 can be used to represent the imaging information data item
  • 2 can be used to represent the semantic information data item
  • the data items contained in the feature data in the data stream format can be expanded, and the flags of each data item can be set and adjusted according to the needs of the actual application scenario, which is not specifically limited here.
  • the image format refers to the format of the first image data, for example, 0 indicates a grayscale image, 1 indicates a RAW format, 2 indicates a YUV420 format, and so on.
  • the attribute values corresponding to different image formats can be set according to the needs of actual application scenarios, and are not specifically limited here.
  • the image frame type includes single-frame imaging and multi-frame imaging, 0 can be used to indicate single-frame imaging, and 1 can be used to indicate multi-frame imaging.
  • the attribute values corresponding to each image frame type can be set according to the needs of actual application scenarios, and are not specifically limited here.
  • the image frame number indicates which frame the current first image data is when the image frame type is multi-frame imaging.
  • when the image frame type is single-frame imaging, the image frame number has no practical meaning, and the image frame number can be 1, empty, or another default value (such as 0), which is not specifically limited here.
  • each data item in the data stream is closely arranged, and each feature information in each data item is also closely arranged, so that the memory occupied by the data stream is arranged in a compact structure and occupies less memory.
  • the format of the data stream is simplified, which is convenient for parsing, reduces the complexity of data stream parsing, and improves the efficiency of data stream parsing.
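  • As an illustration only (the application does not fix concrete field widths or byte order), a closely packed basic image information data item for the 1080x1920 grayscale example could be serialized as follows; the 4-byte little-endian fields and the class name are assumptions of this sketch.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical packing of the basic image information data item: the data item
// flag and data item length come first, followed by the feature values arranged
// back to back in the second specific order.
public final class BasicImageInfoItem {
    public static ByteBuffer pack() {
        int fieldCount = 9;                               // flag, length and 7 feature values
        ByteBuffer buf = ByteBuffer.allocate(fieldCount * Integer.BYTES)
                                   .order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(0);                                    // data item flag: 0 = basic image information
        buf.putInt(fieldCount * Integer.BYTES);           // data item length in bytes
        buf.putInt(1080);                                 // image width
        buf.putInt(1920);                                 // image height
        buf.putInt(0);                                    // image format: 0 = grayscale
        buf.putInt(1080);                                 // step size (assumed equal to width, no padding)
        buf.putInt(0);                                    // fill width
        buf.putInt(0);                                    // image frame type: 0 = single-frame imaging
        buf.putInt(1);                                    // image frame number (default for single frame)
        buf.flip();
        return buf;
    }
}
```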
  • the characteristic information may be arranged in a third specific order to determine or generate the target data stream.
  • the feature data may include multiple pieces of feature information, and the multiple pieces of feature information are sequentially arranged in a third specific order in the data stream.
  • optionally, the feature information may not be classified into data items; instead, the attribute values of all the feature information included in the feature data are arranged in a third specific order and closely packed, so that the memory occupied by the data stream has a compact structure and occupies less memory.
  • the third specific order may be set according to requirements of actual application scenarios, which is not specifically limited here.
  • Step S203 acquiring feature data of the first image data, and assigning the feature data to the target data stream.
  • the feature data of the first image data is used to describe basic information, imaging information, semantic information, etc. of the first image data.
  • the feature data of the first image data may also include other related information of the first image data used in the computational photographic image processing process, and this embodiment does not specifically limit the specific content of the feature data.
  • acquiring the feature data of the first image data may be performed before step S202; that is, part of the feature data of the first image data may be acquired before step S202.
  • for example, when the first image data is acquired from the imaging module, the basic image information and/or imaging information of the first image data may also be acquired from the imaging module.
  • the feature data of the first image data can also be obtained, and the feature data of the first image data can be assigned to the data stream
  • according to the data stream format, the target data stream containing the first image data is determined or generated; after any subsequent algorithm module and/or functional module obtains feature data of the first image data, it can assign the feature data of the first image data to the data stream according to the data stream format.
  • the feature data of the first image data may include at least one of the following: basic image information, imaging information, and semantic information of the first image data.
  • assigning the feature data to the target data stream may include at least one of the following:
  • after any algorithm module and/or functional module acquires one or more pieces of feature information of the first image data, it can assign the acquired feature information to the data stream.
  • the acquiring of the feature data of the first image data includes:
  • the basic image information of the first image data may be obtained through an imaging module of the photography system.
  • the basic image information describes the basic attribute information of the first image data
  • the basic image information may include at least one of the following characteristic information of the first image data:
  • Image data length, image width, image height, image format, step size, fill width, image frame type, image frame number.
  • in a computational photography system, the imaging module generates the first image data and is capable of determining the basic attribute information of the image.
  • the basic image information of the first image data can be obtained from the imaging module of the computational photography system, and the basic image information of the first image data can be assigned to the data stream.
  • the basic image information of the first image data may be assigned to the data stream by the imaging module.
  • the imaging module acquires the first image data and the basic image information of the first image data, organizes the basic image information according to the format of the basic image information data items in the data stream format, and assigns them to the data stream.
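  • Continuing the hypothetical packing sketch above, and assuming the image bytes are placed ahead of the feature data items, the imaging module could assign the basic image information to the data stream roughly as follows.

```java
import java.nio.ByteBuffer;

// Hypothetical assignment step: append the basic image information data item
// to the closely packed data stream right after the first image data.
public final class ImagingModuleSketch {
    public static ByteBuffer buildStream(byte[] firstImageData) {
        ByteBuffer item = BasicImageInfoItem.pack();              // from the sketch above
        ByteBuffer stream = ByteBuffer.allocate(firstImageData.length + item.remaining());
        stream.put(firstImageData);                               // first image data
        stream.put(item);                                         // basic image information data item
        stream.flip();
        return stream;
    }
}
```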
  • the imaging information of the first image data may be acquired through an imaging module and/or an auxiliary imaging module of the photography system.
  • the imaging information describes imaging-related attributes of the first image data
  • the imaging information may include at least one of the following characteristic information of the first image data:
  • Sensitivity, exposure, exposure value, focal length, geographic location information.
  • the imaging information of the first image data can be obtained from the imaging module and the auxiliary imaging module of the computational photography system, the imaging information is organized according to the format of the imaging information data item in the data stream format, and assigned to the data stream.
  • the semantic information describes the high-level semantic attributes of the first image data
  • the semantic information may include at least one of the following characteristic information of the first image data:
  • Scene information, image depth, image semantics.
  • an image semantic extraction algorithm may be used to acquire semantic information of the first image data.
  • the semantic information of the first image data is organized according to the format of the semantic information data item in the data stream format, and assigned to the data stream.
  • the feature data of the first image data is obtained, and the feature data is assigned to the data stream.
  • the obtained target data stream includes not only the captured first image data, but also the feature data of the first image data.
  • Step S204 analyzing the target data stream, and performing image processing on the first image data to obtain the second image data and the feature data of the second image data.
  • the target data stream is parsed according to the data stream format to obtain the first image data and/or feature data of the first image data.
  • the target data stream is analyzed according to the data stream format of the data stream to obtain the first image data contained in the target data stream and/or various feature information in the feature data of the first image data.
  • image processing is performed on the first image data to obtain the second image data and the feature data of the second image data.
  • one or more pieces of feature information used in the current image processing can be selected from the feature data, and corresponding image processing can be performed on the first image data to obtain the second image data and the feature data of the second image data.
  • the required feature data of the first image data can be obtained directly from the data stream, without calling the query interface of each feature data, which improves the efficiency of image processing, reduces the dependence of the image processing algorithm module on other modules, and improves the compatibility and consistency of the computational photography system.
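  • A possible shape for such parsing, reusing the packed layout assumed in the earlier sketch (4-byte little-endian fields, flag values 0/1/2 for basic image information, imaging information, and semantic information), is shown below; it is an illustration, not the format mandated by the application.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical parser: walk the closely packed data items and return the one
// whose flag matches the feature category needed by the current algorithm module.
public final class DataStreamParser {
    public static ByteBuffer findItem(byte[] featureData, int wantedFlag) {
        ByteBuffer buf = ByteBuffer.wrap(featureData).order(ByteOrder.LITTLE_ENDIAN);
        while (buf.remaining() >= 2 * Integer.BYTES) {
            int start = buf.position();
            int flag = buf.getInt();          // data item flag (0, 1 or 2)
            int length = buf.getInt();        // total length of this data item in bytes
            if (flag == wantedFlag) {
                ByteBuffer item = buf.duplicate();
                item.position(start);
                item.limit(start + length);
                return item.slice().order(ByteOrder.LITTLE_ENDIAN);
            }
            buf.position(start + length);     // skip to the next closely packed data item
        }
        return null;                          // requested feature category not present
    }
}
```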
  • a data stream format is set, and the data stream based on the data stream format contains the first image data and/or the feature data of the first image data; in this embodiment, according to the data stream format, the acquired first image data can be assigned to the data stream.
  • the feature data of the first image data can be obtained, and the feature data can also be assigned to the data stream.
  • in this way, the data stream contains not only the first image data to be processed but also the feature data of the first image data; when any subsequent algorithm module and/or functional module performs any image processing on the first image data, the first image data and/or the feature data of the first image data can be obtained by parsing the data stream, without calling the query interface of each kind of feature data, which improves the efficiency of image processing and reduces the dependence of the image processing algorithm module on other modules (such as the modules corresponding to the query interfaces); when applied to a computational photography system, this improves the compatibility and consistency of the computational photography system.
  • An embodiment of the present application provides a data stream format; the data stream based on this data stream format contains the first image data and/or the feature data of the first image data, where the feature data of the first image data includes a plurality of data items arranged in sequence, each data item includes one type of feature information in the feature data, and optionally each type of feature information includes one or more items of feature information; the items of feature information in each data item are arranged in sequence according to the second specific order, and each data item can include a data item flag and a data item length.
  • each data item in the data stream is closely packed, and the feature information within each data item is also closely packed, so that the memory occupied by the data stream has a compact structure and occupies less memory.
  • the format of the data stream is simplified, which is convenient for parsing, reduces the complexity of data stream parsing, and improves the efficiency of data stream parsing.
  • FIG. 3 is a flow chart of another image processing method provided by the embodiment of the present application.
  • the execution subject of the method provided in this embodiment may be an electronic device with a computational photography function and/or an image processing function, for example, a mobile terminal such as a smart phone or a tablet computer, or a personal computer, a computational photography server, etc.; in other implementations, it may also be another electronic device, which is not specifically limited here.
  • the specific steps of the method are as follows:
  • Step S10 Obtain the first image data according to preset rules.
  • the first image data refers to an original image that requires image processing
  • the target image requested by the user can be obtained by performing image processing on the original image
  • the first image data may include: images taken by the imaging module, and/or images acquired from other modules.
  • the preset rule is used to indicate that the first image data needs to be acquired, and can also indicate which types of feature data of the first image data need to be acquired, that is, which feature data of the first image data need to be assigned to the data stream.
  • the preset rule may also indicate a specific manner of obtaining the first image data and/or various types of characteristic data.
  • the feature data of the first image data is used to describe basic information, imaging information, semantic information, etc. of the first image data.
  • the feature data of the first image data may also include other relevant information of the first image data used in other image processing processes, and this embodiment does not specifically limit the specific content of the feature data.
  • Step S20 Determine or generate a target data stream according to the first image data.
  • a data stream containing the first image data is determined or generated according to the first image data.
  • the feature data may also be assigned to the data stream.
  • after each type of feature data is acquired, that type of feature data is assigned to the data stream; or, after all the required feature data is acquired first, all the required feature data is assigned to the data stream.
  • Step S30 Perform image processing on the first image data based on the target data stream.
  • any algorithm module or function module can analyze the target data stream, thereby obtaining the first image data and/or the characteristics of the first image data contained in the target data stream data, performing image processing on the first image data.
  • the image processing performed on the first image data may comprise basic image processing and/or computational photographic image processing.
  • when subsequent algorithm modules or functional modules need to process the first image data, they can directly obtain the required first image data from the target data stream and perform image processing on the first image data, thereby improving the efficiency of image processing.
  • FIG. 4 is a flow chart of another image processing method provided by the embodiment of the present application. On the basis of any of the above embodiments, in this embodiment, as shown in Figure 4, the specific steps of the method are as follows:
  • Step S41 acquire the first image data and/or feature data of the first image data.
  • the computational photography system provides a first interface for acquiring image data, and a preset rule is configured in the first interface, and the preset rule is used to indicate that the first image data needs to be acquired , in addition, it may also indicate which types of characteristic data of the first image data need to be acquired, that is, which characteristic data of the first image data need to be assigned to the data stream.
  • a preset rule is acquired from an entry parameter of the first interface, and the first image data is acquired according to the preset rule.
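  • Neither the interface name nor the representation of the preset rule is specified by the application; the following hypothetical sketch only illustrates how such a first interface could carry the rule in its entry parameter.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical "first interface": the preset rule arrives as an entry parameter and
// indicates which feature data of the first image data should be assigned to the
// data stream that the call returns.
interface ImageAcquisitionInterface {
    byte[] acquireImage(PresetRule rule);     // returns the target data stream
}

// Hypothetical preset rule carried in the entry parameter.
final class PresetRule {
    boolean addBasicImageInfo;                // add the basic image information data item
    boolean addImagingInfo;                   // add the imaging information data item
    boolean addBasicSemanticInfo;             // add basic semantic information
    List<String> optionalSemanticInfo = new ArrayList<>();  // e.g. "depth", "scene"
}
```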
  • the first interface is a relatively low-level interface. Based on the first interface, each APP or computational photography system can pre-develop and provide a relatively high-level functional interface that can call the first interface.
  • the functional interface is used to implement the corresponding basic function.
  • the first interface can be called automatically when the function module or algorithm module of the upper-layer APP or the computational photography system calls the function interface.
  • the functional modules may be functional modules such as taking pictures, recording videos, and making video calls, or other functional modules that need to collect images.
  • the functional interface may be, for example, a functional interface for obtaining image depth information.
  • in this step, if the preset rule indicates adding the basic image information to the data stream, the basic image information of the first image data is acquired through the imaging module of the photography system.
  • the basic image information describes the basic attribute information of the first image data
  • the basic image information may include at least one of the following characteristic information of the first image data:
  • Image data length, image width, image height, image format, step size, fill width, image frame type, image frame number.
  • in a computational photography system, the imaging module generates the first image data and is capable of determining the basic attribute information of the image.
  • the basic image information of the first image data can be obtained from the imaging module of the computational photography system, and the basic image information of the first image data can be assigned to the data stream.
  • the basic image information of the first image data may be assigned to the data stream by the imaging module.
  • the imaging module acquires the first image data and the basic image information of the first image data, organizes the basic image information according to the format of the basic image information data items in the data stream format, and assigns them to the data stream.
  • the imaging information of the first image data is acquired from the imaging module and the auxiliary imaging module of the photography system.
  • the imaging information describes imaging-related attributes of the first image data
  • the imaging information may include at least one of the following characteristic information of the first image data:
  • Sensitivity, exposure, exposure value, focal length, geographic location information.
  • the imaging information of the first image data can be obtained from the imaging module and the auxiliary imaging module of the computational photography system, the imaging information is organized according to the format of the imaging information data item in the data stream format, and assigned to the data stream.
  • semantic configuration information may be pre-configured in preset rules, where the semantic configuration information includes the type of semantic information that needs to be added to the data stream.
  • the semantic information describes the high-level semantic attributes of the first image data
  • the semantic information may include at least one of the following characteristic information of the first image data:
  • Scene information, image depth, image semantics.
  • if the preset rule includes semantic configuration information, and the semantic configuration information is used to indicate adding at least one kind of semantic information to the data stream, at least one kind of semantic information of the first image data is acquired.
  • the semantic information can be divided into basic semantic information and optional semantic information.
  • the basic semantic information can include one or more types of semantic information.
  • the preset rule indicates to add basic semantic information to the data stream, then acquire the basic semantic information of the first image data; if the preset rule indicates to add optional semantic information to the data stream, then acquire at least one of the first image data Optional semantic information.
  • optionally, when the preset rule includes first semantic configuration information, all types of basic semantic information must be obtained.
  • that is, if the preset rule includes the first semantic configuration information, and the first semantic configuration information is used to indicate adding basic semantic information to the data stream, the basic semantic information of the first image data is acquired.
  • by configuring second semantic configuration information in the preset rule, one or more kinds of optional semantic information to be obtained can be specified in the second semantic configuration information.
  • only the optional semantic information configured in the second semantic configuration information is obtained; it is not necessary to obtain all the optional semantic information.
  • if the preset rule includes second semantic configuration information, and the second semantic configuration information includes at least one kind of optional semantic information, the at least one kind of optional semantic information of the first image data is acquired.
  • the basic semantic information may include: portrait segmentation information, salient object segmentation, lighting condition information (such as backlight, night scene, etc.) and so on.
  • the optional semantic information may include: depth information, photographing scene information (such as indoor, party, etc.), target segmentation information (such as segmentation information of food, plants, etc.), and the like.
  • which types of semantic information are included in the basic semantic information and which types are included in the optional semantic information can be set and adjusted according to the needs of actual application scenarios, and are not specifically limited here.
  • an image semantic extraction algorithm may be used to acquire the semantic information of the first image data.
  • the characteristic data required by the current algorithm module may also be obtained based on the request of each algorithm module.
  • in response to a data request from the algorithm module to which the data stream flows, the data required by the algorithm module is obtained (which may include the first image data, feature data of the first image data, other data related to the first image data, etc.); after the data required by the algorithm module is obtained, the obtained data is assigned to the data stream to obtain a target data stream containing the first image data and/or the feature data of the first image data.
  • the computational photography system may provide a second interface for acquiring image feature data, and the second interface is a unified interface for acquiring various types of feature data.
  • the algorithm module can call the second interface, and specify the type of characteristic data to be obtained in the entry parameter of the second interface.
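  • Again, the application does not name this interface or its parameters; a hypothetical sketch of such a unified second interface might look like the following, with the requested feature type passed in the entry parameter.

```java
// Hypothetical unified "second interface": any algorithm module names the feature
// category it needs in the entry parameter; the returned data is then assigned to
// the data stream before it continues to flow to later modules.
interface FeatureQueryInterface {
    byte[] getFeatureData(String featureType, byte[] firstImageData);
}

// Example call by an algorithm module (names are assumptions of this sketch):
// byte[] depth = featureQuery.getFeatureData("depth", firstImageData);
```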
  • Step S42 assigning the first image data and/or the feature data of the first image data to the data stream to obtain the target data stream.
  • after acquiring the first image data and/or the feature data of the first image data, the first image data and/or the feature data of the first image data is assigned to the data stream according to the data stream format, so as to obtain the target data stream.
  • the first image data or each type of required feature data of the first image data may be assigned to the data stream as soon as it is acquired; alternatively, all the required data may be acquired first and then assigned to the data stream together.
  • the data stream format of the target data stream in this embodiment can be implemented by using any data stream format provided in step S201 in the embodiment corresponding to FIG. 2 above.
  • Step S43 according to the format of the data stream, analyze the target data stream to obtain the first image data and/or the feature data of the first image data.
  • the target data stream is parsed according to the data stream format to obtain the first image data and/or feature data of the first image data.
  • the data stream format is a preset data stream format including the first image data and/or feature data of the first image data.
  • the data stream format of the target data stream can be implemented by using any data stream format provided in step S201 in the above embodiment corresponding to FIG. 2.
  • Step S44 performing image processing on the first image data according to the first image data and/or the feature data of the first image data.
  • one or more pieces of feature information used in the current image processing can be selected from the feature data, and corresponding image processing can be performed on the first image data to obtain the second image data and the feature data of the second image data.
  • the required feature data of the first image data can be obtained directly from the data stream, without calling the query interface of each feature data, which improves the efficiency of image processing, reduces the dependence of the image processing algorithm module on other modules, and improves the compatibility and consistency of the computational photography system.
  • in this embodiment, the feature data required by the algorithm module is acquired through the second interface, and the acquired feature data is assigned to the data stream, enabling subsequent algorithm modules or functional modules to obtain the required feature data of the first image data directly from the data stream and perform image processing on the first image data, without calling the query interface of each kind of feature data, which improves the efficiency of image processing, reduces the dependence of the image processing algorithm module on other modules, and improves the compatibility and consistency of the computational photography system.
  • FIG. 5 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
  • the image processing apparatus provided in the embodiment of the present application may execute the method flow provided in the embodiment corresponding to FIG. 1 or the embodiment corresponding to FIG. 2 .
  • the image processing device 50 includes: a data acquisition unit 501 , a data flow unit 502 and an image processing unit 503 .
  • the data acquisition unit 501 is configured to perform step S1: acquire first image data.
  • the data stream unit 502 is configured to perform step S2: determine or generate a target data stream according to a data stream format.
  • the image processing unit 503 is configured to perform step S3: perform image processing on the first image data according to the target data stream.
  • the S3 step includes:
  • Parse the target data stream and perform image processing on the first image data to obtain second image data and feature data of the second image data.
  • step S1 includes at least one of the following:
  • Feature data of the first image data is acquired.
  • the feature data of the first image data includes at least one of the following: basic image information, imaging information, and semantic information of the first image data.
  • the acquiring of the feature data of the first image data includes:
  • the imaging information of the first image data is acquired through the imaging module and/or the auxiliary imaging module of the photography system.
  • step S2 includes at least one of the following:
  • The target data stream is determined or generated from at least two data items arranged in sequence in a first specific order;
  • The target data stream is determined or generated by arranging the items of feature information in a third specific order.
  • Each data item includes at least one type of characteristic information
  • Each item of feature information in each data item is arranged in a second specific order.
  • the device provided in the embodiment of the present application may be specifically used to execute the method flow provided in the above-mentioned embodiment corresponding to FIG. 1 or the embodiment corresponding to FIG. 2 , and the specific functions will not be repeated here.
  • the captured first image data and the feature data of the first image data are acquired; according to the data stream format, a data stream including the first image data and the feature data is generated, so that the data stream includes not only the captured first image data but also the feature data of the first image data; when any image processing is performed on the first image data, the first image data and the feature data of the first image data can be obtained by parsing the data stream, without calling the query interface of each kind of feature data, which improves the efficiency of image processing, reduces the dependence of the image processing algorithm module on other modules (such as the modules corresponding to the query interfaces), and improves the compatibility and consistency of the computational photography system.
  • the embodiment of the present application also provides another image processing apparatus that can execute the method flow provided in the embodiment corresponding to FIG. 3 or the embodiment corresponding to FIG. 4 .
  • the image processing device includes: a data acquisition unit, a data flow unit and an image processing unit.
  • the data acquisition unit is configured to perform step S10: acquire the first image data according to a preset rule.
  • the data stream unit is configured to perform step S20: determine or generate a target data stream according to the first image data.
  • the image processing unit is used for S30: performing image processing on the first image data based on the target data stream.
  • step S10 includes at least one of the following:
  • if the preset rule indicates adding basic image information to the data stream, the basic image information of the first image data is obtained through the imaging module of the photography system;
  • if the preset rule indicates adding imaging information to the data stream, the imaging information of the first image data is obtained through the imaging module and/or the auxiliary imaging module of the photography system;
  • if the preset rule indicates adding at least one kind of semantic information to the data stream, at least one kind of semantic information of the first image data is acquired.
  • The acquiring of at least one kind of semantic information of the first image data includes at least one of the following:
  • if the preset rule indicates adding basic semantic information to the data stream, the basic semantic information of the first image data is acquired;
  • if the preset rule indicates adding optional semantic information to the data stream, at least one kind of optional semantic information of the first image data is acquired.
  • step S10 includes:
  • a preset rule is obtained from an entry parameter of the first interface, and the first image data is obtained according to the preset rule.
  • the S20 step includes:
  • the obtaining of the data required by the algorithm module includes:
  • the data required by the algorithm module is determined or obtained according to the input parameters of the second interface, and transmitted to the algorithm module.
  • step S30 includes:
  • Parse the target data stream and perform image processing on the first image data to obtain second image data and/or feature data of the second image data.
  • the device provided in the embodiment of the present application may be specifically used to execute the method flow provided in the above-mentioned embodiment corresponding to FIG. 3 or the embodiment corresponding to FIG. 4 , and specific functions will not be repeated here.
  • the feature data required by the algorithm module is acquired through the second interface, and the acquired feature data is assigned to the data stream, so that subsequent algorithm modules or functional modules can obtain the required feature data of the first image data directly from the data stream and perform image processing on the first image data, without calling the query interface of each kind of feature data, which improves the efficiency of image processing, reduces the dependence of the image processing algorithm module on other modules, and improves the compatibility and consistency of the computational photography system.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device 100 includes: a processor 1001 and a memory 1002 .
  • Memory 1002 stores computer-executable instructions.
  • the processor 1001 executes the computer-executable instructions stored in the memory 1002, so that the processor 1001 performs the method flow provided by any one of the above method embodiments; the specific functions are not repeated here.
  • the captured first image data and the feature data of the first image data are acquired; according to the data stream format, a data stream including the first image data and the feature data is generated, so that the data stream includes not only the captured first image data but also the feature data of the first image data; when any image processing is performed on the first image data, the first image data and the feature data of the first image data can be obtained by parsing the data stream, without calling the query interface of each kind of feature data, which improves the efficiency of image processing, reduces the dependence of the image processing algorithm module on other modules (such as the modules corresponding to the query interfaces), and improves the compatibility and consistency of the computational photography system.
  • the present application also provides a computational photography system, including: an imaging device, and at least one electronic device for implementing the method flow provided by any one of the above method embodiments.
  • the embodiment of the present application also provides an intelligent terminal, the intelligent terminal includes a memory and a processor, and a data processing program is stored in the memory, and when the data processing program is executed by the processor, the steps of the data processing method in any of the foregoing embodiments are implemented.
  • An embodiment of the present application further provides a computer-readable storage medium, on which a data processing program is stored, and when the data processing program is executed by a processor, the steps of the data processing method in any of the foregoing embodiments are implemented.
  • An embodiment of the present application further provides a computer program product, the computer program product includes computer program code, and when the computer program code is run on the computer, the computer is made to execute the methods in the above various possible implementation manners.
  • the embodiment of the present application also provides a chip, including a memory and a processor.
  • the memory is used to store a computer program
  • the processor is used to call and run the computer program from the memory, so that the device installed with the chip executes the methods in the above various possible implementations.
  • Units in the device in the embodiment of the present application may be combined, divided and deleted according to actual needs.
  • the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium as described above (such as ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions to enable a terminal device (which may be a mobile phone, computer, server, controlled terminal, or network device, etc.) to execute the method of each embodiment of the present application.
  • In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof.
  • When implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part.
  • The computer can be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus.
  • Computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line) or wireless means (such as infrared, radio, or microwave).
  • The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, integrating one or more available media.
  • The available media may be magnetic media (e.g., a floppy disk, a storage disk, or a magnetic tape), optical media (e.g., a DVD), or semiconductor media (e.g., a Solid State Disk (SSD)), among others.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Processing Or Creating Images (AREA)

Abstract

本申请提出了一种图像处理方法、设备和存储介质，图像处理方法包括以下步骤：获取第一图像数据；按照数据流格式，确定或生成目标数据流；根据所述目标数据流，对所述第一图像数据进行图像处理。本申请通过设置数据流格式，基于该数据流格式的目标数据流可以包含第一图像数据和/或第一图像数据的特征数据，根据目标数据流可以实现对所述第一图像数据的图像处理，提高了图像处理的效率，提高了计算摄影系统的兼容性和一致性。

Description

图像处理方法、设备和存储介质 技术领域
本申请涉及计算摄影技术,具体涉及一种图像处理方法、设备和存储介质。
背景技术
一些实现中，计算摄影系统的实现方案大多是基于原有的相机（camera）系统改造而来，在构思及实现本申请过程中，发明人发现至少存在如下问题：图像处理效率较为低下，和/或计算摄影系统的兼容性和一致性较差。
前面的叙述在于提供一般的背景信息,并不一定构成现有技术。
发明内容
为了解决上述技术问题,本申请实施例提供一种图像处理方法、设备和存储介质。
第一方面,本申请提供一种图像处理方法,所述方法包括以下步骤:
S1:获取第一图像数据;
S2:按照数据流格式,确定或生成目标数据流;
S3:根据所述目标数据流,对所述第一图像数据进行图像处理。
可选地,所述S3步骤包括:
解析所述目标数据流,对所述第一图像数据进行图像处理,以得到第二图像数据和所述第二图像数据的特征数据。
可选地,所述S1步骤包括以下至少一种:
根据成像控制指令和/或图像获取指令,获取第一图像数据;
获取所述第一图像数据的特征数据。
可选地,所述第一图像数据的特征数据包括以下至少一项:所述第一图像数据的图像基本信息、成像信息、语义信息;
所述获取所述第一图像数据的特征数据之后,包括以下至少一种:
将所述第一图像数据的图像基本信息赋值到所述目标数据流中;
将所述第一图像数据的成像信息赋值到所述目标数据流中;
将所述第一图像数据的语义信息赋值到所述目标数据流中。
可选地,所述获取所述第一图像数据的特征数据,包括:
通过摄影系统的成像模块获取所述第一图像数据的图像基本信息；
和/或，通过摄影系统的成像模块和/或辅助成像模块，获取所述第一图像数据的成像信息。
可选地,所述S2步骤包括以下至少一种:
按照第一特定顺序依次排列的至少两个数据项确定或生成所述目标数据流;
按照第三特定顺序排列各项特征信息确定或生成所述目标数据流。
可选地,还包括以下至少一种:
每一所述数据项包括至少一类特征信息;
每一所述数据项中的各项特征信息按照第二特定顺序排列。
第二方面,本申请还提供一种图像处理方法,所述方法包括以下步骤:
S10:根据预设规则获取第一图像数据;
S20:根据所述第一图像数据确定或生成目标数据流;
S30:基于目标数据流,对所述第一图像数据进行图像处理。
可选地,所述S10步骤包括以下至少一种:
若所述预设规则指示向数据流中添加图像基本信息，则通过摄影系统的成像模块获取所述第一图像数据的图像基本信息；
若所述预设规则指示向数据流中添加成像信息，则通过摄影系统的成像模块和/或辅助成像模块，获取所述第一图像数据的成像信息；
若所述预设规则指示向数据流中添加至少一种语义信息,则获取所述第一图像数据的所述至少一种语义信息。
可选地,所述若所述预设规则指示向数据流中添加至少一种语义信息,则获取所述第一图像数据的所述至少一种语义信息,包括以下至少一种:
若所述预设规则指示向数据流中添加基础语义信息,则获取所述第一图像数据的基础语义信息;
若所述预设规则指示向数据流中添加可选语义信息,则获取所述第一图像数据的至少一种可选语义信息。
可选地,所述S10步骤包括:
响应于对第一接口的调用,从所述第一接口的入口参数中获取所述预设规则,并根据所述预设规则获取所述第一图像数据。
可选地,所述S20步骤包括:
响应于数据流流转到的算法模块的数据请求,获取所述算法模块所需的数据;
在获取到所述算法模块所需的数据之后，将获取到的数据赋值到数据流中，以得到所述目标数据流。
可选地,所述响应于数据流流转到的算法模块的数据请求,获取所述算法模块所需的数据,包括:
响应于任一算法模块对第二接口的调用,根据所述第二接口的输入参数,确定或获取所述算法模块所需的数据,并传输给所述算法模块。
可选地,所述S30步骤包括:
解析所述目标数据流,对所述第一图像数据进行图像处理,以得到第二图像数据和/或所述第二图像数据的特征数据。
第三方面,本申请还提供一种图像处理装置,包括:
数据获取单元,用于执行步骤S1:获取第一图像数据;
数据流单元,用于执行步骤S2:按照数据流格式,确定或生成目标数据流;
图像处理单元,用于执行步骤S3:根据所述目标数据流,对所述第一图像数据进行图像处理。
第四方面,本申请还提供一种图像处理装置,包括:
数据获取单元,用于执行步骤S10:根据预设规则获取第一图像数据;
数据流单元,用于执行步骤S20:根据所述第一图像数据确定或生成目标数据流;
图像处理单元,用于S30:基于目标数据流,对所述第一图像数据进行图像处理。
第五方面,本申请还提供一种电子设备,包括:处理器和存储器;
所述存储器存储计算机执行指令;
所述计算机执行指令被所述处理器执行时实现上述任一方面所述的图像处理方法。
第七方面,本申请还提供一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机执行指令,当所述计算机执行指令被处理器执行时用于实现上述任一方面所述的图像处理方法。
第八方面，本申请还提供一种计算机程序产品，包括计算机程序，该计算机程序被处理器执行时实现上述任一方面所述的图像处理方法。
本申请提供的图像处理方法、设备和存储介质，通过设置特定的数据流格式，基于该数据流格式的数据流中可以包含第一图像数据和/或第一图像数据的特征数据，根据目标数据流可以实现对第一图像数据的图像处理，提高了图像处理的效率，应用于计算摄影系统能够提高计算摄影系统的兼容性和一致性。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并与说明书一起用于解释本申请的原理。为了更清楚地说明本申请实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,对于本领域普通技术人员而言,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例提供的一种图像处理方法流程图;
图2为本申请实施例提供的另一种图像处理方法流程图;
图3为本申请实施例提供的另一种图像处理方法流程图;
图4为本申请实施例提供的另一种图像处理方法流程图;
图5为本申请实施例提供的一种图像处理装置的结构示意图;
图6为本申请实施例提供的一种电子设备的结构示意图。
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。通过上述附图,已示出本申请明确的实施例,后文中将有更详细的描述。这些附图和文字描述并不是为了通过任何方式限制本申请构思的范围,而是通过参考特定实施例为本领域技术人员说明本申请的概念。
具体实施方式
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本申请相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本申请的一些方面相一致的装置和方法的例子。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过 程、方法、物品或者装置中还存在另外的相同要素,此外,本申请不同实施例中具有同样命名的部件、特征、要素可能具有相同含义,也可能具有不同含义,其具体含义需以其在该具体实施例中的解释或者进一步结合该具体实施例中上下文进行确定。
应当理解,尽管在本文可能采用术语第一、第二、第三等来描述各种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本文范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息。取决于语境,如在此所使用的词语"如果"可以被解释成为"在……时"或"当……时"或"响应于确定"。再者,如同在本文中所使用的,单数形式“一”、“一个”和“该”旨在也包括复数形式,除非上下文中有相反的指示。应当进一步理解,术语“包含”、“包括”表明存在所述的特征、步骤、操作、元件、组件、项目、种类、和/或组,但不排除一个或多个其他特征、步骤、操作、元件、组件、项目、种类、和/或组的存在、出现或添加。本申请使用的术语“或”、“和/或”、“包括以下至少一个”等可被解释为包括性的,或意味着任一个或任何组合。例如,“包括以下至少一个:A、B、C”意味着“以下任一个:A;B;C;A和B;A和C;B和C;A和B和C”,再如,“A、B或C”或者“A、B和/或C”意味着“以下任一个:A;B;C;A和B;A和C;B和C;A和B和C”。仅当元件、功能、步骤或操作的组合在某些方式下内在地互相排斥时,才会出现该定义的例外。
应该理解的是,虽然本申请实施例中的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,其可以以其他的顺序执行。而且,图中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,其执行顺序也不必然是依次进行,而是可以与其他步骤或者其他步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
取决于语境,如在此所使用的词语“如果”、“若”可以被解释成为“在……时”或“当……时”或“响应于确定”或“响应于检测”。类似地,取决于语境,短语“如果确定”或“如果检测(陈述的条件或事件)”可以被解释成为“当确定时”或“响应于确定”或“当检测(陈述的条件或事件)时”或“响应于检测(陈述的条件或事件)”。
需要说明的是,在本文中,采用了诸如S10、S20等步骤代号,其目的是为了更清楚简要地表述相应内容,不构成顺序上的实质性限制,本领域技术人员在具体实施时,可能会先执行S20后执行S10等,但这些均应在本申请 的保护范围之内。
本申请具体提供的图像处理方法，应用于计算摄影系统，计算摄影系统的实现方案大多是基于原有的相机（camera）系统改造而来，从相机系统获取的图像数据包括拍摄的图像数据本身，但是不包括图像对应的传感器（sensor）信息、是否经过某些预处理、图像语义信息等详细的特征信息。在进行计算摄影图像处理时，如果需要获取图像数据对应的多项特征信息，则需要通过多次调用各项特征信息对应的查询接口，来获取对应的特征信息。
示例性地，以基于安卓系统的相机系统（Android Camera）框架为例，调用模块通过void setParameters(Camera.Parameters params)接口设置底层相机装置拍摄的图像数据的输出格式。调用模块根据成像控制请求，通过调用相机系统提供的应用程序编程接口（如Camera.PictureCallback()）获得拍摄的图像数据本身。如果想解析该图像数据，则需要调用对应的查询接口从Camera.Parameters中获取图像编码方式、高宽等特征信息，根据这些特征信息解析拍摄的图像数据。可选地，调用模块可以是进行计算摄影图像处理的任一功能模块，通过应用程序编程接口调用对应的算法模块，实现计算摄影图像处理的功能。
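为便于理解上述“先取得图像数据本身、再另行查询特征信息”的调用方式，下面给出一个基于旧版android.hardware.Camera接口的极简Java示意（类名LegacyCameraDemo为示例性命名，预览Surface等细节从略），仅用于说明图像数据与其特征信息在该框架中相互分离的现状，并非本申请方案的实现：
```java
import android.graphics.ImageFormat;
import android.hardware.Camera;

public class LegacyCameraDemo {
    public static void capture() {
        // 打开默认相机，并通过setParameters(Camera.Parameters)设置输出格式
        Camera camera = Camera.open();
        Camera.Parameters params = camera.getParameters();
        params.setPictureFormat(ImageFormat.JPEG);
        camera.setParameters(params);
        // 实际使用时还需设置预览Surface并调用startPreview()，此处从略

        // 通过Camera.PictureCallback()获得拍摄的图像数据本身
        camera.takePicture(null, null, new Camera.PictureCallback() {
            @Override
            public void onPictureTaken(byte[] data, Camera cam) {
                // data只包含图像数据本身；解析时还需另行查询特征信息
                Camera.Parameters p = cam.getParameters();
                int format = p.getPictureFormat();     // 图像编码方式
                Camera.Size size = p.getPictureSize(); // 宽、高
                // sensor信息、是否经过预处理、语义信息等则需要额外的私有查询接口
            }
        });
    }
}
```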
为了能够查询拍摄的图像数据的传感器(sensor)信息、是否经过某些预处理、图像语义信息等特征信息,可以通过扩展用于查询各项特征信息的私有接口,通过调用查询接口可以查询对应的特征信息。但需要各个移动厂商的支持,而且在计算摄影图像处理过程中需要多次调用查询接口,调用繁琐,导致计算摄影图像处理的效率低,且不利于计算摄影算法模块的集成。各个不同厂商扩展私有接口,对图像格式的定义不统一,导致各个模块之间交互的数据流中图像数据格式复杂,解析困难;进行计算摄影图像处理的模块所需要的成像信息、语义信息等都散落在各个模块中,不利于查找。
由于各移动厂商的移动终端硬件架构、成像器件可能不同,计算摄影的成像控制流程、图像数据处理流程,以及相关的控制命令、图像数据的定义和格式等方面也有所差异,扩展私有查询接口会带来兼容性和一致性的问题。
本申请提供的图像处理方法，通过设置特定的数据流格式，基于该数据流格式的数据流中可以包含第一图像数据和/或第一图像数据的特征数据。可选地，可以获取第一图像数据和/或第一图像数据的特征数据；按照数据流格式，生成包含第一图像数据和/或第一图像数据的特征数据的数据流，这样，后续对第一图像数据进行图像处理时，通过解析该数据流即可得到第一图像数据和/或第一图像数据的特征数据，无需调用各项特征数据的查询接口来获取特征数据，提高了图像处理的效率，并且减少了进行图像处理的算法模块对其它模块（如查询接口对应模块）的依赖，应用于计算摄影系统时，能够提高计算摄影系统的兼容性和一致性。
下面以具体地实施例对本申请实施例的技术方案以及本申请的技术方案如何解决上述技术问题进行详细说明。下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例中不再赘述。下面将结合附图,对本申请实施例的实施例进行描述。
图1为本申请实施例提供的一种图像处理方法流程图。本实施例提供的方法的执行主体可以是具有图像处理功能(如计算摄影功能)的电子设备,例如,可以是智能手机、平板电脑等移动终端,也可以是个人电脑、计算摄影服务器等,在其他实施方式中还可以是其他电子设备,此处不作具体限定。如图1所示,该方法具体步骤如下:
步骤S1、获取第一图像数据。
可选地，第一图像数据是指需要进行图像处理的原始图像，通过对原始图像进行图像处理可以得到用户请求的目标图像。
第一图像数据可以包括:成像模块拍摄的图像,和/或,从其他模块获取的图像。
步骤S2、按照数据流格式,确定或生成数据流。
可选地,数据流包含第一图像数据。
本实施例中,数据流格式为预先设置的,是包含第一图像数据和/或第一图像数据的特征数据的数据流的格式。
本实施例中,通过预先设置数据流格式,使得各个算法模块和/或功能模块能够将待处理的第一图像数据和/或第一图像数据的特征数据赋值到数据流中,使得数据流能够包含第一图像数据以及第一图像数据的特征数据。
在获取到第一图像数据的特征数据之后，按照数据流格式，确定或生成包含第一图像数据和/或第一图像数据的特征数据的数据流，从而可以将第一图像数据和/或第一图像数据的特征数据赋值到数据流中。在后续进行图像处理时，可以直接从数据流中提取出第一图像数据和/或第一图像数据的特征数据。
步骤S3、根据数据流,对第一图像数据进行图像处理。
在进行图像处理时,根据数据流的数据流格式,可以从数据流中提取出第一图像数据和/或第一图像数据的特征数据中的各项特征信息。
可选地，根据第一图像数据的特征数据中的各项特征信息，可以从特征数据中选择当前图像处理所用到的一项或多项特征信息，并对第一图像数据进行相应的图像处理。
这样，在进行图像处理时，可以直接从数据流中获取所需要的第一图像数据的特征数据，无需调用各项特征数据的查询接口来获取特征数据，提高了图像处理的效率，并且减少了进行图像处理的算法模块对其它模块（如查询接口对应模块）的依赖，提高了计算摄影系统的兼容性和一致性。
本申请实施例通过设置数据流格式，获取拍摄的第一图像数据，并且获取第一图像数据的特征数据；按照数据流格式，生成包含第一图像数据和特征数据的数据流，该数据流中不仅包含拍摄的第一图像数据，还包含第一图像数据的特征数据；对第一图像数据进行任一图像处理时，通过解析该数据流即可得到第一图像数据和第一图像数据的特征数据，无需调用各项特征数据的查询接口来获取特征数据，提高了图像处理的效率，并且减少了进行图像处理的算法模块对其它模块（如查询接口对应模块）的依赖，提高了计算摄影系统的兼容性和一致性。
图2为本申请实施例提供的另一种图像处理方法流程图。在上述图1对应实施例的基础上,本实施例中,S3步骤包括:解析目标数据流,对第一图像数据进行图像处理,以得到第二图像数据和第二图像数据的特征数据。
如图2所示,该方法具体步骤如下:
步骤S201、根据成像控制指令和/或图像获取指令,获取第一图像数据。
可选地，第一图像数据是指需要进行图像处理的原始图像，通过对原始图像进行图像处理可以得到用户请求的目标图像。
第一图像数据可以包括:成像模块拍摄的图像,和/或,从其他模块获取的图像。
可选地,获取第一图像数据包括:根据成像控制指令,获取拍摄的图像数据;和/或,根据图像获取指令,获取已有的图像数据。
示例性地,可以根据成像控制指令,从相机服务获取拍摄的图像数据。
示例性地,可以根据图像获取指令,通过提供图像服务获取已有的图像数据。
步骤S202、按照数据流格式,确定或生成目标数据流。
可选地,数据流格式为预先设置的,是包含第一图像数据和/或第一图像数据的特征数据的数据流的格式。
该步骤中，在获取到第一图像数据和/或第一图像数据的特征数据之后，可以按照数据流格式，确定或生成包含第一图像数据和/或第一图像数据的特征数据的目标数据流。本实施例的一种可选的实施方式中，上述步骤S2中，可以按照第一特定顺序依次排列的至少两个数据项确定或生成目标数据流。
可选地,每一数据项包括至少一类特征信息;每一数据项中的各项特征信息按照第二特定顺序排列。
在目标数据流的数据流格式中，数据流包括第一图像数据和第一图像数据的特征数据，所述第一图像数据的特征数据包括按照第一特定顺序依次排列的至少两个数据项，每一数据项包括特征数据中的一类特征信息，可选地，每一类特征信息包括至少一项特征信息；每一数据项中的各项特征信息按照第二特定顺序依次排列。
可选地,第一特定顺序是预先设置的各个数据项的排列顺序,可以根据实际应用场景的需要以及特征数据包含的数据项的数量进行灵活地设置,本实施例对于第一特定顺序的具体排列顺序不做具体限定。
第二特定顺序是指每一数据项中各项特征信息的排列顺序,可以根据实际应用场景的需要以及数据项所包含的特征信息的数量进行灵活地设置,本实施例对于第二特定顺序的具体排列顺序不做具体限定。
示例性地,特征数据可以包括图像基本信息数据项、成像信息数据项、语义信息数据项,这三个数据项之间按照第一特定顺序紧密排列。可选地,各数据项包括多个特征信息对应的属性值。
另外,各项特征信息还可以具有对应的属性标签,用于唯一标识一项特征信息。在设置数据项中各项特征信息的第二特定顺序时,可以通过对数据项中各特征信息对应的属性标签进行排序,确定各属性标签的第二特定顺序。
可选地,每一数据项除了包括一项或多项特征信息,还可以包括数据项标志和数据项长度。可选地,数据项标志可以位于数据项的头部,用于区分不同的数据项。通过设置每一数据项的数据项标志和数据项长度,可以非常容易地解析出不同的数据项,降低了数据流解析的复杂度,提高数据流解析的效率。
例如,以图像基本信息数据项的格式为例,对数据项的格式进行示例性地说明。一个1080x1920的灰度图像的图像基本信息数据项的格式可以如下表1所示:
表1
数据项标志｜数据项长度｜图像宽度｜图像高度｜图像格式｜步长｜填充宽度｜图像帧类型｜图像帧编号
如表1所示,图像基本信息数据项可以包括:数据项标志、数据项长度、图像宽度、图像高度、图像格式、步长、填充宽度、图像帧类型和图像帧编号。
可选地,数据项标志能够唯一标志一个数据项,用于区别不同的数据项。例如,可以用0表示图像基本信息数据项,1表示成像信息数据项,2表示语义信息数据项等。数据流格式中特征数据包含的数据项可以扩展,各数据项的标志可以根据实际应用场景的需要进行设置和调整,此处不做具体限定。
图像格式是指第一图像数据的格式,例如0表示灰度图,1表示RAW格式,2表示YUV420格式等等。各不同图像格式对应的属性值可以根据实际应用场景的需要进行设置,此处不做具体限定。
图像帧类型包括单帧成像和多帧成像,可以用0表示单帧成像,1表示多帧成像。各图像帧类型对应的属性值可以根据实际应用场景的需要进行设置,此处不做具体限定。
图像帧编号是指当图像帧类型为多帧图像时,当前第一图像数据为第几帧图像。当图像帧类型为单帧成像时,图像帧编号没有实际意义,图像帧编号可以是1,也可以为空或者为其他默认值(如0),此处不做具体限定。
这一实施方式中,数据流中各个数据项紧密排列,每一数据项中的各项特征信息也紧密排列,从而使得数据流占用的内存排布结构紧凑,占用内存少。同时,数据流格式精简,便于解析,降低数据流解析的复杂度,提高数据流解析的效率。
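作为上述排布方式的一个极简示意，下面用Java按“数据项标志＋数据项长度＋各项特征信息依次紧密排列”的思路打包一个图像基本信息数据项；其中各字段按4字节整数存放、采用小端字节序、类名BasicInfoItemWriter等均为示例性假设，并非对数据流格式的限定：
```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class BasicInfoItemWriter {
    // 示例性的数据项标志：0表示图像基本信息数据项
    static final int ITEM_FLAG_BASIC_INFO = 0;

    /** 按第二特定顺序依次写入图像基本信息数据项（字段宽度、字节序为示例性假设） */
    public static byte[] packBasicInfo(int width, int height, int format,
                                       int stride, int padding,
                                       int frameType, int frameIndex) {
        int itemLength = 4 + 4 + 7 * 4;   // 数据项标志 + 数据项长度 + 7项特征信息
        ByteBuffer buf = ByteBuffer.allocate(itemLength).order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(ITEM_FLAG_BASIC_INFO); // 数据项标志
        buf.putInt(itemLength);           // 数据项长度，便于解析时定位与跳过
        buf.putInt(width);                // 图像宽度
        buf.putInt(height);               // 图像高度
        buf.putInt(format);               // 图像格式：如0灰度图、1 RAW、2 YUV420
        buf.putInt(stride);               // 步长
        buf.putInt(padding);              // 填充宽度
        buf.putInt(frameType);            // 图像帧类型：0单帧成像、1多帧成像
        buf.putInt(frameIndex);           // 图像帧编号
        return buf.array();
    }

    public static void main(String[] args) {
        // 以1080x1920的灰度图像为例（步长、填充等取示例值）
        byte[] item = packBasicInfo(1080, 1920, 0, 1080, 0, 0, 1);
        System.out.println("item length = " + item.length); // 36
    }
}
```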
本实施例的另一种可选的实施方式中，上述步骤S2中，可以按照第三特定顺序排列各项特征信息确定或生成目标数据流。
在目标数据流的数据流格式中,特征数据可以包括多项特征信息,在数据流中多项特征信息按照第三特定顺序依次排列。
这一实施方式中,各项特征信息可以不分类,特征数据包含的所有特征信息的属性值按照第三特定顺序依次排列,各项特征信息紧密排列,从而使得数据流占用的内存排布结构紧凑,占用内存少。
可选地,第三特定顺序可以根据实际应用场景的需要进行设置,此处不做具体限定。
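对于这种不分数据项、将各项特征信息的属性值按第三特定顺序平铺排列的实施方式，可以用如下极简示意说明（字段选取、顺序与宽度同样仅为示例性假设）：
```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class FlatFeatureWriter {
    /** 所有特征信息的属性值按约定的第三特定顺序依次紧密排列（顺序为示例性假设） */
    public static byte[] packFlat(int width, int height, int format,
                                  int iso, int exposure, int sceneType) {
        ByteBuffer buf = ByteBuffer.allocate(6 * 4).order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(width);     // 图像宽度
        buf.putInt(height);    // 图像高度
        buf.putInt(format);    // 图像格式
        buf.putInt(iso);       // 感光度
        buf.putInt(exposure);  // 曝光值
        buf.putInt(sceneType); // 场景信息
        return buf.array();
    }

    public static void main(String[] args) {
        System.out.println(packFlat(1080, 1920, 0, 100, 0, 1).length); // 24
    }
}
```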
步骤S203、获取第一图像数据的特征数据,将特征数据赋值到目标数据流中。
可选地,第一图像数据的特征数据用于描述第一图像数据的基本信息、成像信息、语义信息等。
另外,第一图像数据的特征数据,还可以包括其他计算摄影图像处理过程中用到的第一图像数据的其他相关信息,本实施例对于特征数据的具体内容不做具体限定。
该步骤中，获取第一图像数据的特征数据可以在步骤S202之前进行，也即第一图像数据的部分特征数据可以在步骤S202之前获取。
示例性地,在步骤S202之前,在从成像模块获取到第一图像数据时,还可以从成像模块获取第一图像数据的图像基本信息和/或成像信息。
与此同时或之后，还可以获取第一图像数据的特征数据，并将第一图像数据的特征数据赋值到数据流中。
按照数据流格式,确定或生成包含第一图像数据的目标数据流;后续任一算法模块和/或功能模块在获取到第一图像数据的特征数据之后,可以按照数据流格式,将第一图像数据的特征数据赋值到数据流中。
可选地,第一图像数据的特征数据可以包括以下至少一项:第一图像数据的图像基本信息、成像信息、语义信息。
在获取第一图像数据的特征数据之后,将特征数据赋值到目标数据流中,可以包括以下至少一种:
将第一图像数据的图像基本信息赋值到目标数据流中;
将第一图像数据的成像信息赋值到目标数据流中;
将第一图像数据的语义信息赋值到目标数据流中。
本实施例中,任一算法模块和/或功能模块在获取到第一图像数据的一项或多项特征信息之后,即可将获取到的一项或多项特征信息赋值到数据流中。
可选地,获取第一图像数据的特征数据,包括:
通过摄影系统的成像模块获取第一图像数据的图像基本信息；和/或，通过摄影系统的成像模块和/或辅助成像模块，获取第一图像数据的成像信息。
可选地，获取第一图像数据的图像基本信息时，可以通过摄影系统的成像模块获取第一图像数据的图像基本信息。
可选地,图像基本信息描述了第一图像数据的基本属性信息,图像基本信息可以包括第一图像数据的以下至少一项特征信息:
图像数据长度、图像宽度、图像高度、图像格式、步长、填充宽度、图像帧类型、图像帧编号。
在计算摄影系统中，成像模块生成第一图像数据，并且能够确定图像的基本属性信息。
该步骤中，可以从计算摄影系统的成像模块获取第一图像数据的图像基本信息，并将第一图像数据的图像基本信息赋值到数据流中。
可选地,可以由成像模块将第一图像数据的图像基本信息赋值到数据流中。
示例性地,成像模块获取第一图像数据,同时获取第一图像数据的图像基本信息,将图像基本信息按照数据流格式中图像基本信息数据项的格式组织,并赋值到数据流中。
可选地，获取第一图像数据的成像信息时，可以通过摄影系统的成像模块和/或辅助成像模块，获取第一图像数据的成像信息。
可选地,成像信息描述了第一图像数据的成像相关的属性,成像信息可以包括第一图像数据的以下至少一项特征信息:
感光度、曝光值、焦距、地理位置信息。
该步骤中，可以从计算摄影系统的成像模块和辅助成像模块，获取第一图像数据的成像信息，将成像信息按照数据流格式中成像信息数据项的格式组织，并赋值到数据流中。
可选地,语义信息描述了第一图像数据的高级语义属性,语义信息可以包括第一图像数据的以下至少一项特征信息:
场景信息、图像深度、图像语义。
该步骤中,可以使用图像语义提取算法,获取第一图像数据的语义信息。
可选地,将第一图像数据的语义信息按照数据流格式中语义信息数据项的格式组织,并赋值到数据流中。
通过该步骤,获取第一图像数据的特征数据,将特征数据赋值到数据流中,得到的目标数据流中,不仅包含拍摄的第一图像数据,还包含第一图像数据的特征数据。
步骤S204、解析目标数据流，对第一图像数据进行图像处理，以得到第二图像数据和第二图像数据的特征数据。
在将第一图像数据的特征数据赋值到目标数据流中之后,在后续进行图像处理时,根据数据流格式,解析目标数据流,得到第一图像数据和/或第一图像数据的特征数据。
在进行图像处理时,根据数据流的数据流格式,对目标数据流进行解析,可以得到目标数据流所包含的第一图像数据和/或第一图像数据的特征数据中的各项特征信息。
根据解析得到的第一图像数据和/或第一图像数据的特征数据,对第一图像数据进行图像处理,得到第二图像数据和第二图像数据的特征数据。示例性地,根据第一图像数据的特征数据中的各项特征信息,可以从特征数据中选择当前图像处理所用到的一项或多项特征信息,并对第一图像数据进行相应的图像处理,得到第二图像数据和第二图像数据的特征数据。
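与上文的打包示意相对应，解析目标数据流时可以利用每个数据项头部的“数据项标志＋数据项长度”依次取出所需数据项并跳过暂不需要的数据项。下面是一个极简的Java解析示意（字段宽度、字节序、类名TargetStreamParser等均为示例性假设）：
```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class TargetStreamParser {
    static final int ITEM_FLAG_BASIC_INFO = 0; // 图像基本信息数据项
    static final int ITEM_FLAG_IMAGING    = 1; // 成像信息数据项
    static final int ITEM_FLAG_SEMANTIC   = 2; // 语义信息数据项

    /** 遍历特征数据区：按数据项标志分发，按数据项长度跳过不关心的数据项 */
    public static void parse(byte[] featureData) {
        ByteBuffer buf = ByteBuffer.wrap(featureData).order(ByteOrder.LITTLE_ENDIAN);
        while (buf.remaining() >= 8) {
            int flag = buf.getInt();       // 数据项标志
            int length = buf.getInt();     // 数据项长度（含标志与长度字段本身）
            int bodyLen = length - 8;
            if (flag == ITEM_FLAG_BASIC_INFO) {
                int width = buf.getInt();  // 图像宽度
                int height = buf.getInt(); // 图像高度
                int format = buf.getInt(); // 图像格式
                System.out.println(width + "x" + height + ", format=" + format);
                buf.position(buf.position() + bodyLen - 12); // 跳过其余字段
            } else {
                buf.position(buf.position() + bodyLen);      // 跳过暂不需要的数据项
            }
        }
    }

    public static void main(String[] args) {
        // 构造一个示例性的图像基本信息数据项（1080x1920灰度图）并解析
        ByteBuffer sample = ByteBuffer.allocate(36).order(ByteOrder.LITTLE_ENDIAN);
        sample.putInt(ITEM_FLAG_BASIC_INFO).putInt(36)
              .putInt(1080).putInt(1920).putInt(0)         // 宽、高、格式
              .putInt(1080).putInt(0).putInt(0).putInt(1); // 步长、填充、帧类型、帧编号
        parse(sample.array());
    }
}
```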
这样，在进行图像处理时，可以直接从数据流中获取所需要的第一图像数据的特征数据，无需调用各项特征数据的查询接口来获取特征数据，提高了图像处理的效率，并且减少了进行图像处理的算法模块对其它模块（如查询接口对应模块）的依赖，提高了计算摄影系统的兼容性和一致性。
本申请实施例通过设置数据流格式，基于该数据流格式的数据流中包含第一图像数据和/或第一图像数据的特征数据；本实施例中可以按照数据流格式，将获取的第一图像数据赋值到数据流中，另外还可以获取第一图像数据的特征数据，将特征数据也赋值到数据流中，该数据流中不仅包含待处理的第一图像数据，还包含第一图像数据的特征数据；在后续任一算法模块和/或功能模块对第一图像数据进行任一图像处理时，通过解析该数据流即可得到第一图像数据和/或第一图像数据的特征数据，无需调用各项特征数据的查询接口来获取特征数据，提高了图像处理的效率，并且减少了进行图像处理的算法模块对其它模块（如查询接口对应模块）的依赖，应用于计算摄影系统时，能够提高计算摄影系统的兼容性和一致性。
本申请实施例提供一种数据流格式，基于该数据流格式的数据流中包含第一图像数据和/或第一图像数据的特征数据，所述第一图像数据的特征数据包括按照第一特定顺序依次排列的多个数据项，每一数据项包括特征数据中的一类特征信息，可选地，每一类特征信息包括一项或多项特征信息；每一数据项中的各项特征信息按照第二特定顺序依次排列，每一数据项可以包括数据项标志和数据项长度，这样，数据流中各个数据项紧密排列，每一数据项中的各项特征信息也紧密排列，从而使得数据流占用的内存排布结构紧凑，占用内存少。同时，数据流格式精简，便于解析，降低数据流解析的复杂度，提高数据流解析的效率。
图3为本申请实施例提供的另一种图像处理方法流程图。本实施例提供的方法的执行主体可以是具有计算摄影功能和/或图像处理功能的电子设备,例如,可以是智能手机、平板电脑等移动终端,也可以是个人电脑、计算摄影服务器等,在其他实施方式中还可以是其他电子设备,此处不作具体限定。如图3所示,该方法具体步骤如下:
步骤S10:根据预设规则获取第一图像数据。
可选地，第一图像数据是指需要进行图像处理的原始图像，通过对原始图像进行图像处理可以得到用户请求的目标图像。
第一图像数据可以包括:成像模块拍摄的图像,和/或,从其他模块获取的图像。
本实施例中,预设规则用于指示需要获取第一图像数据,另外还可以指示需要获取的第一图像数据的哪些类型的特征数据,也即需要将第一图像数据的哪些特征数据赋值到数据流中。
另外,预设规则还可以指示第一图像数据和/或各类型的特征数据的具体获取方式。
可选地,第一图像数据的特征数据用于描述第一图像数据的基本信息、成像信息、语义信息等。另外,第一图像数据的特征数据,还可以包括其他图像处理过程中用到的第一图像数据的其他相关信息,本实施例对于特征数据的具体内容不做具体限定。
步骤S20:根据第一图像数据确定或生成目标数据流。
在获取到第一图像数据之后,根据第一图像数据,确定或生成包含第一图像数据的数据流。
可选地,在获取到第一图像数据的特征数据之后,还可以将特征数据赋值到数据流中。
可选地,可以在获取到每一类所需的特征数据之后,将每一类特征数据赋值到数据流中;或者,首先获取所有所需的特征数据之后,将所有所需的特征数据赋值到数据流中。
步骤S30:基于目标数据流,对第一图像数据进行图像处理。
当目标数据流流转到任一算法模块或功能模块时,任一算法模块或功能模块可以解析目标数据流,从而获取到目标数据流所包含的第一图像数据和/或第一图像数据的特征数据,对第一图像数据进行图像处理。
可选地,对第一图像数据进行的图像处理可以包括基本图像处理和/或计算摄影图像处理。
本实施例通过根据预设规则获取第一图像数据,根据第一图像数据确定 或生成目标数据流,基于目标数据流,对第一图像数据进行图像处理,能够使得后续的算法模块或功能模块在需要对第一图像数据进行处理时,直接从目标数据流中获取所需要的第一图像数据,并对第一图像数据进行图像处理,提高了图像处理的效率。
图4为本申请实施例提供的另一种图像处理方法流程图。在上述任一实施例的基础上,本实施例中,如图4所示,该方法具体步骤如下:
步骤S41、根据预设规则,获取第一图像数据和/或第一图像数据的特征数据。
该步骤的一种可选的实施方式中，计算摄影系统提供用于获取图像数据的第一接口，该第一接口中配置了预设规则，该预设规则用于指示需要获取第一图像数据，另外还可以指示需要获取的第一图像数据的哪些类型的特征数据，也即需要将第一图像数据的哪些特征数据赋值到数据流中。
响应于对用于获取图像数据的第一接口的调用,从第一接口的入口参数中获取预设规则,并根据预设规则获取第一图像数据。
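下面用一小段Java代码示意“预设规则作为第一接口的入口参数传入”的调用关系；其中FirstInterface、FeatureKind、acquireFirstImage等名称均为假设，仅是在上述描述下的一个草图，并非本申请方案的规范实现：
```java
import java.util.EnumSet;
import java.util.Set;

/** 示例性的预设规则：指示需要向数据流中添加哪些类型的特征数据（枚举名为假设） */
enum FeatureKind { BASIC_INFO, IMAGING_INFO, BASIC_SEMANTIC, OPTIONAL_SEMANTIC }

/** 示例性的第一接口：预设规则通过入口参数传入（接口名与签名均为假设） */
interface FirstInterface {
    byte[] acquireFirstImage(Set<FeatureKind> presetRule);
}

class FirstInterfaceDemo {
    public static void main(String[] args) {
        FirstInterface api = rule -> {
            // 响应调用：从入口参数得到预设规则，按规则获取第一图像数据及相应特征数据
            if (rule.contains(FeatureKind.BASIC_INFO)) { /* 从成像模块获取图像基本信息 */ }
            if (rule.contains(FeatureKind.IMAGING_INFO)) { /* 从成像/辅助成像模块获取成像信息 */ }
            return new byte[0]; // 此处省略真实的采图与赋值过程
        };
        api.acquireFirstImage(EnumSet.of(FeatureKind.BASIC_INFO, FeatureKind.IMAGING_INFO));
    }
}
```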
另外，第一接口是一个较为底层的接口，在第一接口的基础上，各APP或计算摄影系统可以预先开发和提供能够调用第一接口的较为上层的功能接口，该功能接口用于实现对应的基础功能。上层的APP或计算摄影系统应用的功能模块或算法模块调用功能接口时即可自动调用第一接口。
例如，功能模块可以是拍照、录制视频、视频通话等功能模块，或者还可以是其他需要采集图像的功能模块。功能接口可以是获取图像深度信息的功能接口等。该步骤中，若预设规则指示向数据流中添加图像基本信息，则通过摄影系统的成像模块获取第一图像数据的图像基本信息。
可选地,图像基本信息描述了第一图像数据的基本属性信息,图像基本信息可以包括第一图像数据的以下至少一项特征信息:
图像数据长度、图像宽度、图像高度、图像格式、步长、填充宽度、图像帧类型、图像帧编号。
在计算摄影系统中，成像模块生成第一图像数据，并且能够确定图像的基本属性信息。该步骤中，可以从计算摄影系统的成像模块获取第一图像数据的图像基本信息，并将第一图像数据的图像基本信息赋值到数据流中。
可选地,可以由成像模块将第一图像数据的图像基本信息赋值到数据流中。示例性地,成像模块获取第一图像数据,同时获取第一图像数据的图像基本信息,将图像基本信息按照数据流格式中图像基本信息数据项的格式组织,并赋值到数据流中。
该步骤中，若预设规则指示向数据流中添加成像信息，则从摄影系统的成像模块和辅助成像模块，获取第一图像数据的成像信息。
可选地,成像信息描述了第一图像数据的成像相关的属性,成像信息可以包括第一图像数据的以下至少一项特征信息:
感光度、曝光值、焦距、地理位置信息。
可选地，可以从计算摄影系统的成像模块和辅助成像模块，获取第一图像数据的成像信息，将成像信息按照数据流格式中成像信息数据项的格式组织，并赋值到数据流中。
该步骤中,若预设规则指示向数据流中添加至少一种语义信息,则获取第一图像数据的至少一种语义信息。
可选地,可以预先在预设规则中配置语义配置信息,该语义配置信息包含需要向数据流中添加的语义信息的类型。
可选地,语义信息描述了第一图像数据的高级语义属性,语义信息可以包括第一图像数据的以下至少一项特征信息:
场景信息、图像深度、图像语义。
可选地,若预设规则包含语义配置信息,语义配置信息用于指示向数据流中添加至少一种语义信息,则获取第一图像数据的至少一种语义信息。
可选地,还可以将语义信息分为基础语义信息和可选语义信息两类,可选地,基础语义信息可以包括一种或多种类型的语义信息。
若预设规则指示向数据流中添加基础语义信息,则获取第一图像数据的基础语义信息;若预设规则指示向数据流中添加可选语义信息,则获取第一图像数据的至少一种可选语义信息。
示例性地,预设规则如果包含第一语义配置信息,则必须获取所有类型的基础语义信息。该步骤中,若预设规则包含第一语义配置信息,第一语义配置信息用于指示向数据流中添加基础语义信息,则获取第一图像数据的基础语义信息。
对于可选语义信息,可以通过在预设规则中配置第二语义配置信息,在第二语义配置信息中配置需要获取的一种或多种可选语义信息,在获取语义信息时,只需获取第二语义配置信息中配置的可选语义信息,而不一定获取所有可选语义信息。该步骤中,若预设规则包含第二语义配置信息,第二语义配置信息包括至少一种可选语义信息,则获取第一图像数据的至少一种可选语义信息。
例如,基础语义信息可以包括:人像分割信息、显著目标分割、光照条件信息(如逆光、夜景等)等。可选语义信息可以包括:深度信息、拍照场景信息(如室内、聚会等)、目标分割信息(例如食物、植物等的分割信息)等。
另外,基础语义信息包括哪些类型的语义信息,可选语义信息包括哪些 类型的语义信息,可以根据实际应用场景的需要进行设置和调整,此处不做具体限定。
在获取语义信息时,可以使用图像语义提取算法,获取第一图像数据的语义信息。
该步骤的另一种可选地实施方式中,还可以基于各个算法模块的请求,获取当前算法模块所需的特征数据。
该步骤中,响应于数据流流转到的算法模块的特征数据请求,获取算法模块所需的数据(可以包括第一图像数据、第一图像数据的特征数据、第一图像数据相关的其他数据等);在获取到算法模块所需的数据之后,将获取到的数据赋值到数据流中,以得到包含第一图像数据和/或第一图像数据的特征数据的目标数据流。
可选地，计算摄影系统可以提供用于获取图像特征数据的第二接口，该第二接口为获取各类型特征数据的统一接口。无论需要获取第一图像数据的何种特征数据，算法模块都通过调用第二接口，在第二接口的入口参数指定所需获取的特征数据的类型即可。
该步骤中,响应于任一算法模块对第二接口的调用,根据第二接口的输入参数,确定或获取算法模块所需的数据,并将获取到的数据传输给算法模块。
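第二接口作为获取各类型特征数据的统一入口，可以用如下极简Java示意（类名、类型常量、getFeatureData等名称均为假设，仅为说明按输入参数分发数据的思路）：
```java
import java.util.HashMap;
import java.util.Map;

public class SecondInterfaceDemo {
    static final int FEATURE_BASIC_INFO = 0; // 图像基本信息
    static final int FEATURE_IMAGING    = 1; // 成像信息
    static final int FEATURE_SEMANTIC   = 2; // 语义信息

    // 模拟框架侧已经采集到的各类特征数据
    private final Map<Integer, byte[]> collected = new HashMap<>();

    /** 示例性的统一第二接口：根据输入参数确定算法模块所需的数据并返回（签名为假设） */
    public byte[] getFeatureData(int featureType) {
        return collected.getOrDefault(featureType, new byte[0]);
    }

    public static void main(String[] args) {
        SecondInterfaceDemo api = new SecondInterfaceDemo();
        // 任一算法模块调用第二接口时，只需在入口参数中指定所需特征数据的类型
        byte[] imagingInfo = api.getFeatureData(FEATURE_IMAGING);
        System.out.println("imaging info bytes: " + imagingInfo.length);
    }
}
```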
步骤S42、将第一图像数据和/或第一图像数据的特征数据赋值到数据流中,以得到目标数据流。
在获取到第一图像数据和/或第一图像数据的特征数据之后,按照数据流格式,将第一图像数据和/或第一图像数据的特征数据赋值到数据流中,以得到目标数据流。
可选地，可以在获取到第一图像数据或第一图像数据的每一类所需的特征数据之后，将第一图像数据或第一图像数据的每一类特征数据赋值到数据流中；或者，首先获取所有所需的数据之后，将所有所需的数据赋值到数据流中。
本实施例中目标数据流的数据流格式，可以采用上述图2对应实施例中步骤S202中提供的任意一种数据流格式实现，具体参见步骤S202，此处不再赘述。
步骤S43、根据数据流格式,解析目标数据流,得到第一图像数据和/或第一图像数据的特征数据。
在将第一图像数据的特征数据赋值到目标数据流中之后,在后续进行图像处理时,根据数据流格式,解析目标数据流,得到第一图像数据和/或第一图像数据的特征数据。
可选地,数据流格式为预先设置的包含第一图像数据和/或第一图像数据的特征数据的数据流的格式。
本实施例中，目标数据流的数据流格式，可以采用上述图2对应实施例中步骤S202中提供的任意一种数据流格式实现，具体参见步骤S202，此处不再赘述。
步骤S44、根据第一图像数据和/或第一图像数据的特征数据,对第一图像数据进行图像处理。
示例性地,根据第一图像数据的特征数据中的各项特征信息,可以从特征数据中选择当前图像处理所用到的一项或多项特征信息,并对第一图像数据进行相应的图像处理,得到第二图像数据和第二图像数据的特征数据。
这样，在进行图像处理时，可以直接从数据流中获取所需要的第一图像数据的特征数据，无需调用各项特征数据的查询接口来获取特征数据，提高了图像处理的效率，并且减少了进行图像处理的算法模块对其它模块（如查询接口对应模块）的依赖，提高了计算摄影系统的兼容性和一致性。
本实施例中，通过提供统一的第一接口，该第一接口被调用时获取第一图像数据的特征数据，并将特征数据赋值到数据流中；或者，在各个算法模块需要时调用统一的第二接口获取算法模块所需的特征数据，并将获取到的特征数据赋值到数据流中，能够使得后续的算法模块或功能模块在需要对第一图像数据进行处理时，直接从数据流中获取所需要的第一图像数据的特征数据，并对第一图像数据进行图像处理，无需调用各项特征数据的查询接口来获取特征数据，提高了图像处理的效率，并且减少了进行图像处理的算法模块对其它模块的依赖，提高了计算摄影系统的兼容性和一致性。
图5为本申请实施例提供的一种图像处理装置的结构示意图。本申请实施例提供的图像处理装置可以执行图1对应实施例或图2对应实施例所提供的方法流程。如图5所示,该图像处理装置50包括:数据获取单元501,数据流单元502和图像处理单元503。
可选地,数据获取单元501,用于执行步骤S1:获取第一图像数据。
数据流单元502,用于执行步骤S2:按照数据流格式,确定或生成目标数据流。
图像处理单元503,用于执行步骤S3:根据目标数据流,对第一图像数据进行图像处理。
可选地,S3步骤包括:
解析目标数据流,对第一图像数据进行图像处理,以得到第二图像数据和第二图像数据的特征数据。
可选地,S1步骤包括以下至少一种:
根据成像控制指令和/或图像获取指令,获取第一图像数据;
获取第一图像数据的特征数据。
可选地,第一图像数据的特征数据包括以下至少一项:第一图像数据的图像基本信息、成像信息、语义信息。
获取第一图像数据的特征数据之后,还包括以下至少一种:
将第一图像数据的图像基本信息赋值到目标数据流中;
将第一图像数据的成像信息赋值到目标数据流中;
将第一图像数据的语义信息赋值到目标数据流中。
可选地,获取第一图像数据的特征数据,包括:
通过摄影系统的成像模块获取第一图像数据的图像基本信息；
和/或，通过摄影系统的成像模块和/或辅助成像模块，获取第一图像数据的成像信息。
可选地,S2步骤包括以下至少一种:
按照第一特定顺序依次排列的至少两个数据项确定或生成目标数据流;
按照第三特定顺序排列各项特征信息确定或生成目标数据流。
可选地,还包括以下至少一种:
每一数据项包括至少一类特征信息;
每一数据项中的各项特征信息按照第二特定顺序排列。
本申请实施例提供的装置可以具体用于执行上述图1对应实施例或图2对应实施例所提供的方法流程,具体功能此处不再赘述。
本申请实施例通过设置数据流格式，获取拍摄的第一图像数据，并且获取第一图像数据的特征数据；按照数据流格式，生成包含第一图像数据和特征数据的数据流，该数据流中不仅包含拍摄的第一图像数据，还包含第一图像数据的特征数据；对第一图像数据进行任一图像处理时，通过解析该数据流即可得到第一图像数据和第一图像数据的特征数据，无需调用各项特征数据的查询接口来获取特征数据，提高了图像处理的效率，并且减少了进行图像处理的算法模块对其它模块（如查询接口对应模块）的依赖，提高了计算摄影系统的兼容性和一致性。
本申请实施例还提供另一种图像处理装置,可以执行图3对应实施例或图4对应实施例所提供的方法流程。该图像处理装置包括:数据获取单元,数据流单元和图像处理单元。
可选地,数据获取单元,用于执行步骤S10:根据预设规则获取第一图像数据。
数据流单元,用于执行步骤S20:根据第一图像数据确定或生成目标数据流。
图像处理单元,用于S30:基于目标数据流,对第一图像数据进行图像处理。
可选地,S10步骤包括以下至少一种:
若预设规则指示向数据流中添加图像基本信息，则通过摄影系统的成像模块获取第一图像数据的图像基本信息；
若预设规则指示向数据流中添加成像信息，则通过摄影系统的成像模块和/或辅助成像模块，获取第一图像数据的成像信息；
若预设规则指示向数据流中添加至少一种语义信息,则获取第一图像数据的至少一种语义信息。
可选地,若预设规则指示向数据流中添加至少一种语义信息,则获取第一图像数据的至少一种语义信息,包括以下至少一种:
若预设规则指示向数据流中添加基础语义信息,则获取第一图像数据的基础语义信息;
若预设规则指示向数据流中添加可选语义信息,则获取第一图像数据的至少一种可选语义信息。
可选地,S10步骤包括:
响应于对第一接口的调用,从第一接口的入口参数中获取预设规则,并根据预设规则获取第一图像数据。
可选地,S20步骤包括:
响应于数据流流转到的算法模块的数据请求,获取算法模块所需的数据;
在获取到算法模块所需的数据之后,将获取到的数据赋值到数据流中,以得到目标数据流。
可选地,响应于数据流流转到的算法模块的数据请求,获取算法模块所需的数据,包括:
响应于任一算法模块对第二接口的调用,根据第二接口的输入参数,确定或获取算法模块所需的数据,并传输给算法模块。
可选地,S30步骤包括:
解析目标数据流,对第一图像数据进行图像处理,以得到第二图像数据和/或第二图像数据的特征数据。
本申请实施例提供的装置可以具体用于执行上述图3对应实施例或图4对应实施例所提供的方法流程,具体功能此处不再赘述。
本实施例中，通过提供统一的第一接口，该第一接口被调用时获取第一图像数据的特征数据，并将特征数据赋值到数据流中；或者，在各个算法模块需要时调用统一的第二接口获取算法模块所需的特征数据，并将获取到的特征数据赋值到数据流中，能够使得后续的算法模块或功能模块在需要对第一图像数据进行处理时，直接从数据流中获取所需要的第一图像数据的特征数据，并对第一图像数据进行图像处理，无需调用各项特征数据的查询接口来获取特征数据，提高了图像处理的效率，并且减少了进行图像处理的算法模块对其它模块的依赖，提高了计算摄影系统的兼容性和一致性。
图6为本申请实施例提供的一种电子设备的结构示意图。如图6所示,该电子设备100包括:处理器1001、存储器1002。存储器1002存储计算机执行指令。可选地,处理器1001执行存储器1002存储的计算机执行指令,使得处理器1001执行上述任一方法实施例所提供的方法流程,具体功能此处不再赘述。
本申请实施例通过设置数据流格式，获取拍摄的第一图像数据，并且获取第一图像数据的特征数据；按照数据流格式，生成包含第一图像数据和特征数据的数据流，该数据流中不仅包含拍摄的第一图像数据，还包含第一图像数据的特征数据；对第一图像数据进行任一图像处理时，通过解析该数据流即可得到第一图像数据和第一图像数据的特征数据，无需调用各项特征数据的查询接口来获取特征数据，提高了图像处理的效率，并且减少了进行图像处理的算法模块对其它模块（如查询接口对应模块）的依赖，提高了计算摄影系统的兼容性和一致性。
本申请还提供一种计算摄影系统，包括：成像设备，以及至少一用于实现上述任一方法实施例所提供的方法流程的电子设备。
本申请实施例还提供一种智能终端,智能终端包括存储器、处理器,存储器上存储有数据处理程序,数据处理程序被处理器执行时实现上述任一实施例中的数据处理方法的步骤。
本申请实施例还提供一种计算机可读存储介质,计算机可读存储介质上存储有数据处理程序,数据处理程序被处理器执行时实现上述任一实施例中的数据处理方法的步骤。
在本申请实施例提供的智能终端和计算机可读存储介质的实施例中,可以包含任一上述数据处理方法实施例的全部技术特征,说明书拓展和解释内容与上述方法的各实施例基本相同,在此不再做赘述。
本申请实施例还提供一种计算机程序产品,计算机程序产品包括计算机程序代码,当计算机程序代码在计算机上运行时,使得计算机执行如上各种可能的实施方式中的方法。
本申请实施例还提供一种芯片,包括存储器和处理器,存储器用于存储计算机程序,处理器用于从存储器中调用并运行计算机程序,使得安装有芯片的设备执行如上各种可能的实施方式中的方法。
可以理解，上述场景仅是作为示例，并不构成对于本申请实施例提供的技术方案的应用场景的限定，本申请的技术方案还可应用于其他场景。例如，本领域普通技术人员可知，随着系统架构的演变和新业务场景的出现，本申请实施例提供的技术方案对于类似的技术问题，同样适用。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
本申请实施例方法中的步骤可以根据实际需要进行顺序调整、合并和删减。
本申请实施例设备中的单元可以根据实际需要进行合并、划分和删减。
在本申请中,对于相同或相似的术语概念、技术方案和/或应用场景描述,一般只在第一次出现时进行详细描述,后面再重复出现时,为了简洁,一般未再重复阐述,在理解本申请技术方案等内容时,对于在后未详细描述的相同或相似的术语概念、技术方案和/或应用场景描述等,可以参考其之前的相关详细描述。
在本申请中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。
本申请技术方案的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本申请记载的范围。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在如上的一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,被控终端,或者网络设备等)执行本申请每个实施例的方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时,全部或部分地产生按照本申请实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络,或者其他可编程装置。计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线)或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。计算机可读存储介质可以是计算机能够存取的 任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。可用介质可以是磁性介质,(例如,软盘、存储盘、磁带)、光介质(例如,DVD),或者半导体介质(例如固态存储盘Solid State Disk(SSD))等。
以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。

Claims (16)

  1. 一种图像处理方法,其中,所述方法包括以下步骤:
    S1:获取第一图像数据;
    S2:按照数据流格式,确定或生成目标数据流;
    S3:根据所述目标数据流,对所述第一图像数据进行图像处理。
  2. 根据权利要求1所述的方法,其中,所述S3步骤包括:
    解析所述目标数据流,对所述第一图像数据进行图像处理,以得到第二图像数据和所述第二图像数据的特征数据。
  3. 根据权利要求1所述的方法,其中,所述S1步骤包括以下至少一种:
    根据成像控制指令和/或图像获取指令,获取第一图像数据;
    获取所述第一图像数据的特征数据。
  4. 根据权利要求3所述的方法,其中,所述第一图像数据的特征数据包括以下至少一项:所述第一图像数据的图像基本信息、成像信息、语义信息;
    所述获取所述第一图像数据的特征数据之后,包括以下至少一种:
    将所述第一图像数据的图像基本信息赋值到所述目标数据流中;
    将所述第一图像数据的成像信息赋值到所述目标数据流中;
    将所述第一图像数据的语义信息赋值到所述目标数据流中。
  5. 根据权利要求4所述的方法,其中,所述获取所述第一图像数据的特征数据,包括:
    通过摄影系统的成像模块获取所述第一图像数据的图像基本信息；
    和/或，通过摄影系统的成像模块和/或辅助成像模块，获取所述第一图像数据的成像信息。
  6. 根据权利要求1至5中任一项所述的方法,其中,所述S2步骤包括以下至少一种:
    按照第一特定顺序依次排列的至少两个数据项确定或生成所述目标数据流;
    按照第三特定顺序排列各项特征信息确定或生成所述目标数据流。
  7. 根据权利要求6所述的方法,其中,还包括以下至少一种:
    每一所述数据项包括至少一类特征信息;
    每一所述数据项中的各项特征信息按照第二特定顺序排列。
  8. 一种图像处理方法,其中,所述方法包括以下步骤:
    S10:根据预设规则获取第一图像数据;
    S20:根据所述第一图像数据确定或生成目标数据流;
    S30:基于目标数据流,对所述第一图像数据进行图像处理。
  9. 根据权利要求8所述的方法,其中,所述S10步骤包括以下至少一种:
    若所述预设规则指示向数据流中添加图像基本信息，则通过摄影系统的成像模块获取所述第一图像数据的图像基本信息；
    若所述预设规则指示向数据流中添加成像信息，则通过摄影系统的成像模块和/或辅助成像模块，获取所述第一图像数据的成像信息；
    若所述预设规则指示向数据流中添加至少一种语义信息,则获取所述第一图像数据的所述至少一种语义信息。
  10. 根据权利要求8所述的方法,其中,所述若所述预设规则指示向数据流中添加至少一种语义信息,则获取所述第一图像数据的所述至少一种语义信息,包括以下至少一种:
    若所述预设规则指示向数据流中添加基础语义信息,则获取所述第一图像数据的基础语义信息;
    若所述预设规则指示向数据流中添加可选语义信息,则获取所述第一图像数据的至少一种可选语义信息。
  11. 根据权利要求8至10中任一项所述的方法,其中,所述S10步骤包括:
    响应于对第一接口的调用,从所述第一接口的入口参数中获取所述预设规则,并根据所述预设规则获取所述第一图像数据。
  12. 根据权利要求11所述的方法,其中,所述S20步骤包括:
    响应于数据流流转到的算法模块的数据请求,获取所述算法模块所需的数据;
    在获取到所述算法模块所需的数据之后,将获取到的数据赋值到数据流中,以得到所述目标数据流。
  13. 根据权利要求12所述的方法,其中,所述响应于数据流流转到的算法模块的数据请求,获取所述算法模块所需的数据,包括:
    响应于任一算法模块对第二接口的调用,根据所述第二接口的输入参数,确定或获取所述算法模块所需的数据,并传输给所述算法模块。
  14. 根据权利要求8至10中任一项所述的方法,其中,所述S30步骤包括:
    解析所述目标数据流,对所述第一图像数据进行图像处理,以得到第二图像数据和/或所述第二图像数据的特征数据。
  15. 一种电子设备,其中,包括:处理器和存储器;
    所述存储器存储计算机执行指令;
    所述计算机执行指令被所述处理器执行时实现如权利要求1至14中任一项所述的图像处理方法。
  16. 一种计算机可读存储介质,其中,所述计算机可读存储介质中存储有计算机执行指令,当所述计算机执行指令被处理器执行时用于实现如权利要求1至14中任一项所述的图像处理方法。
PCT/CN2021/122444 2021-09-30 2021-09-30 图像处理方法、设备和存储介质 WO2023050423A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202180102138.0A CN117941342A (zh) 2021-09-30 2021-09-30 图像处理方法、设备和存储介质
PCT/CN2021/122444 WO2023050423A1 (zh) 2021-09-30 2021-09-30 图像处理方法、设备和存储介质
US18/605,699 US20240221127A1 (en) 2021-09-30 2024-03-14 Image processing method, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/122444 WO2023050423A1 (zh) 2021-09-30 2021-09-30 图像处理方法、设备和存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/605,699 Continuation US20240221127A1 (en) 2021-09-30 2024-03-14 Image processing method, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2023050423A1 true WO2023050423A1 (zh) 2023-04-06

Family

ID=85781213

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/122444 WO2023050423A1 (zh) 2021-09-30 2021-09-30 图像处理方法、设备和存储介质

Country Status (3)

Country Link
US (1) US20240221127A1 (zh)
CN (1) CN117941342A (zh)
WO (1) WO2023050423A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101778209A (zh) * 2010-01-28 2010-07-14 无锡蓝天电子有限公司 一种模块化cmos工业相机
CN103051944A (zh) * 2012-12-24 2013-04-17 中国科学院对地观测与数字地球科学中心 遥感卫星数据移动窗口显示***及方法
CN103516987A (zh) * 2013-10-09 2014-01-15 哈尔滨工程大学 一种高速图像采集及实时存储***
CN103986869A (zh) * 2014-05-22 2014-08-13 中国科学院长春光学精密机械与物理研究所 一种高速tdiccd遥感相机图像采集与显示装置
US9681111B1 (en) * 2015-10-22 2017-06-13 Gopro, Inc. Apparatus and methods for embedding metadata into video stream

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101778209A (zh) * 2010-01-28 2010-07-14 无锡蓝天电子有限公司 一种模块化cmos工业相机
CN103051944A (zh) * 2012-12-24 2013-04-17 中国科学院对地观测与数字地球科学中心 遥感卫星数据移动窗口显示***及方法
CN103516987A (zh) * 2013-10-09 2014-01-15 哈尔滨工程大学 一种高速图像采集及实时存储***
CN103986869A (zh) * 2014-05-22 2014-08-13 中国科学院长春光学精密机械与物理研究所 一种高速tdiccd遥感相机图像采集与显示装置
US9681111B1 (en) * 2015-10-22 2017-06-13 Gopro, Inc. Apparatus and methods for embedding metadata into video stream

Also Published As

Publication number Publication date
US20240221127A1 (en) 2024-07-04
CN117941342A (zh) 2024-04-26

Similar Documents

Publication Publication Date Title
US8634603B2 (en) Automatic media sharing via shutter click
US9665598B2 (en) Method and apparatus for storing image file in mobile terminal
US20140331178A1 (en) Digital image tagging apparatuses, systems, and methods
CN105718540B (zh) 数据加载方法和装置
KR101832680B1 (ko) 참석자들에 의한 이벤트 검색
CN106547547B (zh) 数据采集方法及装置
CN111223155B (zh) 图像数据处理方法、装置、计算机设备和存储介质
CN109167939B (zh) 一种自动配文方法、装置及计算机存储介质
US8775678B1 (en) Automated wireless synchronization and transformation
US20110211087A1 (en) Method and apparatus providing for control of a content capturing device with a requesting device to thereby capture a desired content segment
US20210089738A1 (en) Image Tag Generation Method, Server, and Terminal Device
WO2020044094A1 (zh) 资源推荐方法、装置、电子设备以及计算机可读介质
WO2023050423A1 (zh) 图像处理方法、设备和存储介质
CN115098449B (zh) 一种文件清理方法及电子设备
CN110457264B (zh) 会议文件处理方法、装置、设备及计算机可读存储介质
CN115630191A (zh) 基于全动态视频的时空数据集检索方法、装置及存储介质
CN111339367B (zh) 视频处理方法、装置、电子设备及计算机可读存储介质
CN110119380B (zh) 一种可缩放矢量图文件的存储、读取方法及装置
CN114726997A (zh) 图像存储及读取方法、装置、电子设备及存储介质
CN112732457A (zh) 图像传输方法、装置、电子设备和计算机可读介质
CN105975621B (zh) 识别浏览器页面中的搜索引擎的方法及装置
CN114500819B (zh) 拍摄方法、拍摄装置以及计算机可读存储介质
CN112464177B (zh) 一种水印全覆盖方法和装置
CN113835582A (zh) 一种终端设备、信息显示方法和存储介质
CN114328410A (zh) 文件处理方法、装置、计算机设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21958990

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202180102138.0

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE