CN110659581A - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium Download PDF

Info

Publication number
CN110659581A
CN110659581A CN201910807979.6A
Authority
CN
China
Prior art keywords
image
scene
information
processed
type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910807979.6A
Other languages
Chinese (zh)
Other versions
CN110659581B (en)
Inventor
余自强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910807979.6A priority Critical patent/CN110659581B/en
Publication of CN110659581A publication Critical patent/CN110659581A/en
Application granted granted Critical
Publication of CN110659581B publication Critical patent/CN110659581B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image processing method, an image processing device, image processing equipment and a storage medium. The method acquires scene characteristic information of the images to be processed based on a scene recognition model, and obtains scene type images corresponding to different scenes according to the scene characteristic information. Any image to be processed is selected from the scene type images and subjected to image editing; the editing process is saved and then applied to all images to be processed of the same scene type.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to an image processing method, apparatus, device, and storage medium.
Background
As people increasingly prefer to take pictures with mobile phones, post-processing those pictures has become an indispensable task. People often take many pictures over a period of time, for example while traveling or recording daily life. A user frequently spends a lot of time elaborately processing one picture, but the resulting processing parameters cannot be applied to other pictures of the same type, so every picture must be processed individually. Batch processing of photos on mobile terminal equipment therefore remains inefficient.
A single picture can be processed directly with preset parameters, which avoids adjusting the corresponding parameters for every picture, but this approach operates inefficiently on a mobile terminal. When the number of pictures is large, the process becomes very tedious because each picture must be handled in sequence, and pictures in the middle of the batch are easily skipped and left unprocessed.
Pictures can also be processed in batches by manually labeling picture classifications, but this is even less efficient when many pictures are involved. Moreover, a mobile terminal is limited by its screen size and is ill-suited to labeling pictures one by one.
Disclosure of Invention
The invention provides an image processing method and device, aiming to solve the problem of low processing efficiency for multiple images and to achieve the technical effect of processing images in batches.
In one aspect, the present invention provides an image processing method, including:
acquiring image attribute information of a plurality of images to be processed, wherein the image attribute information is city information and time information of image shooting;
classifying the images to be processed according to the image attribute information and a preset scene recognition model to obtain scene type images corresponding to different scenes;
according to the scene type images, selecting any image to be processed from the scene type images of the same scene for image editing processing;
acquiring an image editing process for performing image editing processing on any image to be processed;
and performing, based on the image editing process, image editing processing on scene type images of the same scene as the any image to be processed.
Another aspect provides an image processing apparatus, comprising: an attribute information acquisition module, an image classification module, a single image processing module, an editing process acquisition module, and an image batch processing module;
the attribute information acquisition module is used for acquiring image attribute information of a plurality of images to be processed, wherein the image attribute information is geographic information and time information of image shooting;
the image classification module is used for classifying the images to be processed according to the image attribute information and a preset scene identification model to obtain scene type images corresponding to different scenes;
the single image processing module is used for selecting any image to be processed from the scene type images of the same scene to carry out image editing processing according to the scene type images;
the editing process acquisition module is used for acquiring an image editing process for performing image editing processing on any image to be processed;
the image batch processing module is used for performing, based on the image editing process, image editing processing on the scene type images of the same scene as the any image to be processed.
Another aspect provides an apparatus comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, the at least one instruction, the at least one program, set of codes, or set of instructions being loaded and executed by the processor to implement an image processing method as described above.
Another aspect provides a computer-readable storage medium having stored therein at least one instruction, at least one program, a code set, or a set of instructions, the at least one instruction, the at least one program, the code set, or the set of instructions being loaded and executed by a processor to implement the image processing method described above.
The invention provides an image processing method, an image processing device, image processing equipment and a storage medium. The method acquires scene characteristic information of the images to be processed based on a scene recognition model, and obtains scene type images corresponding to different scenes according to the scene characteristic information. Any image to be processed is selected from the scene type images and subjected to image editing; the editing process is saved and then applied to all images to be processed of the same scene type.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic view of an application scenario of an image processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart of an image processing method according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for classifying images according to image attribute information and a scene recognition model in an image processing method according to an embodiment of the present invention;
fig. 4 is a flowchart of a method for scene recognition based on a scene recognition model of a depth residual error network in an image processing method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a depth residual error network in an image processing method according to an embodiment of the present invention;
fig. 6 is a flowchart of a method for performing scene recognition based on a scene recognition model of a dense connection network in an image processing method according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a dense connection network in an image processing method according to an embodiment of the present invention;
fig. 8 is a flowchart of a method for acquiring an image editing process in an image processing method according to an embodiment of the present invention;
fig. 9 is a flowchart of a method for batch processing images according to an embodiment of the present invention;
fig. 10 is a logic diagram illustrating the image processing method applied to a concrete scene according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 12 is a schematic hardware structure diagram of an apparatus for implementing the method provided by the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely some embodiments of the invention, not all embodiments. All other embodiments obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention fall within the scope of the present invention.
In the description of the present invention, it is to be understood that the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. Moreover, the terms "first," "second," and the like, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein.
Referring to fig. 1, a schematic view of an application scenario of an image processing method according to an embodiment of the present invention is shown, where the application scenario includes a user terminal 110 and a server 120, and a user uses the user terminal to perform image shooting to obtain an image to be processed. The server obtains image attribute information of the image to be processed, classifies the image to be processed according to the image attribute information and the scene recognition model, and obtains scene type images corresponding to different scenes. A user can select any image to be processed from the images of the scene types of the same scene through an interface of a user terminal to perform image editing processing, the image editing processing process is stored, and the image editing processing process is applied to all the images of the scene types of the same scene, so that the images of the same type can be processed in batch.
In an embodiment of the present invention, the user terminal 110 may include a smart phone, a desktop computer, a tablet computer, a notebook computer, a digital assistant, a smart wearable device, and other types of physical devices. The operating system running on the user terminal 110 in this embodiment of the present application may include, but is not limited to, an android system, an IOS system, linux, windows, Mac, and the like.
In the embodiment of the invention, the scene recognition model can be constructed by means of machine learning and performs scene recognition on the input image information. Machine learning is a multi-disciplinary field covering probability theory, statistics, approximation theory, and the analysis of complex algorithms. It uses the computer as a tool to simulate the human learning process, organizing existing content into knowledge structures to improve learning efficiency, so as to acquire new knowledge or skills and to reorganize existing knowledge structures to continuously improve performance.
Referring to fig. 2, an image processing method is shown, which can be applied to a server side, and includes:
s210, obtaining image attribute information of a plurality of images to be processed, wherein the image attribute information is city information and time information of image shooting;
s220, classifying the images to be processed according to the image attribute information and a preset scene recognition model to obtain scene type images corresponding to different scenes;
further, referring to fig. 3, the classifying the to-be-processed image according to the image attribute information and a preset scene recognition model to obtain scene type images corresponding to different scenes includes:
s310, classifying information to be processed based on the city information in the image attribute information to obtain city type images corresponding to different city information;
s320, classifying the city type images based on the time information in the image attribute information to obtain date type images corresponding to different time information;
s330, scene recognition is carried out on the date type image based on the scene recognition model, and scene type images corresponding to different scenes are obtained.
Specifically, on the interface where the user selects images to edit, the shooting date and geographic position of each image to be processed are read from its image attribute information, i.e. the image metadata. Because the geographic position in the image metadata is stored as longitude and latitude, the coordinates are first converted into readable geographic location information, the city is extracted, and the images to be processed are classified by city, with images from different cities belonging to different classifications. For example, if two of the images to be processed were captured in Shanghai and three were captured in Hangzhou, the two images captured in Shanghai are classified into one type and the three images captured in Hangzhou into another.
According to the shooting date, the images to be processed within each city classification can be further classified by date. If, for two adjacent date classifications, the difference between the latest time shot on the earlier day and the earliest time shot on the later day is smaller than a preset time length, for example 12 hours, the contents shot on the two days are considered highly correlated, and the images to be processed of the two date classifications are merged for the subsequent classification by the scene recognition model. For example, suppose the Hangzhou city classification contains images A, B and C, where image A was shot at 9 p.m. on May 1, 2019, image B at 7 a.m. on May 2, 2019, and image C at 1 p.m. on May 4, 2019, and the preset time length is set to 12 hours. Since the interval between the shooting times of images A and B is less than 12 hours, images A and B are classified into one category and image C into another.
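The two-level grouping described above — first by city, then by date, merging adjacent days whose gap is under the preset 12-hour threshold — can be sketched in plain Python. This is only an illustration of the grouping logic; the function name and the (city, timestamp) input format are assumptions, and a real implementation would read both values from the image EXIF metadata:

```python
from datetime import datetime, timedelta
from itertools import groupby

MERGE_GAP = timedelta(hours=12)  # the preset time length from the embodiment

def group_by_city_and_date(photos):
    """photos: iterable of (city, shooting_datetime) pairs.
    Returns {city: [group, ...]}, each group a list of timestamps."""
    by_city = {}
    for city, ts in photos:
        by_city.setdefault(city, []).append(ts)
    result = {}
    for city, stamps in by_city.items():
        stamps.sort()
        # first split by calendar date ...
        date_groups = [list(g) for _, g in groupby(stamps, key=lambda t: t.date())]
        # ... then merge adjacent days whose gap is below the threshold
        merged = [date_groups[0]]
        for grp in date_groups[1:]:
            if grp[0] - merged[-1][-1] < MERGE_GAP:
                merged[-1].extend(grp)   # highly correlated days: one class
            else:
                merged.append(grp)
        result[city] = merged
    return result
```

With the A/B/C example from the description (9 p.m. May 1, 7 a.m. May 2, 1 p.m. May 4), images A and B fall into one group and image C into another.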
The scene of each picture is identified by loading the picture scene recognition model: the features of the images to be processed under each date classification are extracted by the feature extraction network to obtain scene feature information, and the images are further classified according to this scene feature information. For example, the images to be processed can be divided according to their content into types such as food, people, buildings, seaside, night, performances, scenery, vehicles, indoor, animals, and so on.
Through classification processing, the images to be processed under the same type have the same scene, the difference between the images under the same type is reduced, and the subsequent batch processing of the images is facilitated.
Further, referring to fig. 4, the performing scene recognition on the date type image based on the scene recognition model to obtain scene type images corresponding to different scenes includes:
s410, performing maximum pooling on the date type image to obtain pooled image information;
s420, inputting the pooled image information into a plurality of residual convolution layers for convolution calculation;
s430, extracting the characteristics of residual error information between convolution output information and the pooled image information, and outputting basic characteristic information of the date type image;
s440, performing average pooling on the basic feature information to obtain pooled feature information;
s450, performing dimension reduction on the pooled feature information to obtain scene feature information of the date type image;
and S460, carrying out scene identification on the date type image according to the scene characteristic information to obtain a scene type image.
Specifically, the scene recognition model may be obtained by training a deep residual network. The deep residual network may be ResNet-101, a deep residual network comprising 101 neural network layers. Referring to fig. 5, fig. 5 is a structural schematic diagram of ResNet. In ResNet-101, gradient propagation is promoted by identity mappings: the first layer is a pooling layer that performs max pooling on the input images to be processed under a date classification, and layers 2 to 100 form residual convolution networks that perform feature extraction on those images. During feature extraction, the part subject to feature learning is the difference between the convolution output information of the deep residual convolution network and the input original information. In the deep residual network applied to scene recognition, the input images under a date classification, i.e. the date type images, undergo feature extraction to produce output information; at the same time the input date type image can be passed directly to the subsequent convolutional layer, and the output information is added to the input as the input of the next residual layer. This sum can again be added to the output of the next residual layer, and so on, so that in the deep residual network the input date type image can be propagated to the last layer. During scene feature learning, the basic feature information of the date type image is obtained by convolving the difference between the output information and the input date type image.
In the 101st layer of the network, the basic feature information is average-pooled and dimension-reduced, then normalized by a softmax function to obtain the output scene feature information. The date type image is classified according to this scene feature information, thereby determining its scene type.
Scene recognition based on a deep residual network preserves the integrity of the date type image, so that the complete image can still contribute to feature extraction in the last feature extraction layer, avoiding information loss to a certain extent. At the same time, because the input date type image is passed directly to the output, the whole network only needs to learn the difference between input and output, which simplifies the learning target and reduces the training difficulty.
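The core idea of steps S410-S460 — the branch learns only the residual F(x), which is then added back to the identity-mapped input — can be illustrated with a toy numpy sketch. A shape-preserving linear map stands in for the real stacked 2-D convolutions, and all names here are illustrative:

```python
import numpy as np

def branch(x, w):
    # stand-in for the residual branch: a shape-preserving linear map with
    # a ReLU; the real network uses stacked 2-D convolutions instead
    return np.maximum(w @ x, 0.0)

def residual_block(x, w):
    # y = F(x) + x: the block only has to learn the difference y - x, and
    # the identity path lets the input (and gradients) pass straight through
    return branch(x, w) + x
```

When the branch weights are zero, the block reduces exactly to the identity mapping, which is what eases gradient propagation through 101 layers.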
Alternatively, referring to fig. 6, performing scene recognition on the date type image based on a scene recognition model, and obtaining the scene type image includes:
s610, pooling the date type images to obtain pooled image information;
s620, inputting the pooled image information into a plurality of dense convolution layers and a plurality of conversion layers for convolution calculation;
s630, performing feature extraction on the convolution output information and the splicing information of the pooled image information, and outputting the basic feature information of the date type image;
s640, reducing the dimension of the basic characteristic information, and outputting scene characteristic information of the date type image;
s650, according to the scene characteristic information, carrying out scene recognition on the date type image to obtain a scene type image.
Specifically, the scene recognition model can be obtained through dense connection network (DenseNet) training. Referring to fig. 7, fig. 7 is a structural example of a DenseNet, in which the input of each dense convolutional layer is related to the outputs of all previous layers, and a conversion layer capable of performing a nonlinear transformation is provided after each dense convolutional layer so that the feature maps of every dense convolutional layer keep the same size. The conversion layer also performs feature dimension reduction, so that the combined feature information does not place a large burden on the whole DenseNet. Let the input of the DenseNet be x0, the output features of the i-th layer be xi, and the nonlinear transformation of the i-th layer be Hi; then the output of the i-th layer is xi = Hi([x0, x1, ..., xi-1]), where [x0, x1, ..., xi-1] denotes the combination, along the channel dimension, of the output feature information of all layers from the input up to layer i-1.
In the DenseNet applied to scene recognition, the images to be processed under a date classification, i.e. the date type images, are input and pooled so that their feature information matches the size of all subsequent output feature information. The pooled date type image is fed into a dense convolutional layer for feature extraction, the output feature information is combined with the input, feature dimension reduction is performed by a conversion layer, and the combined feature information is fed as input into the second dense convolutional layer for feature extraction. By analogy, the input of each layer is obtained from the combined and dimension-reduced outputs of all previous layers. Finally, dimension reduction is performed by a fully connected layer to output the scene feature information of the date type image; the date type image is classified according to this scene feature information, and its scene type can thus be determined.
The DenseNet-based scene recognition model strengthens feature reuse through bypass connections, has fewer parameters, makes the network easier to train, has a certain regularizing effect, and alleviates the problems of gradient vanishing and model degradation.
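The dense connectivity pattern xi = Hi([x0, ..., xi-1]) can likewise be sketched in numpy. A random linear map plus tanh stands in for each dense convolutional layer Hi, and `growth` plays the role of the growth rate (the number of new feature channels each layer contributes); this is a toy illustration, not the DenseNet implementation:

```python
import numpy as np

def dense_block(x0, num_layers, growth, rng):
    """Each layer consumes the channel-wise concatenation of ALL previous
    outputs and contributes `growth` new feature channels."""
    features = [x0]
    for _ in range(num_layers):
        concat = np.concatenate(features)               # [x0, x1, ..., x_{i-1}]
        h = rng.normal(size=(growth, concat.shape[0]))  # toy stand-in for H_i
        features.append(np.tanh(h @ concat))
    return np.concatenate(features)

# channel count grows linearly: len(x0) + num_layers * growth
```

Because every layer sees all earlier feature maps directly, features are reused instead of recomputed, which is the source of the parameter savings noted above.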
S230, selecting any image to be processed from the scene type images of the same scene to perform image editing processing according to the scene type images;
s240, acquiring an image editing process for performing image editing processing on any image to be processed;
further, referring to fig. 8, the acquiring an image editing process for performing image editing processing on any image to be processed includes:
s810, acquiring input image editing operation;
and S820, sequentially storing the image editing operations according to the sequence of the image editing operations to obtain an image editing process.
Specifically, after the images to be processed are classified into scene type images, one image is arbitrarily selected from the scene type images of the same scene and edited, for example by adjusting its size or adding a filter. The input image editing operations are acquired and saved sequentially in the order in which they were performed, yielding the image editing process. This image editing process is then applied to all images to be processed of the same type, achieving the goal of batch processing.
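Steps S810-S820 amount to keeping an ordered log of the user's edits. A minimal sketch follows; the recorder class, the dict-based "image" and the operation names are all invented for illustration, and real operations would act on pixel data:

```python
class EditRecorder:
    """Saves each input image editing operation in the order it was performed."""
    def __init__(self):
        self.ops = []                    # the saved image editing process

    def record(self, name, fn):
        self.ops.append((name, fn))      # appending preserves the user's order

# a toy "image": just a dict of parameters
recorder = EditRecorder()
recorder.record("resize",       lambda img: {**img, "size": (800, 600)})
recorder.record("add_exposure", lambda img: {**img, "exposure": img.get("exposure", 0.0) + 0.3})
recorder.record("add_filter",   lambda img: {**img, "filter": "warm"})
```

Replaying `recorder.ops` front to back reproduces the user's editing session on any image of the same scene type.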
In addition, for images to be processed that belong to several similar scenes, merging can also be selected, so that the user processes all images of two or more scenes at once. In a specific example, a user selects the building and street scene type images on the user terminal interface, selects one image to be processed for exposure adjustment, and the exposure adjustment is applied to both the building and street scene type images.
And S250, based on the image editing process, performing image editing processing on the scene type images of the same scene as the any image to be processed.
Further, referring to fig. 9, performing image editing processing, based on the image editing process, on the scene type images of the same scene as the any image to be processed includes:
s910, acquiring the sequence of image editing operation in the image editing process;
and S920, according to the sequence, performing the same image editing operation on the scene type image with the same scene as any image to be processed, and processing the scene type image.
Specifically, after the image editing process is obtained, batch operations may be performed on other images to be processed in the same scene. And sequentially executing the same image editing operation on other images to be processed according to the sequence of the stored image editing process, thereby achieving the purpose of batch processing. In a specific example, the image editing process may include size adjustment, image exposure increase, filter addition, and decoration addition, and when batch processing is performed on the images to be processed in the same scene, the size adjustment, the image exposure increase, the filter addition, and the decoration addition are sequentially performed on each image to be processed in the same scene according to the sequence in the image editing process, so as to complete batch processing.
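The batch step (S910-S920) then replays the saved process, in the recorded order, over every image of the scene. A sketch under the same toy representation — dict "images" and invented operation names:

```python
def apply_edit_process(edit_process, images):
    """Replays the recorded operations, in order, on every image of the scene."""
    processed = []
    for img in images:
        for _name, fn in edit_process:   # identical sequence for every image
            img = fn(img)
        processed.append(img)
    return processed

# a saved process mirroring the example in the description:
# size adjustment, exposure increase, filter addition
edit_process = [
    ("resize",       lambda img: {**img, "size": (800, 600)}),
    ("add_exposure", lambda img: {**img, "exposure": img.get("exposure", 0.0) + 0.3}),
    ("add_filter",   lambda img: {**img, "filter": "warm"}),
]
```

Because every image passes through the same ordered operations, the whole group comes out with a uniform style.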
The batch processing operation can process images rapidly and reduces the tedious steps a user faces when processing many images. Because the images are processed uniformly, their style after processing is relatively consistent, making it easier for the user to share the whole group of photos.
Further, the scene recognition model in the method may be obtained by a transfer learning method, where the transfer learning method is as follows:
s1010, receiving a scene recognition model construction request of a target scene, wherein the construction request comprises a prediction classification result, an actual classification result and dimension information of the target scene;
s1020, acquiring a trained scene recognition model;
s1030, setting the weight value and dimension reduction dimension of each layer of convolution layer in the trained scene recognition model based on the prediction classification result, the actual classification result and the dimension information of the target scene, and transferring the trained scene recognition model to be the scene recognition model of the target scene.
Specifically, when obtaining the scene recognition model, a scene recognition model using ResNet or DenseNet may be trained. For model training, an image training data set, an image verification data set, and an image test data set are obtained. The image training data set is input into the scene recognition model to train its model parameters. The image verification data set is input into the scene recognition model to verify the model parameters. The image test data set is input into the scene recognition model for a scene classification test, yielding the finally trained scene recognition model.
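The train/verify/test procedure above can be illustrated end to end on toy data. The sketch below substitutes a two-feature logistic classifier for the deep scene recognition model, so it shows only the role of the three data sets, not the actual network training:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy stand-ins for the image data sets: 2-feature points, 2 "scene" classes
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
train, val, test = slice(0, 200), slice(200, 250), slice(250, 300)

w, b = np.zeros(2), 0.0
def predict(data):
    return 1.0 / (1.0 + np.exp(-(data @ w + b)))

for _ in range(200):                          # training set: fit the parameters
    p = predict(X[train])
    w -= 0.5 * X[train].T @ (p - y[train]) / 200
    b -= 0.5 * float(np.mean(p - y[train]))

val_acc  = np.mean((predict(X[val])  > 0.5) == y[val])   # verification set
test_acc = np.mean((predict(X[test]) > 0.5) == y[test])  # final classification test
```

Only the training slice drives the parameter updates; the held-out verification and test slices measure how well the fitted model classifies unseen samples.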
Alternatively, a trained scene recognition model can be migrated to the scene classification task of the embodiment of the present invention by transfer learning. Transfer learning requires a base network trained on a base data set and task; the base network can be an open-source trained network, and its learned features are readjusted or transferred to a second target network, which is then trained on the target data set and task. In the embodiment of the invention, the functions of the base network and the target network are general-purpose and suit both the base task and the target task.
In one specific example, the model for transfer learning may be a scene recognition model trained on the public data set Places365. Places365 contains 1.8 million training images from 365 scene categories, with 50 images per category in the validation set and 900 images per category in the test set. The number of scenes in the present scene classification task can be set to 50; in that case the Places365-based model only needs to be migrated to a task trained on 50 shooting scenes, and the training data for the 50 scenes can be obtained by crawling the internet or by annotating 50,000 pictures.
To migrate the trained Places365 model to the 50-scene classification task, the 365-dimensional fully connected layer in the Places365 model is replaced with a 50-dimensional fully connected layer; the predicted classification result produced by the softmax normalization function is compared against the actual classification result, and the weights of each layer in the migrated network are updated by gradient descent, yielding a scene recognition model applicable to the current scene classification task.
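A minimal sketch of this head-replacement step follows, using NumPy softmax regression in place of a real backbone: the new 50-dimensional fully connected layer's weights are updated by gradient descent from the gap between the softmax prediction and the actual classification. The feature dimension, learning rate, and initialization are assumptions for illustration, not values from the embodiment:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class TransferHead:
    """A freshly initialized 50-way fully connected layer replacing the
    365-way Places365 layer. Feature extraction by the pretrained backbone
    is stubbed out: `feats` stands for backbone outputs."""

    def __init__(self, feat_dim=2048, num_classes=50, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.01, (feat_dim, num_classes))
        self.b = np.zeros(num_classes)
        self.lr = lr

    def step(self, feats, labels):
        probs = softmax(feats @ self.W + self.b)   # predicted classification
        onehot = np.eye(self.W.shape[1])[labels]   # actual classification
        grad = probs - onehot                      # softmax cross-entropy gradient
        self.W -= self.lr * feats.T @ grad / len(feats)
        self.b -= self.lr * grad.mean(axis=0)
        # Return the mean cross-entropy loss for monitoring.
        return float(-np.log(probs[np.arange(len(labels)), labels]).mean())
```

In a full implementation the lower layers of the migrated network would also receive (possibly smaller) gradient updates, as the text describes.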
With this transfer learning method, the model can be trained with a smaller amount of data, greatly reducing the difficulty of acquiring training data.
In one specific example, a user takes a number of photos while traveling. The system classifies the photos according to city and time information, and further classifies them by recognizing the scenes in the photos with the scene recognition model. After classification is finished, when the user opens the picture processing program on the mobile terminal device, the system pops up a picture selection interface. The time and place are displayed at the top of the interface, and all the photos taken at that time and place are then displayed grouped by the scene recognized from their content, with each group showing only one of its photos. When the user edits a photo, each step of the user's operation is saved. When processing is finished, the previously saved operations are copied and applied to all photos in the group, and after the user confirms the saved scene edits, all the modified photos are output to the gallery, completing the batch processing of the images.
The embodiment of the invention provides an image processing method which classifies images to be processed according to image attribute information and a scene recognition model to obtain scene type images corresponding to different scenes. Any image to be processed is selected from the scene type images and subjected to image editing processing, the editing process is saved, and the saved editing process is then applied to all images to be processed of the same scene type. Further, the beneficial effects of the embodiment of the invention include:
(1) after the images are classified, any image to be processed is selected and the process of editing it is saved, so that applying the saved process to batch-process the other images to be processed of the same type allows the images to be processed quickly, reducing the tedious steps a user faces when processing multiple images and improving the user experience;
(2) in the embodiment, the scene recognition model can be obtained by a transfer learning method, so that the model can be trained by using less data volume, and the difficulty in obtaining training data is greatly reduced;
(3) in this embodiment, the scene recognition model can be obtained by deep residual network training; the deep residual network avoids information loss to some extent, simplifies the learning target and difficulty, and improves classification accuracy.
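As an illustration of why a residual connection limits information loss, a residual block adds its input back to the transformed output, so the layers only have to learn the residual. The fully connected stand-in below (with assumed ReLU activations) sketches the idea without real convolutions:

```python
import numpy as np

def residual_block(x, W1, W2):
    """Minimal fully-connected stand-in for a residual block.

    The skip connection adds the input `x` back to the transformed output,
    so even if the learned transform contributes nothing, the input
    information is preserved. Shapes and ReLU activations are illustrative
    assumptions.
    """
    h = np.maximum(0.0, x @ W1)          # inner transform with ReLU
    return np.maximum(0.0, x + h @ W2)   # skip connection adds input back
```

With zero weights the transform contributes nothing and the (non-negative) input passes through unchanged, which is the degenerate case a plain stacked network would have to learn explicitly.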
An embodiment of the present invention further provides an image processing apparatus, please refer to fig. 11, where the apparatus includes: an attribute information acquisition module 1110, an image classification module 1120, a single image processing module 1130, an editing process acquisition module 1140, and an image batch processing module 1150;
the attribute information acquiring module 1110 is configured to acquire image attribute information of a plurality of images to be processed, where the image attribute information is geographic information and time information of image shooting;
the image classification module 1120 is configured to classify the to-be-processed image according to the image attribute information and a preset scene identification model, so as to obtain scene type images corresponding to different scenes;
the single image processing module 1130 is configured to select any image to be processed from the scene-type images of the same scene for image editing processing according to the scene-type images;
the editing process obtaining module 1140 is configured to obtain an image editing process for performing image editing processing on any image to be processed;
the image batch processing module 1150 is configured to perform image editing processing on scene type images of the same scene as the any image to be processed based on the image editing process.
Further, the image classification module comprises a city classification module, a time classification module and a scene classification module;
the city classification module is used for classifying the images to be processed based on the city information in the image attribute information to obtain city type images corresponding to different city information;
the time classification module is used for classifying the city type images based on the time information in the image attribute information to obtain date type images corresponding to different time information;
and the scene classification module is used for carrying out scene identification on the date type image based on a scene identification model to obtain scene type images corresponding to different scenes.
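The three-level classification performed by these modules can be sketched as a grouping by a (city, date, scene) key. The photo dictionary schema and the `recognize_scene` callback are illustrative assumptions standing in for EXIF parsing and the scene recognition model:

```python
from collections import defaultdict

def group_photos(photos, recognize_scene):
    """Group photos first by city, then by date, then by recognized scene.

    Each photo is assumed to be a dict with 'city', 'timestamp', and 'path'
    keys; `recognize_scene` stands in for the preset scene recognition model.
    """
    groups = defaultdict(list)
    for p in photos:
        date = p["timestamp"][:10]   # ISO date prefix, e.g. '2019-08-29'
        scene = recognize_scene(p)   # scene recognition model output
        groups[(p["city"], date, scene)].append(p)
    return dict(groups)
```

Each resulting group corresponds to one scene type image set, from which any single image can be picked for editing.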
Further, the editing process acquiring module comprises: an editing operation acquisition unit and an editing operation saving unit;
the editing operation acquisition unit is used for acquiring input image editing operation;
and the editing operation storage unit is used for sequentially storing the image editing operations according to the sequence of the image editing operations to obtain an image editing process.
Further, the image batch processing module comprises a sequence acquisition unit and a batch processing unit; the sequence acquisition unit is used for acquiring the sequence of image editing operations in the image editing process;
and the batch processing unit is used for executing the same image editing operation on the scene type images with the same scene as any image to be processed according to the sequence and processing the scene type images.
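The record-then-replay behavior of the editing process acquisition module and the image batch processing module can be sketched as follows. Edit operations are modeled as plain callables, which is an assumption made for illustration:

```python
class EditRecorder:
    """Record each edit applied to one image, in order, then replay the
    saved sequence over every image of the same scene type."""

    def __init__(self):
        self.ops = []  # saved in the order the user performed them

    def apply(self, image, op):
        """Apply one editing operation and save it to the editing process."""
        self.ops.append(op)
        return op(image)

    def replay(self, images):
        """Batch-process images by replaying the saved operations in order."""
        out = []
        for img in images:
            for op in self.ops:  # same operations, same order
                img = op(img)
            out.append(img)
        return out
```

In a real implementation the operations would be image transforms (crop, filter, brightness, and the like) rather than the integer arithmetic used in the test below.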
The device provided in the above embodiments can execute the method provided in any embodiment of the present invention, and has corresponding functional modules and beneficial effects for executing the method. For technical details that are not described in detail in the above embodiments, reference may be made to an image processing method provided in any embodiment of the present invention.
The present embodiment also provides a computer-readable storage medium in which computer-executable instructions are stored, the computer-executable instructions being loaded by a processor to execute the image processing method of the present embodiment.
The present embodiment also provides an apparatus comprising a processor and a memory, wherein the memory stores a computer program adapted to be loaded by the processor and to perform an image processing method as described above in the present embodiment.
The device may be a computer terminal, a mobile terminal, or a server, and the device may also participate in forming the apparatus or system provided by the embodiments of the present invention. As shown in fig. 12, the mobile terminal 12 (or computer terminal 12 or server 12) may include one or more processors 1202 (shown here as 1202a, 1202b, … …, 1202n; the processors 1202 may include, but are not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 1204 for storing data, and a transmitting device 1206 for communication functions. In addition, the device may also include: a display, an input/output interface (I/O interface), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 12 is only an illustration and does not limit the structure of the electronic device. For example, the mobile device 12 may also include more or fewer components than shown in fig. 12, or have a different configuration than shown in fig. 12.
It should be noted that the one or more processors 1202 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuitry may be a single, stand-alone processing module, or incorporated in whole or in part into any of the other elements in the mobile device 12 (or computer terminal). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 1204 may be used for storing software programs and modules of application software, such as program instructions/data storage devices corresponding to the method described in the embodiment of the present invention; the processor 1202 executes various functional applications and data processing by running the software programs and modules stored in the memory 1204, thereby implementing the above-described image processing method. The memory 1204 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1204 may further include memory located remotely from the processor 1202, which may be connected to the mobile device 12 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmitting device 1206 is used for receiving or sending data via a network. Specific examples of such networks may include wireless networks provided by the communication provider of the mobile terminal 12. In one example, the transmitting device 1206 includes a Network Interface Controller (NIC) that can be connected to other Network devices via a base station to communicate with the internet. In one example, the transmitting device 1206 can be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the mobile device 12 (or computer terminal).
The present specification provides the method steps as described in the examples or flowcharts, but more or fewer steps may be included based on routine or non-inventive labor. The steps and sequences recited in the embodiments are only one of many possible orders of execution and do not represent a unique order of performance. In actual system or product execution, the steps may be performed sequentially or in parallel (for example, with parallel processors or in a multi-threaded environment) according to the embodiments or the methods shown in the figures.
The configurations shown in the present embodiment are only partial configurations related to the present application, and do not constitute a limitation on the devices to which the present application is applied, and a specific device may include more or less components than those shown, or combine some components, or have an arrangement of different components. It should be understood that the methods, apparatuses, and the like disclosed in the embodiments may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a division of one logic function, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or unit modules.
Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring image attribute information of a plurality of images to be processed, wherein the image attribute information is city information and time information of image shooting;
classifying the images to be processed according to the image attribute information and a preset scene recognition model to obtain scene type images corresponding to different scenes;
according to the scene type images, selecting any image to be processed from the scene type images of the same scene for image editing processing;
acquiring an image editing process for performing image editing processing on any image to be processed;
and performing image editing processing on a scene type image of the same scene as any image to be processed based on the image process.
2. The image processing method according to claim 1, wherein the classifying the image to be processed according to the image attribute information and a preset scene recognition model to obtain scene type images corresponding to different scenes comprises:
classifying the images to be processed based on the city information in the image attribute information to obtain city type images corresponding to different city information;
classifying the city type images based on the time information in the image attribute information to obtain date type images corresponding to different time information;
and carrying out scene recognition on the date type image based on a scene recognition model to obtain scene type images corresponding to different scenes.
3. The image processing method according to claim 2, wherein the scene recognition of the date type image based on the scene recognition model to obtain the scene type images corresponding to different scenes comprises:
performing maximum pooling on the date type image to obtain pooled image information;
inputting the pooled image information into a plurality of residual convolution layers for convolution calculation;
extracting the characteristics of residual error information between the convolution output information and the pooled image information, and outputting basic characteristic information of the date type image;
performing average pooling on the basic characteristic information to obtain pooled characteristic information;
performing dimension reduction on the pooled feature information to obtain scene feature information of the date type image;
and carrying out scene identification on the date type image according to the scene characteristic information to obtain a scene type image.
4. The image processing method according to claim 2, wherein the scene recognition of the date type image based on the scene recognition model, and obtaining the scene type image comprises:
pooling the date type images to obtain pooled image information;
inputting the pooled image information into a plurality of dense convolution layers and a plurality of conversion layers for convolution calculation;
performing feature extraction on the convolution output information and the splicing information of the pooled image information, and outputting the basic feature information of the date type image;
reducing the dimension of the basic characteristic information, and outputting scene characteristic information of the date type image;
and carrying out scene identification on the date type image according to the scene characteristic information to obtain a scene type image.
5. The image processing method according to claim 1, wherein the acquiring of the image editing process for performing the image editing process on any image to be processed comprises:
acquiring input image editing operation;
and sequentially storing the image editing operations according to the sequence of the image editing operations to obtain an image editing process.
6. The image processing method according to claim 1, wherein said image-editing processing of a scene-type image of the same scene as the any image to be processed based on the image process comprises:
acquiring the sequence of image editing operation in the image editing process;
and according to the sequence, carrying out the same image editing operation on the scene type image with the same scene as any image to be processed, and processing the scene type image.
7. An image processing method according to claim 1, characterized in that the method further comprises:
receiving a scene recognition model construction request of a target scene, wherein the construction request comprises a prediction classification result, an actual classification result and dimension information of the target scene;
acquiring a trained scene recognition model;
setting weight values and dimensionality reduction dimensions of each layer of convolution layers in the trained scene recognition model based on a prediction classification result, an actual classification result and dimensionality information of a target scene, and transferring the trained scene recognition model to be the scene recognition model of the target scene.
8. An image processing apparatus, characterized in that the apparatus comprises: the system comprises an attribute information acquisition module, an image classification module, a single image processing module, an editing process acquisition module and an image batch processing module;
the attribute information acquisition module is used for acquiring image attribute information of a plurality of images to be processed, wherein the image attribute information is geographic information and time information of image shooting;
the image classification module is used for classifying the images to be processed according to the image attribute information and a preset scene identification model to obtain scene type images corresponding to different scenes;
the single image processing module is used for selecting any image to be processed from the scene type images of the same scene to carry out image editing processing according to the scene type images;
the editing process acquisition module is used for acquiring an image editing process for performing image editing processing on any image to be processed;
the image batch processing module is used for carrying out image editing processing on the scene type image with the same scene as any image to be processed based on the image process.
9. An apparatus comprising a processor and a memory, said memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, said at least one instruction, said at least one program, set of codes, or set of instructions being loaded and executed by said processor to implement an image processing method according to any one of claims 1 to 7.
10. A storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement an image processing method according to any one of claims 1 to 7.
CN201910807979.6A 2019-08-29 2019-08-29 Image processing method, device, equipment and storage medium Active CN110659581B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910807979.6A CN110659581B (en) 2019-08-29 2019-08-29 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910807979.6A CN110659581B (en) 2019-08-29 2019-08-29 Image processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110659581A true CN110659581A (en) 2020-01-07
CN110659581B CN110659581B (en) 2024-02-20

Family

ID=69036457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910807979.6A Active CN110659581B (en) 2019-08-29 2019-08-29 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110659581B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130089260A1 (en) * 2011-10-05 2013-04-11 Carnegie Mellon University Systems, Methods, and Software Implementing Affine-Invariant Feature Detection Implementing Iterative Searching of an Affine Space
CN108805200A (en) * 2018-06-08 2018-11-13 中国矿业大学 Optical remote sensing scene classification method and device based on the twin residual error network of depth
CN108875759A (en) * 2017-05-10 2018-11-23 华为技术有限公司 A kind of image processing method, device and server
CN109255364A (en) * 2018-07-12 2019-01-22 杭州电子科技大学 A kind of scene recognition method generating confrontation network based on depth convolution
CN109597912A (en) * 2018-12-05 2019-04-09 上海碳蓝网络科技有限公司 Method for handling picture
CN110147722A (en) * 2019-04-11 2019-08-20 平安科技(深圳)有限公司 A kind of method for processing video frequency, video process apparatus and terminal device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Liping; Pan Wei: "A new method for natural scene image classification without feature extraction", Journal of Xiamen University (Natural Science Edition), no. 04 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476304A (en) * 2020-04-10 2020-07-31 国网冀北电力有限公司承德供电公司 Image data processing method and device
CN111476304B (en) * 2020-04-10 2023-09-05 国网冀北电力有限公司承德供电公司 Image data processing method and device
CN111626971A (en) * 2020-05-26 2020-09-04 南阳师范学院 Smart city CIM real-time imaging method with image semantic perception
CN111626971B (en) * 2020-05-26 2021-09-07 南阳师范学院 Smart city CIM real-time imaging method with image semantic perception
CN111803944A (en) * 2020-07-21 2020-10-23 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and storage medium
CN112434723A (en) * 2020-07-23 2021-03-02 之江实验室 Day/night image classification and object detection method based on attention network
CN112035042A (en) * 2020-08-31 2020-12-04 维沃移动通信有限公司 Application program control method and device, electronic equipment and readable storage medium
WO2022042573A1 (en) * 2020-08-31 2022-03-03 维沃移动通信有限公司 Application control method and apparatus, electronic device, and readable storage medium
CN112560890A (en) * 2020-11-06 2021-03-26 联想(北京)有限公司 Image processing method and electronic equipment
CN112785672A (en) * 2021-01-19 2021-05-11 浙江商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN114398124A (en) * 2021-12-31 2022-04-26 深圳市珍爱捷云信息技术有限公司 Point nine-effect graph rendering method based on iOS system and related device thereof
CN114398124B (en) * 2021-12-31 2024-04-12 深圳市珍爱捷云信息技术有限公司 Point nine effect graph rendering method based on iOS system and related device thereof

Also Published As

Publication number Publication date
CN110659581B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
CN110659581B (en) Image processing method, device, equipment and storage medium
US12001475B2 (en) Mobile image search system
RU2631994C1 (en) Method, device and server for determining image survey plan
CN103412951A (en) Individual-photo-based human network correlation analysis and management system and method
CN107993191A (en) A kind of image processing method and device
CN105320695B (en) Picture processing method and device
CN103604271A (en) Intelligent-refrigerator based food recognition method
WO2016201963A1 (en) Application pushing method and device
KR101832680B1 (en) Searching for events by attendants
CN112001274A (en) Crowd density determination method, device, storage medium and processor
WO2018072207A1 (en) Information pushing method, apparatus, and system
CN106789565A (en) Social content sharing method and device
CN110852224B (en) Expression recognition method and related device
CN108419112B (en) Streaming media video cataloging method, retrieval method and device based on measurement and control track information
CN111353965B (en) Image restoration method, device, terminal and storage medium
US20190082002A1 (en) Media file sharing method, media file sharing device, and terminal
CN106714099A (en) Photograph information processing and scenic spot identification method, client and server
CN113128278B (en) Image recognition method and device
CN113742580A (en) Target type data recall method and device, electronic equipment and storage medium
CN111062914B (en) Method, apparatus, electronic device and computer readable medium for acquiring facial image
CN110866866B (en) Image color imitation processing method and device, electronic equipment and storage medium
CN110197459B (en) Image stylization generation method and device and electronic equipment
JP7515552B2 (en) Automatic generation of people groups and image-based creations
WO2016173278A1 (en) Image management method and device
CN109040584A (en) A kind of method and apparatus of interaction shooting

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40019476

Country of ref document: HK

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant