CN110298413A - Image feature extraction method and apparatus, storage medium, and electronic device - Google Patents

Image feature extraction method and apparatus, storage medium, and electronic device Download PDF

Info

Publication number
CN110298413A
Authority
CN
China
Prior art keywords
feature map
non-local
original feature
attention
weight value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910611747.3A
Other languages
Chinese (zh)
Other versions
CN110298413B (en)
Inventor
喻冬东
王长虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201910611747.3A
Publication of CN110298413A
Application granted
Publication of CN110298413B
Active legal status (current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an image feature extraction method and apparatus, a storage medium, and an electronic device. The method includes: extracting an original feature map of a target image; inputting the original feature map into multiple non-local attention neural networks for feature extraction to obtain non-local attention feature maps of the original feature map at multiple resolutions, where each non-local attention neural network has a different pooling layer stride; and fusing the original feature map with each non-local attention feature map to obtain a target feature map corresponding to the target image. On the one hand, this effectively ensures that the feature information contained in the target feature map is comprehensive and rich; on the other hand, extracting non-local attention feature maps at multiple resolutions also effectively improves the accuracy of the target feature map, guaranteeing accurate and broadly applicable image feature extraction and providing accurate data support for subsequent processing of the target image.

Description

Image feature extraction method and apparatus, storage medium, and electronic device
Technical field
The present disclosure relates to the field of image processing, and in particular to an image feature extraction method and apparatus, a storage medium, and an electronic device.
Background
Image processing technology is now used in a growing number of applications, such as image recognition, map navigation, and image segmentation. In the prior art, image features are extracted from an acquired image, and subsequent applications such as recognition or segmentation are then carried out directly on the extracted features. However, different application scenarios place different requirements on image precision, and the accuracy of image feature extraction directly affects the precision and accuracy of the subsequent processing.
Summary of the invention
The purpose of the present disclosure is to provide an accurate and comprehensive image feature extraction method and apparatus, a storage medium, and an electronic device.
To achieve the above object, according to a first aspect of the present disclosure, an image feature extraction method is provided, the method including:
extracting an original feature map of a target image;
inputting the original feature map into multiple non-local attention neural networks for feature extraction to obtain non-local attention feature maps of the original feature map at multiple resolutions, where each non-local attention neural network has a different pooling layer stride; and
fusing the original feature map with each non-local attention feature map to obtain a target feature map corresponding to the target image.
Optionally, the step of fusing the original feature map with each non-local attention feature map to obtain a target feature map corresponding to the target image includes:
determining a weight value corresponding to each non-local attention feature map and a weight value corresponding to the original feature map;
determining, from those weight values, a weighted original feature map and a weighted version of each non-local attention feature map; and
fusing the original feature map with each non-local attention feature map according to the weighted original feature map and the weighted non-local attention feature maps to obtain the target feature map.
Optionally, determining the weight value corresponding to each non-local attention feature map and the original feature map includes:
inputting the original feature map into a pre-trained weight determination model to obtain a weight channel vector output by the weight determination model; and
decoding the weight channel vector to obtain a weight value for each channel of the weight channel vector, where the original feature map and each non-local attention feature map correspond one-to-one to the channels of the weight channel vector.
Optionally, the step of fusing the original feature map with each non-local attention feature map according to the weighted original feature map and the weighted non-local attention feature maps to obtain the target feature map includes:
determining the sum of the weighted original feature map and the weighted non-local attention feature maps as the target feature map.
Optionally, the step of fusing the original feature map with each non-local attention feature map according to the weighted original feature map and the weighted non-local attention feature maps to obtain the target feature map includes:
concatenating the weighted original feature map and the weighted non-local attention feature maps to obtain a weighted feature map; and
reducing the dimensionality of the weighted feature map to obtain the target feature map.
Optionally, the method further includes:
processing the target image according to the target feature map.
According to a second aspect of the present disclosure, an image feature extraction apparatus is provided, the apparatus including:
a first extraction module configured to extract an original feature map of a target image;
a second extraction module configured to input the original feature map into multiple non-local attention neural networks for feature extraction to obtain non-local attention feature maps of the original feature map at multiple resolutions, where each non-local attention neural network has a different pooling layer stride; and
a fusion module configured to fuse the original feature map with each non-local attention feature map to obtain a target feature map corresponding to the target image.
Optionally, the fusion module includes:
a first determination submodule configured to determine a weight value corresponding to each non-local attention feature map and a weight value corresponding to the original feature map;
a second determination submodule configured to determine, from those weight values, a weighted original feature map and a weighted version of each non-local attention feature map; and
a fusion submodule configured to fuse the original feature map with each non-local attention feature map according to the weighted original feature map and the weighted non-local attention feature maps to obtain the target feature map.
Optionally, the first determination submodule includes:
a processing submodule configured to input the original feature map into a pre-trained weight determination model to obtain a weight channel vector output by the weight determination model; and
a decoding submodule configured to decode the weight channel vector to obtain a weight value for each channel of the weight channel vector, where the original feature map and each non-local attention feature map correspond one-to-one to the channels of the weight channel vector.
Optionally, the fusion submodule is configured to:
determine the sum of the weighted original feature map and the weighted non-local attention feature maps as the target feature map.
Optionally, the fusion submodule includes:
a concatenation submodule configured to concatenate the weighted original feature map and the weighted non-local attention feature maps to obtain a weighted feature map; and
a dimensionality reduction submodule configured to reduce the dimensionality of the weighted feature map to obtain the target feature map.
Optionally, the apparatus further includes:
a processing module configured to process the target image according to the target feature map.
According to a third aspect of the present disclosure, a computer-readable medium is provided on which a computer program is stored, where the program, when executed by a processor, implements the steps of any method of the first aspect.
According to a fourth aspect of the present disclosure, an electronic device is provided, including:
a storage device on which a computer program is stored; and
a processing device configured to execute the computer program in the storage device to implement the steps of any method of the first aspect.
In the above technical solution, an original feature map of a target image is extracted, features are extracted from the original feature map again to obtain non-local attention feature maps at multiple resolutions, and the original feature map is fused with each non-local attention feature map to obtain a target feature map of the target image. Through this technical solution, on the one hand, the feature information contained in the target feature map is effectively guaranteed to be comprehensive and rich; on the other hand, extracting non-local attention feature maps at multiple resolutions also effectively reduces the noise in the fused target feature map and improves its accuracy, guaranteeing accurate and broadly applicable image feature extraction and providing accurate data support for subsequent processing of the target image.
Other features and advantages of the present disclosure will be described in detail in the detailed description that follows.
Brief description of the drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale. In the drawings:
Fig. 1 is a flowchart of an image feature extraction method provided according to an embodiment of the present disclosure;
Fig. 2 is a flowchart of an example implementation of fusing the original feature map with each non-local attention feature map to obtain a target feature map corresponding to the target image;
Fig. 3 is a block diagram of an image feature extraction apparatus provided according to an embodiment of the present disclosure; and
Fig. 4 is a block diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed description
Embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although certain embodiments of the disclosure are shown in the drawings, it should be understood that the disclosure may be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided so that the disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the disclosure are for illustrative purposes only and are not intended to limit the scope of protection of the disclosure.
Fig. 1 is a flowchart of an image feature extraction method provided according to an embodiment of the present disclosure. As shown in Fig. 1, the method includes the following steps.
In S11, an original feature map of a target image is extracted.
When the target image is acquired, a feature map of the target image can be extracted using an existing feature extraction approach and used as the original feature map. For example, image features may be extracted by SIFT (Scale-Invariant Feature Transform), HOG (Histogram of Oriented Gradients), or a ResNet network; this is not described in detail in the present disclosure. A minimal sketch of this step, under stated assumptions, is given below.
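A minimal sketch of step S11, assuming a PyTorch/torchvision ResNet-50 backbone; any of the extractors mentioned above would do, and the backbone choice, the layer cut-off, and the input size used here are illustrative assumptions rather than part of the patent:

```python
import torch
import torchvision

# Keep the backbone up to its last residual stage; avgpool/fc are dropped so the
# output stays a spatial feature map rather than a classification vector.
backbone = torchvision.models.resnet50(weights=None)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])

image = torch.randn(1, 3, 256, 256)                   # stand-in for the target image
with torch.no_grad():
    original_feature_map = feature_extractor(image)   # shape (1, 2048, 8, 8)
```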
In S12, the original feature map is input into multiple non-local attention neural networks for feature extraction, and non-local attention feature maps of the original feature map at multiple resolutions are obtained, where each non-local attention neural network has a different pooling layer stride.
A non-local attention neural network is obtained by applying an attention mechanism to a non-local neural network and is used to extract a non-local attention feature map; the non-local attention feature map is obtained from the attention mechanism and the non-local operations in the non-local attention neural network. Because a non-local attention feature map is computed from every position in the original feature map, it effectively eliminates the noise that would otherwise be introduced by extracting local features and then aggregating them. Non-local attention neural networks with different pooling layer strides extract non-local attention feature maps at different resolutions. Feature maps at different resolutions also contain different non-local features: a high-resolution feature map contains less semantic information, but the position of each feature is more precise, whereas a low-resolution feature map contains richer semantic features whose positions are coarser.
For example, the pooling layer strides in this application may be set to 2, 4, and 8. The non-local attention feature map extracted by the network with stride 2 then has 1/2 the resolution of the original feature map, the one extracted by the network with stride 4 has 1/4 the resolution of the original feature map, and so on. Non-local attention neural networks themselves are prior art and are not described in detail here; a sketch of one branch is shown after this paragraph. It should be noted that the number of non-local attention neural networks and the pooling layer stride of each network can be configured according to the actual usage scenario, which is not limited in this application.
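A sketch of one such branch, under the assumption that the non-local operation is the standard embedded-Gaussian non-local block preceded by a max-pooling layer whose stride sets the output resolution; the channel sizes and the choice of max pooling are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalAttentionBranch(nn.Module):
    """One branch of step S12: pooling (which sets the resolution) followed by
    an embedded-Gaussian non-local attention block with a residual connection."""

    def __init__(self, channels: int, pool_stride: int):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=pool_stride, stride=pool_stride)
        inner = channels // 2
        self.theta = nn.Conv2d(channels, inner, 1)   # query projection
        self.phi = nn.Conv2d(channels, inner, 1)     # key projection
        self.g = nn.Conv2d(channels, inner, 1)       # value projection
        self.out = nn.Conv2d(inner, channels, 1)

    def forward(self, x):
        x = self.pool(x)                              # stride 2/4/8 -> 1/2, 1/4, 1/8 resolution
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (b, h*w, inner)
        k = self.phi(x).flatten(2)                    # (b, inner, h*w)
        v = self.g(x).flatten(2).transpose(1, 2)      # (b, h*w, inner)
        attn = F.softmax(q @ k, dim=-1)               # attention over all positions (non-local)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                        # residual connection

branches = [NonLocalAttentionBranch(2048, s) for s in (2, 4, 8)]
original_feature_map = torch.randn(1, 2048, 8, 8)     # e.g. the output of step S11
non_local_maps = [branch(original_feature_map) for branch in branches]
```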
In S13, the original feature map is fused with each non-local attention feature map to obtain a target feature map corresponding to the target image.
The original feature map can be used to represent the basic features of the target image, and each non-local attention feature map is obtained by extracting features from the original feature map again, further characterizing the features in the original feature map (for example, semantic features or feature positions). Therefore, the target feature map obtained by fusing the original feature map with each non-local attention feature map can contain richer and more comprehensive features of the target image.
In the above technical solution, an original feature map of a target image is extracted, features are extracted from the original feature map again to obtain non-local attention feature maps at multiple resolutions, and the original feature map is fused with each non-local attention feature map to obtain a target feature map of the target image. Through this technical solution, on the one hand, the feature information contained in the target feature map is effectively guaranteed to be comprehensive and rich; on the other hand, extracting non-local attention feature maps at multiple resolutions also effectively reduces the noise in the fused target feature map and improves its accuracy, guaranteeing accurate and broadly applicable image feature extraction and providing accurate data support for subsequent processing of the target image.
Optionally, an example implementation of fusing the original feature map with each non-local attention feature map in S13 to obtain the target feature map corresponding to the target image is as follows and, as shown in Fig. 2, may include the following steps.
In S21, a weight value corresponding to each non-local attention feature map and a weight value corresponding to the original feature map are determined.
Taking the resolution of the original feature map as the reference, the resolution of the original feature map is 1 and the resolutions of the non-local attention feature maps may be 1/2, 1/4, and 1/8, respectively. In one embodiment, a weight value may be preset for the feature map at each resolution, so that when determining the weight values of the original feature map and of each non-local attention feature map, the weight value associated with a feature map's resolution is used as that feature map's weight value. Unless otherwise specified below, "feature map" may refer to either the original feature map or a non-local attention feature map.
In another embodiment, determining the weight value corresponding to each non-local attention feature map and the original feature map may include:
inputting the original feature map into a pre-trained weight determination model to obtain a weight channel vector output by the weight determination model.
The weight determination model may be a self-learned convolutional neural network. In one embodiment, after the resolution corresponding to each non-local attention neural network has been determined, a weight determination model for fusing the feature maps at those resolutions can be trained, so that the weight channel vector output by the convolutional neural network contains a weight value for the feature map at each resolution. Self-learned convolutional neural networks are prior art and are not described in detail here. By inputting the original feature map into this weight determination model, the output weight channel vector is obtained.
The weight channel vector is then decoded to obtain a weight value for each of its channels, where the original feature map and each non-local attention feature map correspond one-to-one to the channels of the weight channel vector.
Each channel of the weight channel vector output by the weight determination model corresponds to one resolution. The weight value of the original feature map is the weight value of the channel corresponding to the resolution of the original feature map, and for each non-local attention feature map, its weight value is the weight value of the channel corresponding to that feature map's resolution.
For example, if there are non-local attention neural networks with resolutions 1/2, 1/4, and 1/8, the weight channel vector contains 4 channels: channel 0 corresponds to the original feature map, channel 1 to the non-local attention feature map at resolution 1/2, channel 2 to the one at resolution 1/4, and channel 3 to the one at resolution 1/8. Decoding the weight channel vector therefore yields the weight value of each channel, i.e. the weight value of the feature map at the corresponding resolution. As an example, decoding may produce the following weight values: F[0, α0], F[1, α1], F[2, α2], F[3, α3], where α0, α1, α2, and α3 are the weight values of channels 0-3, are all positive, and sum to 1.
Through the above technical solution, the weight determination model accurately determines the weight value of each feature map, and each weight value is matched to the resolution of its feature map. This provides accurate data support for the subsequent fusion of the original feature map with the non-local attention feature maps, and thus improves the accuracy and applicability of the target feature map. A sketch of such a weight determination model appears below.
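A minimal sketch of a weight determination model, assuming a global-average-pooling plus linear-layer architecture with a softmax "decoding" step so the four weight values are positive and sum to 1; the patent does not fix the architecture, so these layer choices are assumptions:

```python
import torch
import torch.nn as nn

class WeightModel(nn.Module):
    """Maps the original feature map to a weight channel vector with one channel
    per feature map (original + three non-local branches)."""

    def __init__(self, channels: int, num_maps: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)       # squeeze the spatial dimensions
        self.fc = nn.Linear(channels, num_maps)

    def forward(self, original_feature_map):
        z = self.pool(original_feature_map).flatten(1)    # (b, channels)
        weight_channel_vector = self.fc(z)                # (b, num_maps)
        # "Decoding": softmax makes the per-channel weights positive and sum to 1.
        return torch.softmax(weight_channel_vector, dim=1)

alphas = WeightModel(2048)(torch.randn(1, 2048, 8, 8))    # alpha_0 .. alpha_3
```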
In S22, the weighted original feature map and each weighted non-local attention feature map are determined according to the weight values corresponding to the non-local attention feature maps and to the original feature map.
The product of a feature map's vector and its corresponding weight value may be used as the weighted feature map. For example, if T0 denotes the vector of the original feature map and T1, T2, T3 denote the vectors of the non-local attention feature maps, then, continuing the example above, the weighted original feature map is T0*α0, and the weighted non-local attention feature maps are T1*α1, T2*α2, and T3*α3, respectively.
In S23, the original feature map is fused with each non-local attention feature map according to the weighted original feature map and the weighted non-local attention feature maps, and the target feature map is obtained.
Optionally, one example implementation of S23 is as follows:
the sum of the weighted original feature map and the weighted non-local attention feature maps is determined as the target feature map.
In this embodiment, after the weighted original feature map and each weighted non-local attention feature map are obtained, the weighted feature maps are added together, thereby fusing the original feature map with each non-local attention feature map. The vector dimensions of the original feature map and of each non-local attention feature map are the same, for example each is represented by an N-dimensional vector. The vector A of the target feature map obtained by fusing the feature maps can then be expressed as A = T0*α0 + T1*α1 + T2*α2 + T3*α3. When the feature maps are fused in this way, every feature in the target feature map is the fusion of the original feature map and the non-local attention feature maps. On the one hand this guarantees the accuracy of the fusion of the original feature map with each non-local attention feature map; on the other hand it also ensures that the information contained in every feature of the target feature map is comprehensive, widens the scope of application of the target feature map, and improves the accuracy and richness of image feature extraction. A sketch of this weighted-sum fusion is given below.
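A minimal sketch of the weighted-sum fusion, assuming each feature map is first global-average-pooled to an N-dimensional vector so that maps of different resolutions can be summed directly; that pooling step is an assumption, since the patent only states that all maps share the same vector dimension:

```python
import torch

def to_vector(feature_map):
    # Global average pooling: (b, N, h, w) -> (b, N), so maps at different
    # resolutions become vectors of the same dimension N.
    return feature_map.mean(dim=(2, 3))

def fuse_by_sum(maps, alphas):
    # target = sum_i alpha_i * T_i, i.e. A = T0*a0 + T1*a1 + T2*a2 + T3*a3
    vectors = [to_vector(m) for m in maps]
    return sum(a.unsqueeze(1) * v for a, v in zip(alphas.unbind(dim=1), vectors))

maps = [torch.randn(1, 2048, s, s) for s in (8, 4, 2, 1)]   # original + three branches
alphas = torch.softmax(torch.randn(1, 4), dim=1)            # from the weight model
target_feature = fuse_by_sum(maps, alphas)                  # shape (1, 2048)
```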
Optionally, another example implementation of S23 is as follows:
the weighted original feature map and the weighted non-local attention feature maps are concatenated to obtain a weighted feature map; and
the weighted feature map is dimensionality-reduced to obtain the target feature map.
In this embodiment, after the weighted original feature map and each weighted non-local attention feature map are obtained, their vector representations are concatenated, and the vector B of the resulting weighted feature map can be expressed as B = [T0*α0, T1*α1, T2*α2, T3*α3]. If T0, T1, T2, and T3 are N-dimensional vectors, B is a 4N-dimensional vector. The target feature map is then obtained by reducing the dimensionality of the weighted feature map, for example through a convolution operation of a convolutional neural network, which is prior art and is not described in detail here.
In the above technical solution, the weighted original feature map and the weighted non-local attention feature maps are concatenated, yielding a weighted feature map that contains the features at every resolution, and the weighted feature map is then dimensionality-reduced. This improves the correlation between the non-local attention feature maps at different resolutions, further ensures the richness of the features in the obtained target feature map, improves the accuracy of image feature extraction, and provides accurate data support for the processing of the target image. A sketch of this concatenate-and-reduce fusion follows.
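A minimal sketch of the concatenate-and-reduce fusion, again assuming each map is pooled to an N-dimensional vector first; the dimensionality reduction is written as a linear layer, which on vectors plays the role of the convolution the patent mentions, and both of these choices are assumptions:

```python
import torch
import torch.nn as nn

def fuse_by_concat(maps, alphas, reduce: nn.Linear):
    vectors = [m.mean(dim=(2, 3)) for m in maps]                    # each (b, N)
    weighted = [a.unsqueeze(1) * v
                for a, v in zip(alphas.unbind(dim=1), vectors)]     # T_i * alpha_i
    concatenated = torch.cat(weighted, dim=1)                       # (b, 4N) weighted feature
    return reduce(concatenated)                                     # reduced back to (b, N)

reduce = nn.Linear(4 * 2048, 2048)                                  # dimensionality reduction
maps = [torch.randn(1, 2048, s, s) for s in (8, 4, 2, 1)]
alphas = torch.softmax(torch.randn(1, 4), dim=1)
target_feature = fuse_by_concat(maps, alphas, reduce)               # shape (1, 2048)
```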
Optionally, the method further includes:
processing the target image according to the target feature map.
Processing the target image may be, for example, performing image segmentation on the target image or classifying the target image, which is not limited in this application. Because the image features contained in the target feature map are richer and more comprehensive, the processing of the target image is more accurate. For example, when segmenting the target image, the position of each key point in the target image can be determined more accurately based on the target feature map, so that accurate segmentation of the image is achieved. As another example, when classifying the target image, the target feature map is obtained by fusing feature maps at multiple resolutions, so the category of the objects contained in the target image can be determined more clearly from the features at those resolutions, which improves the accuracy of image classification. An illustrative downstream use is sketched below.
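Purely as an illustration of feeding the fused target feature into a downstream task, a linear classification head; the feature dimension and number of classes are placeholder assumptions, not values from the patent:

```python
import torch

num_classes = 10                               # placeholder assumption
target_feature = torch.randn(1, 2048)          # fused target feature from the steps above
classifier = torch.nn.Linear(2048, num_classes)
logits = classifier(target_feature)            # class scores for the target image
```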
The present disclosure also provides an image feature extraction apparatus. As shown in Fig. 3, the apparatus 10 includes:
a first extraction module 100 configured to extract an original feature map of a target image;
a second extraction module 200 configured to input the original feature map into multiple non-local attention neural networks for feature extraction to obtain non-local attention feature maps of the original feature map at multiple resolutions, where each non-local attention neural network has a different pooling layer stride; and
a fusion module 300 configured to fuse the original feature map with each non-local attention feature map to obtain a target feature map corresponding to the target image.
Optionally, the fusion module includes:
a first determination submodule configured to determine a weight value corresponding to each non-local attention feature map and a weight value corresponding to the original feature map;
a second determination submodule configured to determine, from those weight values, a weighted original feature map and a weighted version of each non-local attention feature map; and
a fusion submodule configured to fuse the original feature map with each non-local attention feature map according to the weighted original feature map and the weighted non-local attention feature maps to obtain the target feature map.
Optionally, the first determination submodule includes:
a processing submodule configured to input the original feature map into a pre-trained weight determination model to obtain a weight channel vector output by the weight determination model; and
a decoding submodule configured to decode the weight channel vector to obtain a weight value for each channel of the weight channel vector, where the original feature map and each non-local attention feature map correspond one-to-one to the channels of the weight channel vector.
Optionally, the fusion submodule is configured to:
determine the sum of the weighted original feature map and the weighted non-local attention feature maps as the target feature map.
Optionally, the fusion submodule includes:
a concatenation submodule configured to concatenate the weighted original feature map and the weighted non-local attention feature maps to obtain a weighted feature map; and
a dimensionality reduction submodule configured to reduce the dimensionality of the weighted feature map to obtain the target feature map.
Optionally, the apparatus further includes:
a processing module configured to process the target image according to the target feature map.
Regarding the apparatus in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method, and is not elaborated here.
It should be understood that the steps described in the method embodiments of the present disclosure may be executed in a different order and/or in parallel. In addition, the method embodiments may include additional steps and/or omit steps that are shown. The scope of the present disclosure is not limited in this respect.
The term "include" and its variants as used here are open-ended, i.e. "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one other embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms are given in the description below.
It should be noted that the modifiers "a"/"an" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of the messages or information exchanged between multiple devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of those messages or information.
Referring now to Fig. 4, it shows a schematic structural diagram of an electronic device 600 suitable for implementing an embodiment of the present disclosure. Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players), and vehicle-mounted terminals (for example, vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 4, the electronic device 600 may include a processing device (for example, a central processing unit or a graphics processor) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 606 into a random access memory (RAM) 603. Various programs and data required for the operation of the electronic device 600 are also stored in the RAM 603. The processing device 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 607 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 606 including, for example, a magnetic tape and a hard disk; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 4 shows an electronic device 600 with various devices, it should be understood that it is not required to implement or have all of the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 609, or installed from the storage device 606, or installed from the ROM 602. When the computer program is executed by the processing device 601, the above-described functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to electric wires, optical cables, RF (radio frequency), or any suitable combination of the above.
In some embodiments, the client and the server may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (for example, the Internet), and peer-to-peer networks (for example, ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The above computer-readable medium may be included in the above electronic device, or it may exist separately without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: extract an original feature map of a target image; input the original feature map into multiple non-local attention neural networks for feature extraction to obtain non-local attention feature maps of the original feature map at multiple resolutions, where each non-local attention neural network has a different pooling layer stride; and fuse the original feature map with each non-local attention feature map to obtain a target feature map corresponding to the target image.
Computer program code for executing the operations of the present disclosure may be written in one or more programming languages or a combination thereof; these include, but are not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented in software or in hardware. The name of a unit does not constitute a limitation on the unit itself under certain circumstances.
The functions described above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, an image feature extraction method is provided, the method including:
extracting an original feature map of a target image;
inputting the original feature map into multiple non-local attention neural networks for feature extraction to obtain non-local attention feature maps of the original feature map at multiple resolutions, where each non-local attention neural network has a different pooling layer stride; and
fusing the original feature map with each non-local attention feature map to obtain a target feature map corresponding to the target image.
According to one or more embodiments of the present disclosure, an image feature extraction method is also provided, where the step of fusing the original feature map with each non-local attention feature map to obtain the target feature map corresponding to the target image includes:
determining a weight value corresponding to each non-local attention feature map and a weight value corresponding to the original feature map;
determining, from those weight values, a weighted original feature map and a weighted version of each non-local attention feature map; and
fusing the original feature map with each non-local attention feature map according to the weighted original feature map and the weighted non-local attention feature maps to obtain the target feature map.
According to one or more embodiments of the present disclosure, an image feature extraction method is also provided, where determining the weight value corresponding to each non-local attention feature map and the original feature map includes:
inputting the original feature map into a pre-trained weight determination model to obtain a weight channel vector output by the weight determination model; and
decoding the weight channel vector to obtain a weight value for each channel of the weight channel vector, where the original feature map and each non-local attention feature map correspond one-to-one to the channels of the weight channel vector.
According to one or more embodiments of the present disclosure, an image feature extraction method is also provided, where the step of fusing the original feature map with each non-local attention feature map according to the weighted original feature map and the weighted non-local attention feature maps to obtain the target feature map includes:
determining the sum of the weighted original feature map and the weighted non-local attention feature maps as the target feature map.
According to one or more embodiments of the present disclosure, an image feature extraction method is also provided, where the step of fusing the original feature map with each non-local attention feature map according to the weighted original feature map and the weighted non-local attention feature maps to obtain the target feature map includes:
concatenating the weighted original feature map and the weighted non-local attention feature maps to obtain a weighted feature map; and
reducing the dimensionality of the weighted feature map to obtain the target feature map.
According to one or more embodiments of the present disclosure, an image feature extraction method is also provided, where the method further includes:
processing the target image according to the target feature map.
According to one or more embodiments of the present disclosure, an image feature extraction apparatus is provided, the apparatus including:
a first extraction module configured to extract an original feature map of a target image;
a second extraction module configured to input the original feature map into multiple non-local attention neural networks for feature extraction to obtain non-local attention feature maps of the original feature map at multiple resolutions, where each non-local attention neural network has a different pooling layer stride; and
a fusion module configured to fuse the original feature map with each non-local attention feature map to obtain a target feature map corresponding to the target image.
According to one or more embodiments of the present disclosure, an image feature extraction apparatus is also provided, where the fusion module includes:
a first determination submodule configured to determine a weight value corresponding to each non-local attention feature map and a weight value corresponding to the original feature map;
a second determination submodule configured to determine, from those weight values, a weighted original feature map and a weighted version of each non-local attention feature map; and
a fusion submodule configured to fuse the original feature map with each non-local attention feature map according to the weighted original feature map and the weighted non-local attention feature maps to obtain the target feature map.
According to one or more embodiments of the present disclosure, an image feature extraction apparatus is also provided, where the first determination submodule includes:
a processing submodule configured to input the original feature map into a pre-trained weight determination model to obtain a weight channel vector output by the weight determination model; and
a decoding submodule configured to decode the weight channel vector to obtain a weight value for each channel of the weight channel vector, where the original feature map and each non-local attention feature map correspond one-to-one to the channels of the weight channel vector.
According to one or more embodiments of the present disclosure, an image feature extraction apparatus is also provided, where the fusion submodule is configured to:
determine the sum of the weighted original feature map and the weighted non-local attention feature maps as the target feature map.
According to one or more embodiments of the present disclosure, an image feature extraction apparatus is also provided, where the fusion submodule includes:
a concatenation submodule configured to concatenate the weighted original feature map and the weighted non-local attention feature maps to obtain a weighted feature map; and
a dimensionality reduction submodule configured to reduce the dimensionality of the weighted feature map to obtain the target feature map.
According to one or more embodiments of the present disclosure, an image feature extraction apparatus is also provided, where the apparatus further includes:
a processing module configured to process the target image according to the target feature map.
According to one or more embodiments of the present disclosure, a computer-readable medium is provided on which a computer program is stored, where the program, when executed by a processor, implements the steps of any image feature extraction method provided by the present disclosure.
According to one or more embodiments of the present disclosure, an electronic device is provided, including:
a storage device on which a computer program is stored; and
a processing device configured to execute the computer program in the storage device to implement the steps of any image feature extraction method provided by the present disclosure.
The above description is only a preferred embodiment of the present disclosure and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved here is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the disclosed concept, for example technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
In addition, although the operations are depicted in a particular order, this should not be understood as requiring that the operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are contained in the above discussion, these should not be construed as limitations on the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments, either individually or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or logical actions of methods, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. On the contrary, the specific features and actions described above are merely exemplary forms of implementing the claims.

Claims (10)

1. a kind of image characteristic extracting method, which is characterized in that the described method includes:
Extract the primitive character figure of target image;
The multiple non local attention neural networks of primitive character figure input are subjected to feature extraction, obtain the primitive character Figure corresponding non local attention characteristic pattern under a variety of resolution ratio, wherein each non local attention neural network is corresponding Pond layer step-length is different;
The primitive character figure and each non local attention characteristic pattern are merged, the corresponding mesh of target image is obtained Mark characteristic pattern.
2. The method according to claim 1, characterized in that the step of fusing the original feature map with each of the non-local attention feature maps to obtain the target feature map corresponding to the target image comprises:
determining a weight value corresponding to each of the non-local attention feature maps and to the original feature map;
determining a weighted original feature map and weighted non-local attention feature maps according to the weight values respectively corresponding to the non-local attention feature maps and the original feature map;
fusing the original feature map with each of the non-local attention feature maps according to the weighted original feature map and the weighted non-local attention feature maps, to obtain the target feature map.
3. The method according to claim 2, characterized in that determining the weight value corresponding to each of the non-local attention feature maps and to the original feature map comprises:
inputting the original feature map into a pre-trained weight determination model, to obtain a weight channel vector output by the weight determination model;
decoding the weight channel vector to obtain a weight value for each channel of the weight channel vector, wherein the original feature map and the non-local attention feature maps correspond one-to-one with the channels of the weight channel vector.
4. The method according to claim 2, characterized in that the step of fusing the original feature map with each of the non-local attention feature maps according to the weighted original feature map and the weighted non-local attention feature maps, to obtain the target feature map, comprises:
determining the sum of the weighted original feature map and the weighted non-local attention feature maps as the target feature map.
5. The method according to claim 2, characterized in that the step of fusing the original feature map with each of the non-local attention feature maps according to the weighted original feature map and the weighted non-local attention feature maps, to obtain the target feature map, comprises:
concatenating the weighted original feature map and the weighted non-local attention feature maps to obtain a weighted feature map;
performing dimensionality reduction on the weighted feature map to obtain the target feature map.
6. The method according to any one of claims 1-5, characterized in that the method further comprises:
processing the target image according to the target feature map.
7. An image feature extraction apparatus, characterized in that the apparatus comprises:
a first extraction module, configured to extract an original feature map of a target image;
a second extraction module, configured to input the original feature map into a plurality of non-local attention neural networks for feature extraction, to obtain non-local attention feature maps of the original feature map at a plurality of resolutions, wherein the pooling-layer stride corresponding to each non-local attention neural network is different;
a fusion module, configured to fuse the original feature map with each of the non-local attention feature maps to obtain a target feature map corresponding to the target image.
8. The apparatus according to claim 7, characterized in that the fusion module comprises:
a first determination submodule, configured to determine a weight value corresponding to each of the non-local attention feature maps and to the original feature map;
a second determination submodule, configured to determine a weighted original feature map and weighted non-local attention feature maps according to the weight values respectively corresponding to the non-local attention feature maps and the original feature map;
a fusion submodule, configured to fuse the original feature map with each of the non-local attention feature maps according to the weighted original feature map and the weighted non-local attention feature maps, to obtain the target feature map.
9. A computer-readable medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the steps of the method according to any one of claims 1-6 are implemented.
10. An electronic device, characterized by comprising:
a storage device on which a computer program is stored;
a processing device, configured to execute the computer program in the storage device, to implement the steps of the method according to any one of claims 1-6.
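
For orientation only, and not as part of the claims or of the original specification, the following is a minimal PyTorch-style sketch of the multi-branch extraction described in claims 1 and 7: several non-local (self-attention) branches, each preceded by a pooling layer with a different stride, are applied to a shared original feature map, and every branch output is brought back to the original resolution so it can later be fused. All names (NonLocalBlock, MultiResolutionNonLocalExtractor, pool_strides) and the choice of average pooling are assumptions made for illustration, not terms taken from the patent.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    # Embedded-Gaussian non-local block: every position attends to every other position.
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        inner = channels // reduction
        self.theta = nn.Conv2d(channels, inner, kernel_size=1)
        self.phi = nn.Conv2d(channels, inner, kernel_size=1)
        self.g = nn.Conv2d(channels, inner, kernel_size=1)
        self.out = nn.Conv2d(inner, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, h*w, c')
        k = self.phi(x).flatten(2)                     # (b, c', h*w)
        v = self.g(x).flatten(2).transpose(1, 2)       # (b, h*w, c')
        attn = torch.softmax(q @ k, dim=-1)            # pairwise (non-local) attention weights
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection

class MultiResolutionNonLocalExtractor(nn.Module):
    # One non-local branch per pooling stride; each branch yields a non-local
    # attention feature map computed at a different resolution.
    def __init__(self, channels: int, pool_strides=(1, 2, 4)):
        super().__init__()
        self.pool_strides = pool_strides
        self.branches = nn.ModuleList(NonLocalBlock(channels) for _ in pool_strides)

    def forward(self, original: torch.Tensor):
        h, w = original.shape[-2:]
        maps = []
        for stride, branch in zip(self.pool_strides, self.branches):
            x = original if stride == 1 else F.avg_pool2d(original, kernel_size=stride, stride=stride)
            x = branch(x)
            # upsample back so every map can be fused with the original feature map
            maps.append(F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False))
        return maps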
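In the same illustrative spirit, a hedged sketch of the weight-based fusion of claims 2-5 and 8 follows. It assumes the "weight determination model" is a small global-pooling plus linear head whose output channel vector carries one weight per feature map, and that decoding that vector means applying a softmax; the sum branch corresponds to claim 4 and the concatenation-plus-dimensionality-reduction branch to claim 5. Names such as WeightedFusion and the use of a 1x1 convolution for the reduction are assumptions, not details confirmed by the patent.

import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    def __init__(self, channels: int, num_maps: int, mode: str = "sum"):
        super().__init__()
        self.mode = mode
        # stand-in for the pre-trained weight determination model:
        # global pooling + linear layer, one output channel per map to be fused
        self.weight_model = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, num_maps),
        )
        if mode == "concat":
            # 1x1 convolution reduces the concatenated maps back to `channels`
            self.reduce = nn.Conv2d(channels * num_maps, channels, kernel_size=1)

    def forward(self, original: torch.Tensor, attention_maps: list) -> torch.Tensor:
        maps = [original, *attention_maps]             # one-to-one with the weight channels
        weight_vector = self.weight_model(original)    # (b, num_maps) weight channel vector
        weights = torch.softmax(weight_vector, dim=1)  # "decode" into one weight per map
        weighted = [w.view(-1, 1, 1, 1) * m for w, m in zip(weights.unbind(dim=1), maps)]
        if self.mode == "sum":                         # sum of the weighted maps
            return torch.stack(weighted, dim=0).sum(dim=0)
        fused = torch.cat(weighted, dim=1)             # concatenate, then reduce dimensionality
        return self.reduce(fused)

One possible way to wire the two sketches together (shapes and channel counts are arbitrary):

extractor = MultiResolutionNonLocalExtractor(channels=256, pool_strides=(1, 2, 4))
fusion = WeightedFusion(channels=256, num_maps=4, mode="sum")   # 1 original map + 3 attention maps
original = torch.randn(1, 256, 32, 32)                          # original feature map of the target image
target_feature_map = fusion(original, extractor(original))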
CN201910611747.3A 2019-07-08 2019-07-08 Image feature extraction method and device, storage medium and electronic equipment Active CN110298413B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910611747.3A CN110298413B (en) 2019-07-08 2019-07-08 Image feature extraction method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910611747.3A CN110298413B (en) 2019-07-08 2019-07-08 Image feature extraction method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN110298413A true CN110298413A (en) 2019-10-01
CN110298413B CN110298413B (en) 2021-07-16

Family

ID=68030670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910611747.3A Active CN110298413B (en) 2019-07-08 2019-07-08 Image feature extraction method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110298413B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170374528A1 (en) * 2009-12-18 2017-12-28 Comcast Cable Communications, Llc Location Intelligence Management System for Border Security
US10198823B1 (en) * 2017-03-28 2019-02-05 Amazon Technologies, Inc. Segmentation of object image data from background image data
CN109102502A (en) * 2018-08-03 2018-12-28 西北工业大学 Pulmonary nodule detection method based on Three dimensional convolution neural network
CN109815964A (en) * 2019-01-31 2019-05-28 北京字节跳动网络技术有限公司 The method and apparatus for extracting the characteristic pattern of image
CN109829501A (en) * 2019-02-01 2019-05-31 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN109871798A (en) * 2019-02-01 2019-06-11 浙江大学 A kind of remote sensing image building extracting method based on convolutional neural networks

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
KE LI et al.: "Multi-modal feature fusion for geographic image annotation", Pattern Recognition *
YULUN ZHANG et al.: "Residual Non-local Attention Networks for Image Restoration", ICLR 2019 *
ZHOU CAIDONG et al.: "Chinese abstractive summarization combining attention and convolutional neural networks", Computer Engineering and Applications *
HU BIN et al.: "Face recognition based on a self-attention Siamese neural network", Information & Computer *
MA SHULEI et al.: "An improved image captioning method with a global attention mechanism", Journal of Xidian University *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113095106A (en) * 2019-12-23 2021-07-09 华为数字技术(苏州)有限公司 Human body posture estimation method and device
US11450021B2 (en) 2019-12-30 2022-09-20 Sensetime International Pte. Ltd. Image processing method and apparatus, electronic device, and storage medium
WO2021136978A1 (en) * 2019-12-30 2021-07-08 Sensetime International Pte. Ltd. Image processing method and apparatus, electronic device, and storage medium
CN111046847A (en) * 2019-12-30 2020-04-21 北京澎思科技有限公司 Video processing method and device, electronic equipment and medium
CN112435174A (en) * 2020-08-20 2021-03-02 辽宁师范大学 Underwater image processing method based on double attention mechanism
CN112435174B (en) * 2020-08-20 2023-07-11 辽宁师范大学 Underwater image processing method based on double-attention mechanism
CN112233077A (en) * 2020-10-10 2021-01-15 北京三快在线科技有限公司 Image analysis method, device, equipment and storage medium
CN112215789A (en) * 2020-10-12 2021-01-12 北京字节跳动网络技术有限公司 Image defogging method, device, equipment and computer readable medium
CN112967730A (en) * 2021-01-29 2021-06-15 北京达佳互联信息技术有限公司 Voice signal processing method and device, electronic equipment and storage medium
CN113052175A (en) * 2021-03-26 2021-06-29 北京百度网讯科技有限公司 Target detection method and device, electronic equipment and readable storage medium
CN113052175B (en) * 2021-03-26 2024-03-29 北京百度网讯科技有限公司 Target detection method, target detection device, electronic equipment and readable storage medium
CN113240042A (en) * 2021-06-01 2021-08-10 平安科技(深圳)有限公司 Image classification preprocessing method, image classification preprocessing device, image classification equipment and storage medium
CN113240042B (en) * 2021-06-01 2023-08-29 平安科技(深圳)有限公司 Image classification preprocessing, image classification method, device, equipment and storage medium
CN114519401A (en) * 2022-02-22 2022-05-20 平安科技(深圳)有限公司 Image classification method and device, electronic equipment and storage medium
CN115063810A (en) * 2022-06-24 2022-09-16 联仁健康医疗大数据科技股份有限公司 Text detection method and device, electronic equipment and storage medium
CN115375980A (en) * 2022-06-30 2022-11-22 杭州电子科技大学 Block chain-based digital image evidence storing system and method
CN115375980B (en) * 2022-06-30 2023-05-09 杭州电子科技大学 Digital image certification system and certification method based on blockchain

Also Published As

Publication number Publication date
CN110298413B (en) 2021-07-16

Similar Documents

Publication Publication Date Title
CN110298413A (en) Image characteristic extracting method, device, storage medium and electronic equipment
EP3398119B1 (en) Generative neural networks for generating images using a hidden canvas
CN112184738B (en) Image segmentation method, device, equipment and storage medium
CN111104962A (en) Semantic segmentation method and device for image, electronic equipment and readable storage medium
CN112364860B (en) Training method and device of character recognition model and electronic equipment
CN107644209A (en) Method for detecting human face and device
CN110532981A (en) Human body key point extracting method, device, readable storage medium storing program for executing and equipment
WO2022105553A1 (en) Speech synthesis method and apparatus, readable medium, and electronic device
CN109376268A (en) Video classification methods, device, electronic equipment and computer readable storage medium
US20240233334A1 (en) Multi-modal data retrieval method and apparatus, medium, and electronic device
CN110532983A (en) Method for processing video frequency, device, medium and equipment
CN110362698A (en) A kind of pictorial information generation method, device, mobile terminal and storage medium
CN110381352A (en) Display methods, device, electronic equipment and the readable medium of virtual present
CN110427915A (en) Method and apparatus for output information
CN116128055A (en) Map construction method, map construction device, electronic equipment and computer readable medium
CN113255327B (en) Text processing method and device, electronic equipment and computer readable storage medium
CN110414450A (en) Keyword detection method, apparatus, storage medium and electronic equipment
US20230315990A1 (en) Text detection method and apparatus, electronic device, and storage medium
CN110334650A (en) Object detecting method, device, electronic equipment and storage medium
CN114420135A (en) Attention mechanism-based voiceprint recognition method and device
CN111859970B (en) Method, apparatus, device and medium for processing information
CN109829431A (en) Method and apparatus for generating information
CN110378282A (en) Image processing method and device
CN110414527A (en) Character identifying method, device, storage medium and electronic equipment
CN113706663B (en) Image generation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant