CN111915480B - Method, apparatus, device and computer readable medium for generating feature extraction network - Google Patents

Method, apparatus, device and computer readable medium for generating feature extraction network

Info

Publication number
CN111915480B
Authority
CN
China
Prior art keywords
sample
affine transformation
feature map
image
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010685579.5A
Other languages
Chinese (zh)
Other versions
CN111915480A (en)
Inventor
何轶
王长虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Original Assignee
Douyin Vision Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Douyin Vision Co Ltd filed Critical Douyin Vision Co Ltd
Priority to CN202010685579.5A priority Critical patent/CN111915480B/en
Publication of CN111915480A publication Critical patent/CN111915480A/en
Priority to PCT/CN2021/096145 priority patent/WO2022012179A1/en
Application granted granted Critical
Publication of CN111915480B publication Critical patent/CN111915480B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/14Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G06T3/147Transformations for image registration, e.g. adjusting or mapping for alignment of images using affine transformations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure disclose methods, apparatuses, electronic devices, and computer-readable media for generating a feature extraction network. One embodiment of the method comprises the following steps: inputting a first sample image and a second sample image into the feature extraction network respectively to obtain a first sample feature map and a second sample feature map, wherein the second sample image is obtained by performing affine transformation on the first sample image; performing affine transformation on the first sample feature map to obtain a first sample affine transformation feature map; for a first vector and a second vector at the same position in the first sample affine transformation feature map and the second sample feature map, determining a loss value of the first vector and the second vector based on a preset loss function; and training the feature extraction network based on the loss value. This embodiment realizes training optimization of the feature extraction network, so that the features extracted from an affine-transformed picture are similar to those extracted from the original picture.

Description

Method, apparatus, device and computer readable medium for generating feature extraction network
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to a method, an apparatus, an electronic device, and a computer-readable medium for generating a feature extraction network.
Background
With the development of the internet and the popularization of artificial intelligence technology centered on deep learning, computer vision technology has reached many areas of everyday life. In practice, the features extracted from an affine-transformed picture often differ considerably from the features extracted from the original picture, which degrades the accuracy of subsequent similarity calculations.
Disclosure of Invention
This disclosure section is provided to introduce concepts in a simplified form that are further described below in the detailed description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a method, apparatus, electronic device and computer readable medium for generating a feature extraction network to solve the technical problems mentioned in the background above.
In a first aspect, some embodiments of the present disclosure provide a method of generating a feature extraction network, the method comprising: inputting a first sample image and a second sample image into the feature extraction network respectively to obtain a first sample feature map and a second sample feature map, wherein the second sample image is obtained by performing affine transformation on the first sample image; performing affine transformation on the first sample feature map to obtain a first sample affine transformation feature map; for a first vector and a second vector at the same position in the first sample affine transformation feature map and the second sample feature map, determining a loss value of the first vector and the second vector based on a preset loss function; and training the feature extraction network based on the loss value.
In a second aspect, some embodiments of the present disclosure provide an apparatus for generating a feature extraction network, the apparatus comprising: a feature map generation unit configured to input a first sample image and a second sample image into the feature extraction network respectively to obtain a first sample feature map and a second sample feature map, wherein the second sample image is obtained by performing affine transformation on the first sample image; an affine transformation unit configured to perform affine transformation on the first sample feature map to obtain a first sample affine transformation feature map; a loss determination unit configured to determine loss values of a first vector and a second vector at the same position in the first sample affine transformation feature map and the second sample feature map based on a preset loss function; and a network training unit configured to train the feature extraction network based on the loss value.
In a third aspect, some embodiments of the present disclosure provide an electronic device comprising: one or more processors; and a storage device having one or more programs stored thereon, which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method as described in any of the implementations of the first aspect.
One of the above embodiments of the present disclosure has the following advantageous effects: affine transformation is performed on the first sample image to obtain the second sample image, and both sample images are input into the feature extraction network to obtain their feature maps. The feature extraction network can then be optimized using these feature maps together with the affine-transformed first sample feature map: a loss value is determined between the feature map of the affine-transformed first sample image and the affine transformation of the feature map of the first sample image at the same position, and the feature extraction network is trained with this loss value. On this basis, the similarity between the features of a picture after affine transformation and the features of the original picture can be improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a schematic illustration of one application scenario of a method of generating a feature extraction network according to some embodiments of the present disclosure;
FIG. 2 is a flow chart of some embodiments of a method of generating a feature extraction network according to the present disclosure;
FIG. 3 is a flow chart of further embodiments of a method of generating a feature extraction network according to the present disclosure;
FIG. 4 is a schematic structural diagram of some embodiments of an apparatus to generate a feature extraction network according to the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a," "an," and "a plurality of" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram 100 of one application scenario of a method of generating a feature extraction network according to some embodiments of the present disclosure.
As shown in fig. 1, the computing device 101 may input the first sample image 102 and the second sample image 103 into the feature extraction network 104 respectively, resulting in a first sample feature map 105 and a second sample feature map 106. As an example, the first sample image 102 may be a face image, and the second sample image 103 may be that face image after affine transformation. Affine transformation, which may be a translation transformation, is then performed on the first sample feature map 105 to obtain a first sample affine transformation feature map 107. For a first vector and a second vector at the same position in the first sample affine transformation feature map 107 and the second sample feature map 106, a loss value 108 of the first vector and the second vector may be determined based on a preset loss function, for example a Euclidean distance loss function. As an example, for a given pixel position in the first sample image 102, the position it reaches in the first sample affine transformation feature map 107 after feature extraction and affine transformation coincides with the position it reaches in the second sample feature map 106 after affine transformation and feature extraction; such coinciding positions are the "same position". Training the feature extraction network 104 based on the loss value 108 optimizes the feature extraction network, and on this basis the similarity between the features extracted from a picture and those extracted from its affine-transformed counterpart can be improved. It will be appreciated that the method for generating the feature extraction network may be performed by the computing device 101 or by a server; the execution subject may also be a device formed by integrating the computing device 101 and a server through a network, or may be various software programs. The computing device 101 may be any of various electronic devices with information processing capabilities, including but not limited to smartphones, tablet computers, e-book readers, laptop computers and desktop computers. The execution subject may also be embodied as a server, as software, or the like. When the execution subject is software, it may be installed in the electronic devices enumerated above, and may be implemented, for example, as multiple pieces of software or software modules for providing distributed services, or as a single piece of software or software module. No specific limitation is imposed here.
It should be understood that the number of computing devices in fig. 1 is merely illustrative. There may be any number of computing devices, as desired for an implementation.
With continued reference to fig. 2, a flow 200 of some embodiments of a method of generating a feature extraction network according to the present disclosure is shown. The method for generating the feature extraction network comprises the following steps:
step 201, inputting a first sample image and a second sample image into the feature extraction network respectively to obtain a first sample feature map and a second sample feature map, wherein the second sample image is obtained by carrying out affine transformation on the first sample image.
In some embodiments, the first sample image may be any image.
In some embodiments, the first sample feature map and the second sample feature map may have features such as size features and shading features of the image.
In some alternative implementations of some embodiments, the first sample feature map and the second sample feature map have color features, texture features, shape features, and spatial relationship features.
In some embodiments, affine transformations may include translational, rotational, scaling, shearing, or reflective operations. As an example, the first sample image may be rotated to obtain the second sample image, or the first sample image may be scaled to obtain the second sample image.
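For reference, all of these operations are special cases of the general affine map on pixel coordinates, which can be written as a linear transformation plus a translation:

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = A \begin{pmatrix} x \\ y \end{pmatrix} + b, \qquad A \in \mathbb{R}^{2\times 2},\ b \in \mathbb{R}^{2},$$

where the choice of the matrix A and the offset b determines whether the transformation is a translation, rotation, scaling, shearing or reflection.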
In some embodiments, the feature extraction network may be any of various neural networks for feature extraction, for example a convolutional neural network or a recurrent neural network. The first and second sample feature maps may be images having features of the first and second sample images, respectively.
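As a concrete illustration of step 201, the following is a minimal sketch, assuming PyTorch and a small convolutional network invented here for illustration (the disclosure does not prescribe a particular framework or architecture). It builds a feature extraction network, produces the second sample image by a translation-type affine transformation of the first sample image (implemented as a cyclic shift for simplicity), and extracts the two sample feature maps:

```python
import torch
import torch.nn as nn

# Hypothetical feature extraction network: any CNN mapping an image to a
# spatial feature map (here it keeps the input resolution).
class FeatureExtractor(nn.Module):
    def __init__(self, in_channels=3, feat_channels=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, feat_channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.body(x)  # (N, feat_channels, H, W)

def translate(img, dx, dy):
    """Translation-type affine transformation, sketched as a cyclic shift by (dx, dy) pixels."""
    return torch.roll(img, shifts=(dy, dx), dims=(-2, -1))

feature_net = FeatureExtractor()

first_sample_image = torch.rand(1, 3, 64, 64)               # any image
second_sample_image = translate(first_sample_image, 8, 4)   # affine-transformed copy

first_sample_feature_map = feature_net(first_sample_image)    # (1, 16, 64, 64)
second_sample_feature_map = feature_net(second_sample_image)  # (1, 16, 64, 64)
```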
Step 202, performing affine transformation on the first sample feature map to obtain a first sample affine transformation feature map.
In some embodiments, as an example, the first sample feature map may be subjected to a translation transformation, resulting in a first sample affine transformation feature map. The first sample feature map may also be rotated to obtain a first sample affine transformation feature map.
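Continuing the sketch above, step 202 may, as an example, apply the same translation to the first sample feature map that was used to produce the second sample image (an assumption made explicit here, since it is what makes the position-by-position comparison in step 203 meaningful); because the feature maps in the sketch keep the input resolution, the shift values carry over unchanged:

```python
# Same translation as was applied to the image in step 201.
first_sample_affine_feature_map = translate(first_sample_feature_map, 8, 4)
# This map and second_sample_feature_map can now be compared position by position (step 203).
```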
Step 203, determining loss values of the first vector and the second vector based on a preset loss function for the first vector and the second vector at the same position in the affine transformation feature map of the first sample and the feature map of the second sample.
In some embodiments, the same position may be a position where the first sample affine transformation feature map and the second sample feature map have the same coordinates in the same coordinate system.
In some embodiments, the first vector and the second vector are co-located feature vectors in the first sample affine transformation feature map and the second sample feature map.
In some embodiments, the loss function is a function that measures the difference between the fitted result and the true result. As an example, the loss function may be an absolute value loss function or a square loss function. The loss value may represent the degree of difference between the first sample affine transformation feature map and the second sample feature map. As an example, normalization processing is performed on the vectors corresponding to each pixel in the first sample affine transformation feature map and in the second sample feature map, so as to obtain a normalized vector set of the first sample affine transformation feature map and a normalized vector set of the second sample feature map. For each normalized vector in the normalized vector set of the first sample affine transformation feature map and the corresponding normalized vector in the normalized vector set of the second sample feature map, a loss value is then determined as follows. Here "corresponding" means at the same position: the position reached in the first sample affine transformation feature map after feature extraction and affine transformation coincides with the position reached in the second sample feature map after affine transformation and feature extraction. Each normalized vector is read as a hash code, with elements greater than 0.5 treated as hash bit 1 and elements less than 0.5 treated as hash bit 0. Let i index the bits of the two corresponding hash codes, let p_i be the probability that the i-th bit of the hash code from the normalized vector set of the first sample affine transformation feature map takes the value 1, and let q_i be the probability that the i-th bit of the hash code from the normalized vector set of the second sample feature map takes the value 1. Then p_i·q_i is the probability that both i-th bits take the value 1, (1-p_i)·(1-q_i) is the probability that both take the value 0, and their sum p_i·q_i + (1-p_i)·(1-q_i) is the probability that the two hash codes agree at bit i; its complement reflects the predicted difference at that bit. Summing the per-bit difference over all bits yields the loss value between the normalized vector of the first sample affine transformation feature map and the normalized vector of the second sample feature map at the same position. As an example, taking the per-bit difference to be the negative logarithm of the agreement probability gives

$$\mathrm{loss} = -\sum_{i} \log\bigl(p_i q_i + (1-p_i)(1-q_i)\bigr).$$
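A minimal sketch of this calculation follows, assuming the normalization is a sigmoid and the per-bit difference is the negative log of the agreement probability, as in the example formula above (both are illustrative choices, not prescribed by the disclosure):

```python
import torch

def hash_agreement_loss(first_vec, second_vec, eps=1e-8):
    """Loss between two co-located feature vectors.

    first_vec  : vector at one position of the first sample affine transformation feature map
    second_vec : vector at the same position of the second sample feature map
    """
    p = torch.sigmoid(first_vec)   # probability that each hash bit of the first code is 1
    q = torch.sigmoid(second_vec)  # probability that each hash bit of the second code is 1
    agreement = p * q + (1 - p) * (1 - q)     # per-bit probability that the two bits match
    return -torch.log(agreement + eps).sum()  # sum of per-bit differences

# Example with two co-located vectors whose length equals the number of feature channels.
v1 = torch.randn(16)
v2 = torch.randn(16)
loss_value = hash_agreement_loss(v1, v2)
```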
In some alternative implementations of some embodiments, the loss function may be a maximum likelihood estimation function, a divergence function, or a hamming distance.
Step 204, training the feature extraction network based on the loss value.
In some embodiments, the weights in the feature extraction network may be optimized by gradient descent to minimize the loss.
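Putting steps 201 to 204 together, one training iteration might look as follows; this continues the sketch from step 201 (feature_net and translate are defined there), and the SGD optimizer, learning rate and averaging of the per-position loss over the feature map are illustrative assumptions:

```python
import torch

optimizer = torch.optim.SGD(feature_net.parameters(), lr=1e-3)

for _ in range(100):  # illustrative number of training iterations
    first_sample_image = torch.rand(1, 3, 64, 64)
    second_sample_image = translate(first_sample_image, 8, 4)   # step 201: affine-transformed copy

    first_map = feature_net(first_sample_image)                 # step 201: feature extraction
    second_map = feature_net(second_sample_image)
    first_affine_map = translate(first_map, 8, 4)               # step 202: same affine transformation

    # Step 203: per-position hash-agreement loss, averaged over all positions.
    p = torch.sigmoid(first_affine_map)
    q = torch.sigmoid(second_map)
    agreement = p * q + (1 - p) * (1 - q)
    loss = -torch.log(agreement + 1e-8).sum(dim=1).mean()

    optimizer.zero_grad()                                       # step 204: gradient descent update
    loss.backward()
    optimizer.step()
```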
One of the above embodiments of the present disclosure has the following advantageous effects: based on the loss function, the feature map obtained by extracting features from the affine-transformed first sample image is compared with the feature map obtained by extracting features from the first sample image and then applying the affine transformation, which yields the loss value for training the neural network. This loss value can be used to train and thereby optimize the feature extraction network.
With further reference to fig. 3, a flow 300 of further embodiments of a method of generating a feature extraction network is shown. The method for generating the feature extraction network comprises the following steps:
step 301, preprocessing the first image to obtain a first sample image.
In some embodiments, the first image may be subjected to graying processing, geometric transformation, and image enhancement.
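As a sketch of such preprocessing (the specific operations, parameters and the use of torchvision are assumptions made only for illustration):

```python
import torch
import torchvision.transforms.functional as TF

def preprocess(first_image: torch.Tensor) -> torch.Tensor:
    """Illustrative preprocessing: graying, a geometric transformation, and image enhancement."""
    gray = TF.rgb_to_grayscale(first_image, num_output_channels=3)  # graying processing
    resized = TF.resize(gray, [64, 64])                             # geometric transformation
    enhanced = TF.adjust_contrast(resized, contrast_factor=1.5)     # image enhancement
    return enhanced

first_image = torch.rand(1, 3, 128, 96)       # any first image
first_sample_image = preprocess(first_image)  # used as input to step 302
```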
Step 302, inputting the first sample image and the second sample image into the feature extraction network respectively to obtain a first sample feature map and a second sample feature map, wherein the second sample image is obtained by carrying out affine transformation on the first sample image.
Step 303, carrying out affine transformation on the first sample feature map to obtain a first sample affine transformation feature map.
Step 304, determining loss values of the first vector and the second vector based on a preset loss function for the first vector and the second vector at the same position in the affine transformation feature map of the first sample and the feature map of the second sample.
Step 305, training the feature extraction network based on the loss value.
In some embodiments, the specific implementation of steps 302, 303, 304 and 305 and the technical effects they bring may refer to steps 201, 202, 203 and 204 in the corresponding embodiment of fig. 2, and are not repeated here.
In the method for generating a feature extraction network disclosed by some embodiments of the present disclosure, the image is preprocessed by graying processing, geometric transformation and image enhancement, which removes irrelevant information from the image. This improves the training effect of the network, and on that basis the accuracy of the features extracted by the network can be improved.
With further reference to fig. 4, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of an apparatus for generating a feature extraction network, which apparatus embodiments correspond to those method embodiments shown in fig. 2, and which apparatus is particularly applicable in various electronic devices.
As shown in fig. 4, an apparatus 400 for generating a feature extraction network of some embodiments includes: a feature map generating unit 401, an affine transformation unit 402, a loss value determining unit 403, and a network training unit 404. The feature map generating unit 401 is configured to input a first sample image and a second sample image into the feature extraction network respectively, so as to obtain a first sample feature map and a second sample feature map, where the second sample image is obtained by performing affine transformation on the first sample image; an affine transformation unit 402 configured to affine-transform the first sample feature map to obtain a first sample affine transformation feature map; a loss value determining unit 403 configured to determine, for a first vector and a second vector at the same position in the first sample affine transformation feature map and the second sample feature map, a loss value of the first vector and the second vector based on a preset loss function; a network training unit 404 configured to train the feature extraction network based on the loss value.
In an alternative implementation of some embodiments, the apparatus further includes: and the image preprocessing unit is configured to preprocess the first image to obtain the first sample image.
In an alternative implementation of some embodiments, the first sample feature map and the second sample feature map include: color features, texture features, shape features, and spatial relationship features.
In an alternative implementation of some embodiments, the above-described loss function is one of: maximum likelihood estimation function, divergence function, hamming distance.
It will be appreciated that the elements described in the apparatus 400 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting benefits described above with respect to the method are equally applicable to the apparatus 400 and the units contained therein, and are not described in detail herein.
As shown in fig. 5, the electronic device 500 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 shows an electronic device 500 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 5 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communications device 509, or from the storage device 508, or from the ROM 502. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing device 501.
It should be noted that the computer readable medium according to some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients, servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol ), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: respectively inputting a first sample image and a second sample image into the feature extraction network to obtain a first sample feature image and a second sample feature image, wherein the second sample image is obtained by carrying out affine transformation on the first sample image; carrying out affine transformation on the first sample feature map to obtain a first sample affine transformation feature map; determining loss values of a first vector and a second vector at the same position in the affine transformation feature map of the first sample and the affine transformation feature map of the second sample based on a preset loss function; training the feature extraction network based on the loss value.
Computer program code for carrying out operations for some embodiments of the present disclosure may be written in one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor includes a feature map generation unit, an affine transformation unit, a loss value determination unit, and a network training unit. The names of these units do not constitute limitations on the unit itself in some cases, and for example, the feature map generation unit may also be described as "a unit that generates a feature map".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In accordance with one or more embodiments of the present disclosure, there is provided a method of generating a feature extraction network, comprising: inputting a first sample image and a second sample image into the feature extraction network respectively to obtain a first sample feature map and a second sample feature map, wherein the second sample image is obtained by performing affine transformation on the first sample image; performing affine transformation on the first sample feature map to obtain a first sample affine transformation feature map; for a first vector and a second vector at the same position in the first sample affine transformation feature map and the second sample feature map, determining a loss value of the first vector and the second vector based on a preset loss function; and training the feature extraction network based on the loss value.
According to one or more embodiments of the present disclosure, before inputting the first sample image and the second sample image into the feature extraction network to obtain the first sample feature map and the second sample feature map, respectively, the method further includes: and preprocessing the first image to obtain the first sample image.
According to one or more embodiments of the present disclosure, the first sample feature map and the second sample feature map described above include: color features, texture features, shape features, and spatial relationship features.
According to one or more embodiments of the present disclosure, the above-described loss function is one of: maximum likelihood estimation function, divergence function, hamming distance.
In accordance with one or more embodiments of the present disclosure, an apparatus for generating a feature extraction network includes: a feature map generating unit configured to input a first sample image and a second sample image into the feature extraction network, respectively, to obtain a first sample feature map and a second sample feature map, wherein the second sample image is obtained by performing affine transformation on the first sample image; an affine transformation unit configured to affine-transform the first sample feature map to obtain a first sample affine-transformed feature map; a loss value determining unit configured to determine, for a first vector and a second vector at the same position in the first sample affine transformation feature map and the second sample feature map, a loss value of the first vector and the second vector based on a preset loss function; and a network training unit configured to train the feature extraction network based on the loss value.
According to one or more embodiments of the present disclosure, the above-described apparatus further includes: and the image preprocessing unit is configured to preprocess the first image to obtain the first sample image.
According to one or more embodiments of the present disclosure, the first sample feature map and the second sample feature map described above include: color features, texture features, shape features, and spatial relationship features.
According to one or more embodiments of the present disclosure, the above-described loss function is one of: maximum likelihood estimation function, divergence function, hamming distance.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: one or more processors; and a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement a method as described above.
According to one or more embodiments of the present disclosure, there is provided a computer readable medium having stored thereon a computer program, wherein the program, when executed by a processor, implements a method as described in any of the embodiments above.
The foregoing description is merely of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example technical solutions in which the above features are replaced with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. A method of generating a feature extraction network, comprising:
respectively inputting a first sample image and a second sample image into a feature extraction network to obtain a first sample feature image and a second sample feature image, wherein the second sample image is obtained by carrying out affine transformation on the first sample image;
carrying out affine transformation on the first sample feature map to obtain a first sample affine transformation feature map;
determining a first vector of the first sample affine transformation feature map and a second vector of the second sample feature map, and determining loss values of the first vector and the second vector based on a preset loss function, wherein the first vector and the second vector are feature vectors at the same position in the first sample affine transformation feature map and the second sample feature map respectively;
training the feature extraction network based on the loss value.
2. The method of claim 1, wherein, before inputting the first sample image and the second sample image into the feature extraction network respectively to obtain the first sample feature map and the second sample feature map, the second sample image being obtained by performing affine transformation on the first sample image, the method further comprises:
and preprocessing the first image to obtain the first sample image.
3. The method of claim 1, wherein the first and second sample feature maps comprise:
color features, texture features, shape features, and spatial relationship features.
4. The method of claim 1, wherein the loss function is one of:
maximum likelihood estimation function, divergence function, hamming distance.
5. An apparatus for generating a feature extraction network, comprising:
the characteristic map generating unit is configured to input a first sample image and a second sample image into the characteristic extraction network respectively to obtain a first sample characteristic map and a second sample characteristic map, wherein the second sample image is obtained by carrying out affine transformation on the first sample image;
an affine transformation unit configured to perform affine transformation on the first sample feature map to obtain a first sample affine transformation feature map;
a loss value determination unit configured to determine a first vector of the first sample affine transformation feature map and a second vector of the second sample feature map, which are feature vectors of the same position in the first sample affine transformation feature map and the second sample feature map, respectively, and determine loss values of the first vector and the second vector based on a preset loss function;
a network training unit configured to train the feature extraction network based on the loss value.
6. The apparatus of claim 5, wherein the apparatus further comprises:
and the image preprocessing unit is configured to preprocess the first image to obtain the first sample image.
7. The apparatus of claim 5, wherein the first and second sample feature maps comprise:
color features, texture features, shape features, and spatial relationship features.
8. The apparatus of claim 5, wherein the loss function is one of: maximum likelihood estimation function, divergence function, hamming distance.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-4.
10. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 1-4.
CN202010685579.5A 2020-07-16 2020-07-16 Method, apparatus, device and computer readable medium for generating feature extraction network Active CN111915480B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010685579.5A CN111915480B (en) 2020-07-16 2020-07-16 Method, apparatus, device and computer readable medium for generating feature extraction network
PCT/CN2021/096145 WO2022012179A1 (en) 2020-07-16 2021-05-26 Method and apparatus for generating feature extraction network, and device and computer-readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010685579.5A CN111915480B (en) 2020-07-16 2020-07-16 Method, apparatus, device and computer readable medium for generating feature extraction network

Publications (2)

Publication Number Publication Date
CN111915480A CN111915480A (en) 2020-11-10
CN111915480B true CN111915480B (en) 2023-05-23

Family

ID=73280390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010685579.5A Active CN111915480B (en) 2020-07-16 2020-07-16 Method, apparatus, device and computer readable medium for generating feature extraction network

Country Status (2)

Country Link
CN (1) CN111915480B (en)
WO (1) WO2022012179A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111915480B (en) * 2020-07-16 2023-05-23 抖音视界有限公司 Method, apparatus, device and computer readable medium for generating feature extraction network
CN112651880B (en) * 2020-12-25 2022-12-30 北京市商汤科技开发有限公司 Video data processing method and device, electronic equipment and storage medium
CN113065475B (en) * 2021-04-08 2023-11-07 上海晓材科技有限公司 Rapid and accurate identification method for CAD (computer aided design) legend
CN113313022B (en) * 2021-05-27 2023-11-10 北京百度网讯科技有限公司 Training method of character recognition model and method for recognizing characters in image
CN114528976B (en) * 2022-01-24 2023-01-03 北京智源人工智能研究院 Equal transformation network training method and device, electronic equipment and storage medium
CN115082740B (en) * 2022-07-18 2023-09-01 北京百度网讯科技有限公司 Target detection model training method, target detection device and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015118473A (en) * 2013-12-17 2015-06-25 日本電信電話株式会社 Characteristic extraction device, method and program
CN109344845A (en) * 2018-09-21 2019-02-15 哈尔滨工业大学 A kind of feature matching method based on Triplet deep neural network structure
CN110188754A (en) * 2019-05-29 2019-08-30 腾讯科技(深圳)有限公司 Image partition method and device, model training method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007133840A (en) * 2005-11-07 2007-05-31 Hirotaka Niitsuma Em object localization using haar-like feature
CN102231191B (en) * 2011-07-17 2012-12-26 西安电子科技大学 Multimodal image feature extraction and matching method based on ASIFT (affine scale invariant feature transform)
CN110555835B (en) * 2019-09-04 2022-12-02 郑州大学 Brain slice image region division method and device
CN111382793B (en) * 2020-03-09 2023-02-28 腾讯音乐娱乐科技(深圳)有限公司 Feature extraction method and device and storage medium
CN111382727B (en) * 2020-04-02 2023-07-25 安徽睿极智能科技有限公司 Dog face recognition method based on deep learning
CN111340013B (en) * 2020-05-22 2020-09-01 腾讯科技(深圳)有限公司 Face recognition method and device, computer equipment and storage medium
CN111915480B (en) * 2020-07-16 2023-05-23 抖音视界有限公司 Method, apparatus, device and computer readable medium for generating feature extraction network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015118473A (en) * 2013-12-17 2015-06-25 日本電信電話株式会社 Characteristic extraction device, method and program
CN109344845A (en) * 2018-09-21 2019-02-15 哈尔滨工业大学 A kind of feature matching method based on Triplet deep neural network structure
CN110188754A (en) * 2019-05-29 2019-08-30 腾讯科技(深圳)有限公司 Image partition method and device, model training method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hui Lin et al. "Image Registration Based on Corner Detection And Affine Transformation". 2010 3rd International Congress on Image and Signal Processing, 2010, pp. 2184-2188. *
Hou Chen et al. "Application of Convolutional Neural Networks in the Scale-Invariant Feature Transform Image Registration Method". Electronic Technology, 2020, Vol. 49, No. 04, pp. 189-191. *

Also Published As

Publication number Publication date
WO2022012179A1 (en) 2022-01-20
CN111915480A (en) 2020-11-10

Similar Documents

Publication Publication Date Title
CN111915480B (en) Method, apparatus, device and computer readable medium for generating feature extraction network
CN109800732B (en) Method and device for generating cartoon head portrait generation model
CN111369427B (en) Image processing method, image processing device, readable medium and electronic equipment
CN113379627A (en) Training method of image enhancement model and method for enhancing image
CN112766284B (en) Image recognition method and device, storage medium and electronic equipment
CN112418249A (en) Mask image generation method and device, electronic equipment and computer readable medium
CN115578570A (en) Image processing method, device, readable medium and electronic equipment
WO2022012178A1 (en) Method for generating objective function, apparatus, electronic device and computer readable medium
CN111539287B (en) Method and device for training face image generation model
CN114792355A (en) Virtual image generation method and device, electronic equipment and storage medium
CN111898338B (en) Text generation method and device and electronic equipment
CN111967584A (en) Method, device, electronic equipment and computer storage medium for generating countermeasure sample
CN115100536B (en) Building identification method and device, electronic equipment and computer readable medium
CN111680754B (en) Image classification method, device, electronic equipment and computer readable storage medium
CN116704593A (en) Predictive model training method, apparatus, electronic device, and computer-readable medium
CN116310615A (en) Image processing method, device, equipment and medium
CN111814807B (en) Method, apparatus, electronic device, and computer-readable medium for processing image
CN112085035A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN111797263A (en) Image label generation method, device, equipment and computer readable medium
CN114049417B (en) Virtual character image generation method and device, readable medium and electronic equipment
CN115345931B (en) Object attitude key point information generation method and device, electronic equipment and medium
CN111582376B (en) Visualization method and device for neural network, electronic equipment and medium
CN114399814B (en) Deep learning-based occlusion object removing and three-dimensional reconstructing method
CN113239943B (en) Three-dimensional component extraction and combination method and device based on component semantic graph
CN114863025B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant