Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they are to be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram 100 of one application scenario of a method of generating a feature extraction network according to some embodiments of the present disclosure.
As shown in fig. 1, the computing device 101 may input the first sample image 102 and the second sample image 103 into the above feature extraction network 104, respectively, resulting in a first sample feature map 105 and a second sample feature map 106. As an example, the first sample image 102 may be a face image, and the second sample image 103 may be that face image after affine transformation. An affine transformation, which may be a translational transformation, is then performed on the first sample feature map 105 to obtain the first sample affine transformation feature map 107. For a first vector and a second vector at the same position in the first sample affine transformation feature map 107 and the second sample feature map 106, a loss value 108 of the first vector and the second vector may be determined based on a preset euclidean distance loss function. As an example, a pixel position in the first sample image 102 arrives at the same position whether it undergoes feature extraction followed by affine transformation (yielding the first sample affine transformation feature map 107) or affine transformation followed by feature extraction (yielding the second sample feature map 106). Training the feature extraction network 104 based on the loss value 108 can optimize the feature extraction network, and on this basis, the similarity between the features extracted from a picture and those extracted from the affine-transformed picture can be improved. It will be appreciated that the method for generating the feature extraction network may be performed by the computing device 101, or may be performed by a server; the execution body of the method may further include a device formed by integrating the computing device 101 and the server through a network, or the method may be performed by various software programs.
The computing device 101 may be any of a variety of electronic devices having information processing capabilities, including, but not limited to, smartphones, tablet computers, electronic book readers, laptop computers, desktop computers, and the like. The execution body may also be embodied as a server, software, or the like. When the execution body is software, it can be installed in the electronic devices enumerated above. It may be implemented, for example, as a plurality of software or software modules for providing distributed services, or as a single software or software module. No particular limitation is made herein.
It should be understood that the number of computing devices in fig. 1 is merely illustrative. There may be any number of computing devices, as desired for an implementation.
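As an illustrative, non-limiting sketch, the scenario of fig. 1 can be traced in code. The toy feature extractor below (a fixed circular convolution with a ReLU) and the circular-shift translation are assumptions of this example, not the disclosed network; they are chosen so that feature extraction and translation commute exactly, making the euclidean distance loss at co-located positions zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(img):
    """Toy stand-in for feature extraction network 104: a fixed 3x3
    circular convolution followed by a ReLU (shift-equivariant, so
    translation and feature extraction commute)."""
    kernel = np.array([[0.0, 1.0, 0.0],
                       [1.0, -4.0, 1.0],
                       [0.0, 1.0, 0.0]])
    out = np.zeros_like(img)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += kernel[di + 1, dj + 1] * np.roll(img, (di, dj), axis=(0, 1))
    return np.maximum(out, 0.0)

def translate(x, dy, dx):
    """Translational affine transformation, modeled as a circular shift."""
    return np.roll(x, (dy, dx), axis=(0, 1))

first_sample_image = rng.random((8, 8))                    # image 102
second_sample_image = translate(first_sample_image, 2, 3)  # image 103

first_sample_feature_map = extract_features(first_sample_image)      # map 105
second_sample_feature_map = extract_features(second_sample_image)    # map 106
first_sample_affine_map = translate(first_sample_feature_map, 2, 3)  # map 107

# Euclidean distance loss 108 over co-located feature vectors; it is zero
# here because the toy extractor commutes exactly with the translation.
loss = float(np.sqrt(((first_sample_affine_map - second_sample_feature_map) ** 2).sum()))
```

A real feature extraction network is not perfectly shift-equivariant, so the loss is generally nonzero and training drives it down.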
With continued reference to fig. 2, a flow 200 of some embodiments of a method of generating a feature extraction network according to the present disclosure is shown. The method for generating the feature extraction network comprises the following steps:
Step 201, inputting a first sample image and a second sample image into the feature extraction network respectively to obtain a first sample feature map and a second sample feature map, wherein the second sample image is obtained by carrying out affine transformation on the first sample image.
In some embodiments, the first sample image may be any image.
In some embodiments, the first sample feature map and the second sample feature map may have features such as size features and shading features of the image.
In some alternative implementations of some embodiments, the first sample feature map and the second sample feature map have color features, texture features, shape features, and spatial relationship features.
In some embodiments, affine transformations may include translational, rotational, scaling, shearing, or reflective operations. As an example, the first sample image may be rotated to obtain the second sample image, or the first sample image may be scaled to obtain the second sample image.
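In homogeneous coordinates, each of the listed operations corresponds to a 3x3 matrix, and composite affine transformations are matrix products. The following sketch is illustrative only (the helper names are assumptions of this example):

```python
import numpy as np

def translation(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def scaling(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

def shear(k):
    return np.array([[1, k, 0], [0, 1, 0], [0, 0, 1]], dtype=float)

def reflection_x():
    # reflection across the x-axis
    return np.array([[1, 0, 0], [0, -1, 0], [0, 0, 1]], dtype=float)

def apply(matrix, point):
    """Map a 2-D point through an affine matrix in homogeneous coordinates."""
    x, y, _ = matrix @ np.array([point[0], point[1], 1.0])
    return x, y

# Composition: translate (0,0) to (1,0), then rotate 90 degrees,
# which lands the point near (0, 1).
point = apply(rotation(np.pi / 2) @ translation(1.0, 0.0), (0.0, 0.0))
```

Applying such a matrix to every pixel coordinate of the first sample image yields the second sample image.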
In some embodiments, the feature extraction network may be various neural networks for feature extraction. For example, a convolutional neural network or a cyclic neural network may be used. The first and second sample feature maps may be images having features of the first and second sample images, respectively.
Step 202, performing affine transformation on the first sample feature map to obtain a first sample affine transformation feature map.
In some embodiments, as an example, the first sample feature map may be subjected to a translation transformation, resulting in a first sample affine transformation feature map. The first sample feature map may also be rotated to obtain a first sample affine transformation feature map.
Step 203, determining, for a first vector and a second vector at the same position in the first sample affine transformation feature map and the second sample feature map, loss values of the first vector and the second vector based on a preset loss function.
In some embodiments, the same position may be a position where the first sample affine transformation feature map and the second sample feature map have the same coordinates in the same coordinate system.
In some embodiments, the first vector and the second vector are co-located feature vectors in the first sample affine transformation feature map and the second sample feature map.
In some embodiments, the loss function is a function defining the difference between the fit result and the true result. As an example, the loss function may be an absolute value loss function or a square loss function. The loss value may be an image difference degree between the first sample affine transformation feature map and the second sample feature map. As an example, normalization processing is performed on the vectors corresponding to each pixel in the first sample affine transformation feature map and the second sample feature map, so as to obtain a normalized vector set of the first sample affine transformation feature map and a normalized vector set of the second sample feature map. A loss value of each normalized vector in the normalized vector set of the first sample affine transformation feature map and the corresponding normalized vector in the normalized vector set of the second sample feature map is then determined by the following formula:
loss = Σ_i [ p_i · q_i + (1 − p_i) · (1 − q_i) ]

Here, the first position in the first sample affine transformation feature map after feature extraction and affine transformation and the second position in the second sample feature map after affine transformation and feature extraction may be the same position. i indexes the i-th bit of the hash code corresponding to the normalized vector set of the first sample affine transformation feature map and of the hash code corresponding to the normalized vector set of the second sample feature map at that same position. p_i is the probability that the i-th bit of the hash code corresponding to the normalized vector set of the first sample affine transformation feature map takes 1, where elements greater than 0.5 in a normalized vector are taken as hash code 1 and elements less than 0.5 are taken as hash code 0. q_i is the probability that the i-th bit of the hash code corresponding to the normalized vector set of the second sample feature map takes 1. The product p_i · q_i is thus the probability that both i-th bits take 1, and (1 − p_i) · (1 − q_i) is the probability that both i-th bits take 0. The term p_i · q_i + (1 − p_i) · (1 − q_i) represents the predicted difference degree between the corresponding i-th bits of the two hash codes. Summing this term over each element of the normalized vector sets of the first sample affine transformation feature map and the second sample feature map yields the loss value between the normalized vector of the first sample affine transformation feature map and the normalized vector of the second sample feature map at the same position.
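As an illustrative transcription of the per-bit expression above (the function name `hash_bit_loss` is hypothetical, and `p` and `q` are assumed to be the vectors of per-bit probabilities derived from the two normalized vectors at the same position):

```python
import numpy as np

def hash_bit_loss(p, q):
    """Sum over all hash-code bits of p_i*q_i + (1 - p_i)*(1 - q_i).

    p: per-bit probabilities that the hash code of the first sample
       affine transformation feature map takes 1 at each bit.
    q: the same probabilities for the second sample feature map.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * q + (1 - p) * (1 - q)))

# Identical hash codes: every bit agrees with probability 1.
same = hash_bit_loss([1.0, 0.0, 1.0], [1.0, 0.0, 1.0])      # 3.0
# Complementary hash codes: every bit agrees with probability 0.
opposite = hash_bit_loss([1.0, 0.0, 1.0], [0.0, 1.0, 0.0])  # 0.0
```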
In some alternative implementations of some embodiments, the loss function may be a maximum likelihood estimation function, a divergence function, or a hamming distance.
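Of the listed alternatives, the Hamming distance is the simplest to illustrate: it counts the bit positions at which two equal-length binary hash codes differ. A minimal sketch (the function name is an assumption of this example):

```python
def hamming_distance(a, b):
    """Number of bit positions at which two equal-length hash codes differ."""
    if len(a) != len(b):
        raise ValueError("hash codes must have equal length")
    return sum(x != y for x, y in zip(a, b))

d = hamming_distance([1, 0, 1, 1], [1, 1, 1, 0])  # differs at bits 2 and 4
```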
Step 204, training the feature extraction network based on the loss value.
In some embodiments, the weights in the feature extraction network may be optimized by gradient descent to minimize losses.
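As a minimal, illustrative sketch of optimizing a weight by gradient descent (the one-dimensional quadratic loss here is an assumption for demonstration, not the loss of the embodiments):

```python
def gradient_descent(grad_fn, w0, lr=0.1, steps=100):
    """Repeatedly step the weight against its gradient to minimize the loss."""
    w = float(w0)
    for _ in range(steps):
        w = w - lr * grad_fn(w)
    return w

# Toy loss L(w) = (w - 3)^2 with gradient dL/dw = 2*(w - 3);
# the minimizer is w = 3.
w_star = gradient_descent(lambda w: 2.0 * (w - 3.0), w0=0.0)
```

In practice the gradient of the loss value with respect to every network weight is obtained by backpropagation, and the same update rule is applied to each weight.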
One of the above embodiments of the present disclosure has the following advantageous effects: the feature map obtained by extracting features from the first sample image and then applying an affine transformation is compared with the feature map obtained by extracting features from the affine-transformed first sample image (i.e., the second sample image), and a loss value for training the neural network can be determined based on the loss function. This loss value can be used to train the feature extraction network, thereby optimizing it.
With further reference to fig. 3, a flow 300 of further embodiments of a method of generating a feature extraction network is shown. The method for generating the feature extraction network comprises the following steps:
Step 301, preprocessing the first image to obtain a first sample image.
In some embodiments, the first image may be subjected to grayscale processing, geometric transformation, and image enhancement to obtain the first sample image.
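As an illustrative sketch of such preprocessing (the BT.601 luma weights, horizontal flip, and contrast stretch are assumptions of this example; any grayscale conversion, geometric transformation, and enhancement could be substituted):

```python
import numpy as np

def preprocess(rgb):
    """Hypothetical preprocessing for step 301: grayscale conversion,
    a geometric transformation, and a simple image enhancement."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])  # graying (BT.601 luma weights)
    flipped = gray[:, ::-1]                       # geometric transformation: horizontal flip
    lo, hi = flipped.min(), flipped.max()
    return (flipped - lo) / (hi - lo + 1e-12)     # enhancement: contrast stretch to [0, 1]

rng = np.random.default_rng(1)
first_image = rng.random((4, 4, 3))          # a toy H x W x 3 "first image"
first_sample_image = preprocess(first_image)
```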
Step 302, inputting the first sample image and the second sample image into the feature extraction network respectively to obtain a first sample feature map and a second sample feature map, wherein the second sample image is obtained by carrying out affine transformation on the first sample image.
Step 303, carrying out affine transformation on the first sample feature map to obtain a first sample affine transformation feature map.
Step 304, determining, for a first vector and a second vector at the same position in the first sample affine transformation feature map and the second sample feature map, loss values of the first vector and the second vector based on a preset loss function.
Step 305, training the feature extraction network based on the loss value.
In some embodiments, the specific implementation of steps 302, 303, 304, and 305 and the technical effects thereof may refer to steps 201, 202, 203, and 204 in the corresponding embodiment of fig. 2, which are not described herein.
According to the method for generating a feature extraction network disclosed in some embodiments of the present disclosure, the image is preprocessed by grayscale processing, geometric transformation, and image enhancement, so that irrelevant information in the image can be eliminated. The training effect of the network is thereby improved, and on this basis the accuracy of the features extracted by the network can be improved.
With further reference to fig. 4, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of an apparatus for generating a feature extraction network, which apparatus embodiments correspond to those method embodiments shown in fig. 2, and which apparatus is particularly applicable in various electronic devices.
As shown in fig. 4, an apparatus 400 for generating a feature extraction network of some embodiments includes: a feature map generating unit 401, an affine transformation unit 402, a loss value determining unit 403, and a network training unit 404. The feature map generating unit 401 is configured to input a first sample image and a second sample image into the feature extraction network respectively, so as to obtain a first sample feature map and a second sample feature map, where the second sample image is obtained by performing affine transformation on the first sample image; an affine transformation unit 402 configured to affine-transform the first sample feature map to obtain a first sample affine transformation feature map; a loss value determining unit 403 configured to determine, for a first vector and a second vector at the same position in the first sample affine transformation feature map and the second sample feature map, a loss value of the first vector and the second vector based on a preset loss function; a network training unit 404 configured to train the feature extraction network based on the loss value.
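As an illustrative sketch of how the four units of apparatus 400 could be wired together (the identity network and the circular-shift affine transformation are assumptions of this example, not the disclosed implementation):

```python
import numpy as np

class GenerationApparatus:
    """Toy model of apparatus 400; each method plays the role of one unit."""

    def __init__(self, network=None):
        # Hypothetical stand-in for the feature extraction network.
        self.network = network or (lambda img: img)
        self.last_loss = None

    def generate_feature_maps(self, first_image, second_image):
        # feature map generating unit 401
        return self.network(first_image), self.network(second_image)

    def affine_transform(self, feature_map, dy, dx):
        # affine transformation unit 402 (translation as a circular shift)
        return np.roll(feature_map, (dy, dx), axis=(0, 1))

    def determine_loss(self, first_affine_map, second_map):
        # loss value determining unit 403 (euclidean distance)
        return float(np.sqrt(((first_affine_map - second_map) ** 2).sum()))

    def train(self, loss):
        # network training unit 404; a real implementation would update weights
        self.last_loss = loss

apparatus = GenerationApparatus()
first = np.arange(16.0).reshape(4, 4)
second = np.roll(first, (1, 1), axis=(0, 1))  # affine-transformed first image
map1, map2 = apparatus.generate_feature_maps(first, second)
loss = apparatus.determine_loss(apparatus.affine_transform(map1, 1, 1), map2)
apparatus.train(loss)
```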
In an alternative implementation of some embodiments, the apparatus further includes: and the image preprocessing unit is configured to preprocess the first image to obtain the first sample image.
In an alternative implementation of some embodiments, the first sample feature map and the second sample feature map include: color features, texture features, shape features, and spatial relationship features.
In an alternative implementation of some embodiments, the above-described loss function is one of: maximum likelihood estimation function, divergence function, hamming distance.
It will be appreciated that the units described in the apparatus 400 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations, features, and resulting benefits described above with respect to the method are equally applicable to the apparatus 400 and the units contained therein, and are not described in detail herein.
As shown in fig. 5, the electronic device 500 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic device 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 shows an electronic device 500 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 5 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communications device 509, or from the storage device 508, or from the ROM 502. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing device 501.
It should be noted that the computer readable medium according to some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: respectively inputting a first sample image and a second sample image into the feature extraction network to obtain a first sample feature image and a second sample feature image, wherein the second sample image is obtained by carrying out affine transformation on the first sample image; carrying out affine transformation on the first sample feature map to obtain a first sample affine transformation feature map; determining loss values of a first vector and a second vector at the same position in the affine transformation feature map of the first sample and the affine transformation feature map of the second sample based on a preset loss function; training the feature extraction network based on the loss value.
Computer program code for carrying out operations for some embodiments of the present disclosure may be written in one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor includes a feature map generation unit, an affine transformation unit, a loss value determination unit, and a network training unit. The names of these units do not constitute limitations on the unit itself in some cases, and for example, the feature map generation unit may also be described as "a unit that generates a feature map".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In accordance with one or more embodiments of the present disclosure, there is provided a method of generating a feature extraction network, comprising: respectively inputting a first sample image and a second sample image into the feature extraction network to obtain a first sample feature image and a second sample feature image, wherein the second sample image is obtained by carrying out affine transformation on the first sample image; carrying out affine transformation on the first sample feature map to obtain a first sample affine transformation feature map; determining loss values of a first vector and a second vector at the same position in the affine transformation feature map of the first sample and the affine transformation feature map of the second sample based on a preset loss function; training the feature extraction network based on the loss value.
According to one or more embodiments of the present disclosure, before inputting the first sample image and the second sample image into the feature extraction network to obtain the first sample feature map and the second sample feature map, respectively, the method further includes: and preprocessing the first image to obtain the first sample image.
According to one or more embodiments of the present disclosure, the first sample feature map and the second sample feature map described above include: color features, texture features, shape features, and spatial relationship features.
According to one or more embodiments of the present disclosure, the above-described loss function is one of: maximum likelihood estimation function, divergence function, hamming distance.
In accordance with one or more embodiments of the present disclosure, an apparatus for generating a feature extraction network includes: a feature map generating unit configured to input a first sample image and a second sample image into the feature extraction network, respectively, to obtain a first sample feature map and a second sample feature map, wherein the second sample image is obtained by performing affine transformation on the first sample image; an affine transformation unit configured to affine-transform the first sample feature map to obtain a first sample affine-transformed feature map; a loss value determining unit configured to determine, for a first vector and a second vector at the same position in the first sample affine transformation feature map and the second sample feature map, a loss value of the first vector and the second vector based on a preset loss function; and a network training unit configured to train the feature extraction network based on the loss value.
According to one or more embodiments of the present disclosure, the above-described apparatus further includes: and the image preprocessing unit is configured to preprocess the first image to obtain the first sample image.
According to one or more embodiments of the present disclosure, the first sample feature map and the second sample feature map described above include: color features, texture features, shape features, and spatial relationship features.
According to one or more embodiments of the present disclosure, the above-described loss function is one of: maximum likelihood estimation function, divergence function, hamming distance.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: one or more processors; and a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement a method as described above.
According to one or more embodiments of the present disclosure, there is provided a computer readable medium having stored thereon a computer program, wherein the program, when executed by a processor, implements a method as described in any of the embodiments above.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by substituting the above features with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.