CN107832794A - Convolutional neural network generation method, vehicle series recognition method and computing device - Google Patents

Convolutional neural network generation method, vehicle series recognition method and computing device

Info

Publication number
CN107832794A
CN107832794A (application CN201711098051.2A)
Authority
CN
China
Prior art keywords
classifier
process block
vehicle
block
neural networks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711098051.2A
Other languages
Chinese (zh)
Other versions
CN107832794B (en)
Inventor
Zhou Hui (周晖)
Liu Feng (刘峰)
Huang Guolong (黄国龙)
Zhang Xin (张欣)
Hu Meng (胡蒙)
Huang Zhongjie (黄中杰)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Che Zhi Interconnect (Beijing) Technology Co Ltd
Original Assignee
Che Zhi Interconnect (Beijing) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Che Zhi Interconnect (Beijing) Technology Co Ltd
Priority to CN201711098051.2A
Publication of CN107832794A
Application granted
Publication of CN107832794B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a convolutional neural network generation method for recognizing the vehicle series of a vehicle in an image, a vehicle series recognition method, and a computing device. The convolutional neural network generation method includes: constructing a first process block, a third process block and a fifth process block, each comprising one or more convolutional layers and a max pooling layer; constructing a second process block, a fourth process block and a sixth process block, each comprising one or more convolutional layers, a first global average pooling layer, a fully connected layer and an activation layer; constructing a convolutional neural network from one or more of each of the first, second, third, fourth, fifth and sixth process blocks, combined with a second global average pooling layer, a first classifier, a second classifier and a third classifier; and training the convolutional neural network on a vehicle image data set, so that the first, second and third classifiers output indications of the series, brand and level of the vehicle, respectively.

Description

Convolutional neural network generation method, vehicle series recognition method and computing device
Technical field
The present invention relates to the technical field of image processing, and more particularly to a convolutional neural network generation method for recognizing the vehicle series of a vehicle in an image, a vehicle series recognition method, and a computing device.
Background art
With the rapid development of science, technology and the economy, the variety of vehicle models on the market is ever richer, for example the common Audi A4L, BMW 3 Series, and so on; yet in everyday life one often encounters vehicles whose series one does not recognize or know. To identify the series of such vehicles, the usual approach is to first obtain a picture of the vehicle of the series to be identified, then use an algorithm based on hand-crafted features, such as SIFT (Scale-Invariant Feature Transform) or HOG (Histogram of Oriented Gradients), to convert the pixel values of the vehicle picture into a feature vector of fixed dimension, and finally classify that feature vector with a classifier such as an SVM (Support Vector Machine) or KNN (K-Nearest Neighbor), determining the vehicle series from the classification result.
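The classification stage of this classical pipeline can be sketched with a minimal k-nearest-neighbour classifier over fixed-dimension feature vectors; the 3-dimensional vectors and series labels below are toy stand-ins for real SIFT/HOG descriptors, not data from the patent:

```python
import math
from collections import Counter

def knn_predict(train_vecs, train_labels, query, k=3):
    """Classify a fixed-dimension feature vector by majority vote
    among its k nearest training vectors (Euclidean distance)."""
    dists = sorted(
        (math.dist(v, query), label)
        for v, label in zip(train_vecs, train_labels)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 3-dimensional "features" standing in for SIFT/HOG descriptors.
train_vecs = [(0.1, 0.2, 0.9), (0.2, 0.1, 0.8),
              (0.9, 0.8, 0.1), (0.8, 0.9, 0.2)]
train_labels = ["Audi A4L", "Audi A4L", "BMW 3 Series", "BMW 3 Series"]
print(knn_predict(train_vecs, train_labels, (0.15, 0.15, 0.85)))  # Audi A4L
```

The fixed dimension of the feature vector is what such a hand-crafted pipeline relies on: every picture, whatever its size, is reduced to the same vector length before classification.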
However, vehicle series identification is a fine-grained recognition task. Hand-designed features are too simplistic: the extracted features are insufficient to characterize and distinguish the objects, recognition accuracy is low, and practical application requirements cannot be met. Consequently, vehicle series recognition methods based on the CNN (Convolutional Neural Network) have emerged. Such methods first train a CNN model for series recognition on a vehicle training data set; the picture of the vehicle to be identified is then input into the CNN model, a probability is predicted for each series, and the series with the highest probability is taken as the recognition result. However, because the vehicle training data sets used are small, they cover only a small fraction of the series on the market; if all series were covered, recognition accuracy would drop sharply. Moreover, the CNN models used are relatively simple and are not trained with supervision signals such as vehicle brand and level, so the recognition rate is difficult to improve.
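The prediction rule described above, taking the series with the highest predicted probability, can be sketched as follows; the series names and raw scores are illustrative only:

```python
import math

def softmax(logits):
    """Convert raw CNN output scores into probabilities."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

series = ["Audi A4L", "BMW 3 Series", "Mercedes C-Class"]
logits = [1.2, 3.4, 0.5]                 # hypothetical scores for one picture
probs = softmax(logits)
best = max(range(len(series)), key=lambda i: probs[i])
print(series[best])                      # BMW 3 Series
```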
Summary of the invention
The present invention therefore provides a convolutional neural network generation scheme for recognizing the vehicle series of a vehicle in an image, and proposes a vehicle series recognition scheme based on that convolutional neural network, attempting to solve, or at least alleviate, the problems described above.
According to one aspect of the present invention, a convolutional neural network generation method for recognizing the vehicle series of a vehicle in an image is provided, suitable for execution in a computing device and comprising the following steps. First, a first process block, a third process block and a fifth process block are each constructed, each comprising one or more convolutional layers and a max pooling layer. A second process block, a fourth process block and a sixth process block are each constructed, each comprising one or more convolutional layers, a first global average pooling layer, a fully connected layer and an activation layer. A second global average pooling layer, a first classifier, a second classifier and a third classifier are each constructed. From one or more of each of the first, second, third, fourth, fifth and sixth process blocks, combined with the second global average pooling layer and the first, second and third classifiers, a convolutional neural network is constructed, taking the first process block as input and the first, second and third classifiers as outputs. The convolutional neural network is trained on a vehicle image data set obtained in advance, so that the outputs of the first, second and third classifiers indicate the series, brand and level of the vehicle, respectively. The vehicle image data set comprises multiple vehicle image information items, each comprising a vehicle image and the series, brand and level information of the vehicle in that image.
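The layout just described, a shared backbone of process blocks feeding three classifier heads through a global average pooling layer, can be sketched at the level of tensor shapes; the block depths, channel widths, input size and class counts below are assumed placeholders, not values taken from the patent:

```python
def conv(shape, out_ch, stride=1):
    """Track a (channels, height, width) shape through a padded 3x3 conv."""
    c, h, w = shape
    return (out_ch, h // stride, w // stride)

def max_pool(shape):
    c, h, w = shape
    return (c, h // 2, w // 2)

def global_avg_pool(shape):
    c, _, _ = shape
    return (c,)  # one averaged value per channel

x = (3, 224, 224)                  # input RGB vehicle image (assumed size)
x = conv(x, 64); x = max_pool(x)   # first process block (greatly simplified)
x = conv(x, 128); x = max_pool(x)  # remaining process blocks (simplified)
feat = global_avg_pool(x)          # second global average pooling layer

# Three classifier heads share the pooled feature vector.
heads = {"series": 2000, "brand": 150, "level": 10}   # assumed class counts
logits = {name: (feat[0], n) for name, n in heads.items()}  # weight shapes
print(feat, logits["series"])      # (128,) (128, 2000)
```

The design point the sketch makes concrete: all three heads read the same pooled feature vector, so brand and level supervision shapes the same features that the series classifier uses.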
Optionally, in the convolutional neural network generation method for recognizing the vehicle series of a vehicle in an image according to the present invention, the step of respectively constructing the first, third and fifth process blocks comprises: connecting the convolutional layers and max pooling layers according to preset first, third and fifth connection rules, respectively, to correspondingly form the first, third and fifth process blocks.
Optionally, in the convolutional neural network generation method according to the present invention, the step of respectively constructing the second, fourth and sixth process blocks comprises: connecting the convolutional layers, the first global average pooling layer, the fully connected layer and the activation layer according to preset second, fourth and sixth connection rules, respectively, to correspondingly form the second, fourth and sixth process blocks.
Optionally, in the convolutional neural network generation method according to the present invention, the step of constructing the convolutional neural network from one or more of each of the first, second, third, fourth, fifth and sixth process blocks, combined with the second global average pooling layer and the first, second and third classifiers, comprises: connecting the first, second, third, fourth, fifth and sixth process blocks according to a preset network connection rule and then connecting the second global average pooling layer; and, after the second global average pooling layer, adding the first, second and third classifiers connected to it, so as to build a convolutional neural network with the first process block as input and the first, second and third classifiers as outputs.
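Global average pooling, which sits here between the stacked process blocks and the three classifiers, reduces each feature map to its mean value; a minimal NumPy sketch with illustrative sizes:

```python
import numpy as np

def global_avg_pool(feature_maps):
    """Average each (H, W) feature map down to a single value,
    turning a (C, H, W) tensor into a length-C feature vector."""
    return feature_maps.mean(axis=(1, 2))

fmaps = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
vec = global_avg_pool(fmaps)
print(vec.shape)   # (2,)
print(vec)         # [ 7.5 23.5]
```

Because the pooled vector's length depends only on the channel count, not on the spatial size, no flattening of feature maps into a huge fully connected layer is needed before the classifiers.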
Optionally, in the convolutional neural network generation method according to the present invention, the step of constructing the convolutional neural network further comprises: adding a dropout layer between the second global average pooling layer and the first, second and third classifiers, so as to build a convolutional neural network with the first process block as input and the first, second and third classifiers as outputs, the dropout layer being connected to the second global average pooling layer and to each of the first, second and third classifiers.
Optionally, in the convolutional neural network generation method according to the present invention, the number of each of the first, third and fifth process blocks is 1.
Optionally, in the convolutional neural network generation method according to the present invention, the number of second process blocks is 3, the number of fourth process blocks is 10, and the number of sixth process blocks is 5.
Optionally, the convolutional neural network generation method according to the present invention further comprises a step of generating the vehicle image data set in advance, which comprises: performing image processing on each pending picture to obtain the vehicle image corresponding to that picture; obtaining the vehicle series information associated with the pending picture corresponding to each vehicle image; generating, according to the relationship between the number of vehicle images corresponding to each item of series information and a preset first threshold, the vehicle image group corresponding to that series information, the group containing a first-threshold number of vehicle images; associating the vehicle images in each group with the corresponding series, brand and level information to generate the corresponding vehicle image information; and collecting the vehicle image information items to form the vehicle image data set.
Optionally, in the convolutional neural network generation method according to the present invention, the step of generating the vehicle image group corresponding to each item of series information according to the relationship between the number of corresponding vehicle images and the preset first threshold comprises: for each item of series information, if the number of corresponding vehicle images is greater than the first threshold, randomly sampling those vehicle images to obtain a first-threshold number of them and form the corresponding vehicle image group; if the number equals the first threshold, taking all the corresponding vehicle images to form the group; and if the number is less than the first threshold, extracting a corresponding number of vehicle images from those available and preprocessing them, so as to expand the set to a first-threshold number of images and form the group.
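The three-way threshold rule above can be sketched as follows; `build_group` and `augment` are hypothetical names, and the string "images" merely stand in for real pictures:

```python
import random

def build_group(images, threshold, augment):
    """Form a per-series group of exactly `threshold` images:
    downsample when there are too many, keep all when exact,
    and expand with augmented copies when there are too few."""
    if len(images) > threshold:
        return random.sample(images, threshold)
    if len(images) == threshold:
        return list(images)
    group = list(images)
    while len(group) < threshold:
        group.append(augment(random.choice(images)))
    return group

random.seed(0)
fake_augment = lambda img: img + "_aug"   # stand-in for flip/rotate etc.
print(len(build_group(["a", "b"], 5, fake_augment)))          # 5
print(len(build_group(list("abcdefgh"), 5, fake_augment)))    # 5
```

Balancing every series to the same group size keeps the training set from being dominated by popular series with many pictures.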
Optionally, in the convolutional neural network generation method according to the present invention, the preprocessing comprises at least one of the following operations: rotation, horizontal flipping, scaling, horizontal translation and vertical translation.
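Two of the listed operations, horizontal flipping and translation, can be sketched in NumPy for a single-channel image; filling vacated pixels with zeros is an assumed convention, not something the patent specifies:

```python
import numpy as np

def horizontal_flip(img):
    """Mirror the image left-to-right."""
    return img[:, ::-1]

def translate(img, dy, dx):
    """Shift the image by (dy, dx), filling vacated pixels with zeros."""
    out = np.zeros_like(img)
    h, w = img.shape
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

img = np.array([[1, 2], [3, 4]])
print(horizontal_flip(img).tolist())   # [[2, 1], [4, 3]]
print(translate(img, 0, 1).tolist())   # [[0, 1], [0, 3]]
```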
According to another aspect of the present invention, a vehicle series recognition method is provided, suitable for execution in a computing device, which recognizes the series of a vehicle in an image using the convolutional neural network trained by the above convolutional neural network generation method, and comprises the following steps: first, processing the image to be recognized to obtain a vehicle image to be recognized; inputting the vehicle image to be recognized into the trained convolutional neural network for series recognition; obtaining the outputs of the first, second and third classifiers of the trained convolutional neural network; and determining the series of the vehicle in the image to be recognized according to the outputs of the first, second and third classifiers.
Optionally, in the vehicle series recognition method according to the present invention, the step of processing the image to be recognized to obtain the vehicle image to be recognized comprises: performing vehicle detection on the image to be recognized to obtain the vehicle position information of that image; cropping the image to be recognized according to the vehicle position information; and resizing and normalizing the cropped image to obtain the corresponding vehicle image to be recognized.
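The crop-resize-normalize sequence can be sketched as follows; nearest-neighbour resizing and division by 255 are illustrative choices, as the patent does not fix the resizing or normalization method at this point:

```python
import numpy as np

def prepare(image, bbox, size=4):
    """Crop the detected vehicle region, resize it (nearest neighbour),
    and normalize pixel values to [0, 1]."""
    x0, y0, x1, y1 = bbox
    crop = image[y0:y1, x0:x1].astype(float)
    ys = (np.arange(size) * crop.shape[0] / size).astype(int)
    xs = (np.arange(size) * crop.shape[1] / size).astype(int)
    resized = crop[np.ix_(ys, xs)]
    return resized / 255.0

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
out = prepare(img, (2, 2, 6, 6))
print(out.shape)                               # (4, 4)
print(out.min() >= 0.0 and out.max() <= 1.0)   # True
```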
According to another aspect of the present invention, a computing device is provided, comprising one or more processors, a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing the convolutional neural network generation method for recognizing the vehicle series of a vehicle in an image according to the present invention and/or the vehicle series recognition method according to the present invention.
According to another aspect of the present invention, a computer-readable storage medium storing one or more programs is also provided, the one or more programs including instructions which, when executed by a computing device, cause the computing device to perform the convolutional neural network generation method for recognizing the vehicle series of a vehicle in an image according to the present invention and/or the vehicle series recognition method according to the present invention.
According to the technical scheme of the present invention for generating a convolutional neural network for recognizing the vehicle series of a vehicle in an image, one or more of each of the first, second, third, fourth, fifth and sixth process blocks are first constructed; combined with the second global average pooling layer and the first, second and third classifiers, a convolutional neural network is built with the first process block as input and the first, second and third classifiers as outputs; finally, the convolutional neural network is trained on a vehicle image data set obtained in advance, so that the outputs of the first, second and third classifiers indicate the series, brand and level of the vehicle, respectively. In this technical scheme, the convolutional neural network is constructed by connecting the first, second, third, fourth, fifth and sixth process blocks according to a preset network connection rule, then connecting the second global average pooling layer and, after it, adding the first, second and third classifiers connected to it; each process block is formed according to its own preset connection rule. This ensures that the extracted features are markedly better than hand-designed features and improves the precision of series recognition. In addition, when training the convolutional neural network, an ultra-large-scale vehicle image data set is used, covering most of the listed and unlisted series in the country, and supervision signals such as brand classification and vehicle level are used to assist training, further optimizing the effect of series recognition, with an accuracy above 90%.
Furthermore, according to the vehicle series recognition method of the present invention, the vehicle image to be recognized is input into the trained convolutional neural network, and the series is determined from the outputs of the first, second and third classifiers, yielding a large improvement in result accuracy.
Brief description of the drawings
To realize the above and related purposes, some illustrative aspects are described herein in conjunction with the following description and the accompanying drawings. These aspects indicate the various ways in which the principles disclosed herein may be practiced, and all aspects and their equivalents are intended to fall within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout the disclosure, the same reference numerals generally refer to the same parts or elements.
Fig. 1 shows a schematic diagram of a computing device 100 according to an embodiment of the present invention;
Fig. 2 shows a flow chart of a convolutional neural network generation method 200 for recognizing the vehicle series of a vehicle in an image according to an embodiment of the present invention;
Fig. 3A shows a structural diagram of a first process block according to an embodiment of the present invention;
Fig. 3B shows a structural diagram of a third process block according to an embodiment of the present invention;
Fig. 3C shows a structural diagram of a fifth process block according to an embodiment of the present invention;
Fig. 3D shows a structural diagram of a second process block according to an embodiment of the present invention;
Fig. 3E shows a structural diagram of a fourth process block according to an embodiment of the present invention;
Fig. 3F shows a structural diagram of a sixth process block according to an embodiment of the present invention;
Fig. 4 shows a structural diagram of a convolutional neural network according to an embodiment of the present invention; and
Fig. 5 shows a flow chart of a vehicle series recognition method 500 according to an embodiment of the present invention.
Detailed description of embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and so that its scope can be conveyed completely to those skilled in the art.
Fig. 1 is a block diagram of an example computing device 100. In a basic configuration 102, the computing device 100 typically comprises a system memory 106 and one or more processors 104. A memory bus 108 may be used for communication between the processors 104 and the system memory 106.
Depending on the desired configuration, the processor 104 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 104 may include one or more levels of cache, such as a level-1 cache 110 and a level-2 cache 112, a processor core 114 and registers 116. The example processor core 114 may include an arithmetic logic unit (ALU), a floating-point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An example memory controller 118 may be used together with the processor 104, or in some implementations the memory controller 118 may be an internal part of the processor 104.
Depending on the desired configuration, the system memory 106 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM or flash memory), or any combination thereof. The system memory 106 may include an operating system 120, one or more programs 122 and program data 124. In some embodiments, the programs 122 may be arranged to execute instructions on the operating system, using the program data 124, by means of the one or more processors 104.
The computing device 100 may also include an interface bus 140 that facilitates communication from various interface devices (for example, output devices 142, peripheral interfaces 144 and communication devices 146) to the basic configuration 102 via a bus/interface controller 130. Example output devices 142 include a graphics processing unit 148 and an audio processing unit 150, which may be configured to facilitate communication with various external devices such as a display or speakers via one or more A/V ports 152. Example peripheral interfaces 144 may include a serial interface controller 154 and a parallel interface controller 156, which may be configured to facilitate communication via one or more I/O ports 158 with external devices such as input devices (for example, a keyboard, mouse, pen, voice input device or touch input device) or other peripherals (for example, a printer or scanner). An example communication device 146 may include a network controller 160, which may be arranged to facilitate communication with one or more other computing devices 162 over a network communication link via one or more communication ports 164.
A network communication link may be one example of a communication medium. Communication media may typically be embodied as computer-readable instructions, data structures or program modules in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery medium. A "modulated data signal" may be a signal in which one or more of its characteristics are set or changed in such a manner as to encode information in the signal. As non-limiting examples, communication media may include wired media such as a wired network or a dedicated-line network, and various wireless media such as sound, radio frequency (RF), microwave, infrared (IR) or other wireless media. The term computer-readable media as used herein may include both storage media and communication media.
The computing device 100 may be implemented as a server, such as a file server, database server, application server or WEB server, or as part of a small-sized portable (or mobile) electronic device, such as a cellular phone, a personal digital assistant (PDA), a personal media player device, a wireless web-browsing device, a personal headset device, an application-specific device, or a hybrid device including any of the above functions. The computing device 100 may also be implemented as a personal computer, including both desktop and notebook computer configurations.
In some embodiments, the computing device 100 is configured to perform the convolutional neural network generation method for recognizing the vehicle series of a vehicle in an image according to the present invention and/or the vehicle series recognition method. The one or more programs 122 of the computing device 100 include instructions for performing the convolutional neural network generation method 200 and/or the vehicle series recognition method 500 according to the present invention.
Fig. 2 shows a flow chart of a convolutional neural network generation method 200 for recognizing the vehicle series of a vehicle in an image according to an embodiment of the present invention. The method 200 is suitable for execution in a computing device, such as the computing device 100 shown in Fig. 1.
As shown in Fig. 2, the method 200 starts at step S210. In step S210, the first, third and fifth process blocks are each constructed; each comprises convolutional layers and max pooling layers, and the numbers of convolutional layers and max pooling layers may each be one or more. According to one embodiment of the present invention, the first, third and fifth process blocks may be constructed as follows. First, the preset first, third and fifth connection rules are obtained; then, according to the first, third and fifth connection rules respectively, the convolutional layers and max pooling layers are connected to correspondingly form the first, third and fifth process blocks. In a preferred embodiment, the first process block comprises 11 convolutional layers and 2 max pooling layers, the third process block comprises 4 convolutional layers and 1 max pooling layer, and the fifth process block comprises 6 convolutional layers and 1 max pooling layer.
Fig. 3A shows a structural diagram of the first processing block according to an embodiment of the invention. As shown in Fig. 3A, in the first processing block, convolutional layer A1 serves as the input, followed in sequence by convolutional layers A2 and A3. After A3, a max pooling layer MA1 and a convolutional layer A4 are attached in parallel, and their outputs are concatenated. The concatenated output serves as the input of convolutional layers A5 and A7; A5 is followed by convolutional layer A6, while A7 is followed in sequence by convolutional layers A8, A9 and A10. The outputs of A6 and A10 are concatenated, and the concatenated output serves as the input of convolutional layer A11 and max pooling layer MA2; finally, the outputs of A11 and MA2 are concatenated to form the output of the block. Here, a concatenation unit stacks the input feature maps along the channel dimension. Taking the concatenation of the outputs of MA1 and A4 as an example: suppose MA1 outputs 96 feature maps of size 73px × 73px and A4 outputs 64 feature maps of size 73px × 73px; concatenation stacks these 96 and 64 feature maps together, producing 160 feature maps of size 73px × 73px, without performing any other processing on the feature maps. Unless otherwise noted, all references to concatenation below follow this description. The connection order from convolutional layer A1 to the last concatenation unit shown in Fig. 3A is set according to the preset first connection rule.
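The concatenation unit described above only stacks feature maps channel-wise and leaves pixel values untouched. A minimal plain-Python sketch (feature maps as nested lists, shapes mirroring the MA1/A4 example; this is an illustration, not the patent's actual implementation):

```python
def concat_channels(*tensors):
    """Channel-wise concatenation: stack the feature maps of each input.

    Each tensor is a list of 2D feature maps (H x W nested lists).
    All inputs must share the same spatial size; pixel values are untouched.
    """
    h, w = len(tensors[0][0]), len(tensors[0][0][0])
    for t in tensors:
        for fmap in t:
            assert len(fmap) == h and len(fmap[0]) == w, "spatial sizes must match"
    out = []
    for t in tensors:
        out.extend(t)
    return out

# Mirroring the MA1/A4 example: 96 + 64 maps of 73x73 -> 160 maps of 73x73.
ma1_out = [[[0.0] * 73 for _ in range(73)] for _ in range(96)]
a4_out = [[[1.0] * 73 for _ in range(73)] for _ in range(64)]
merged = concat_channels(ma1_out, a4_out)
print(len(merged))  # 160
```

In a deep learning framework this corresponds to concatenating tensors along the channel axis; the point is that channel counts add while spatial sizes must match.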
Fig. 3B shows a structural diagram of the third processing block according to an embodiment of the invention. As shown in Fig. 3B, in the third processing block, max pooling layer MB1, convolutional layer B1 and convolutional layer B2 serve as the inputs; B2 is followed in sequence by convolutional layers B3 and B4. The outputs of MB1, B1 and B4 are concatenated, and the concatenated output serves as the output of the block. The connection order from MB1, B1 and B2 to the concatenation unit shown in Fig. 3B is set according to the preset third connection rule.
Fig. 3C shows a structural diagram of the fifth processing block according to an embodiment of the invention. As shown in Fig. 3C, in the fifth processing block, max pooling layer MC1, convolutional layer C1 and convolutional layer C2 serve as the inputs; C2 is followed in sequence by convolutional layers C4, C5 and C6. The outputs of MC1, C2 and C6 are concatenated, and the concatenated output serves as the output of the block. The connection order from MC1, C1 and C2 to the concatenation unit shown in Fig. 3C is set according to the preset fifth connection rule.
Table 1 shows example parameters of each convolutional layer in the first, third and fifth processing blocks according to an embodiment of the invention, and Table 2 shows example parameters of each max pooling layer in those blocks. In the border zero-padding column of Table 1, "-" indicates that no border zero-padding operation is performed, while "1" indicates that each row and column within one pixel unit outside the edge of the layer's input is filled with 0. Unless otherwise noted, all references to border zero padding below follow this description. The contents of Table 1 and Table 2 are as follows:
Table 1
Table 2
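The border zero padding indicated by "1" in Table 1 can be sketched as follows (a plain-Python illustration under the description above, not the patent's implementation; the `pad` parameter generalizes the one-pixel case):

```python
def zero_pad(fmap, pad):
    """Pad a 2D feature map with `pad` rows/columns of zeros on every side.

    Corresponds to the value "1" in the padding column of Table 1;
    "-" means this operation is skipped entirely.
    """
    if pad == 0:
        return [row[:] for row in fmap]
    w = len(fmap[0]) + 2 * pad
    middle = [[0] * pad + row + [0] * pad for row in fmap]
    return [[0] * w for _ in range(pad)] + middle + [[0] * w for _ in range(pad)]

fmap = [[1, 2], [3, 4]]
padded = zero_pad(fmap, 1)
print(len(padded), len(padded[0]))  # 4 4
```

Padding by 1 keeps a 3 × 3 convolution from shrinking the spatial size, which is why it appears on some layers and not others.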
Then, the method proceeds to step S220, in which the second processing block, the fourth processing block and the sixth processing block are built respectively. Each of them comprises convolutional layers, first global average pooling layers, fully connected layers and activation layers, the number of each being one or more. According to one embodiment of the present invention, the second, fourth and sixth processing blocks may be built in the following way. First, a preset second connection rule, fourth connection rule and sixth connection rule are obtained; then, according to those rules respectively, the convolutional layers, first global average pooling layers, fully connected layers and activation layers are connected to form the second, fourth and sixth processing blocks correspondingly. In this embodiment, the second processing block comprises 7 convolutional layers, 1 first global average pooling layer, 2 fully connected layers and 2 activation layers; the fourth and sixth processing blocks each comprise 10 convolutional layers, 2 first global average pooling layers, 2 fully connected layers and 2 activation layers.
Fig. 3D shows a structural diagram of the second processing block according to an embodiment of the invention. As shown in Fig. 3D, in the second processing block, convolutional layers D1, D3 and D5 serve as the inputs; D1 is followed by convolutional layer D2, D3 is followed by convolutional layer D4, and D5 is followed in sequence by convolutional layers D6 and D7. The outputs of D2, D4 and D7 are concatenated, and the concatenated output serves as the input of both a ratio addition unit and the first global average pooling layer ND1. ND1 is followed in sequence by fully connected layer PD1, activation layer QD1, fully connected layer PD2 and activation layer QD2. The concatenated output and the output of QD2 are then combined by ratio addition, and the result serves as the output of the block. Here, ratio addition superimposes the pixel values of feature maps according to preset ratios. Taking the combination of the concatenated output and the output of QD2 as an example, with both ratios set to 0.5: suppose the concatenated output is 384 feature maps of 35px × 35px and the output of QD2 is 384 feature maps of 1px × 1px. Because of the mismatch in dimensions, the output of QD2 is usually first expanded to 384 feature maps of 35px × 35px, each pixel of an expanded map taking the single pixel value of the corresponding unexpanded map; the expanded maps are then added, in order and by the preset ratios, to the concatenated feature maps. The ratio addition operation itself is an existing, mature technique and is not described further here. Unless otherwise noted, all references to ratio addition below follow this description. The connection order from D1, D3 and D5 to the ratio addition unit shown in Fig. 3D is set according to the preset second connection rule.
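The ratio addition described above, including the broadcast of the 1px × 1px maps, can be sketched as follows (an illustrative plain-Python version with small stand-in shapes; the actual layer sizes are those given in the text):

```python
def ratio_add(a, b, ratio_a=0.5, ratio_b=0.5):
    """Scaled elementwise addition of two stacks of feature maps.

    `a` holds H x W maps; `b` holds matching 1 x 1 maps that are first
    broadcast to H x W by repeating their single pixel value (the
    dimension extension described above), then added by preset ratios.
    """
    out = []
    for fa, fb in zip(a, b):
        scalar = fb[0][0]  # the single pixel of the 1x1 map
        out.append([[ratio_a * v + ratio_b * scalar for v in row] for row in fa])
    return out

a = [[[2.0, 4.0], [6.0, 8.0]]]   # one 2x2 map stands in for a 35x35 map
b = [[[10.0]]]                    # the matching 1x1 map
print(ratio_add(a, b)[0])  # [[6.0, 7.0], [8.0, 9.0]]
```

This is essentially a residual-style connection whose second branch is modulated by the fully connected path, which is why both ratios of 0.5 keep the output on the same scale as the inputs.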
Fig. 3E shows a structural diagram of the fourth processing block according to an embodiment of the invention. As shown in Fig. 3E, in the fourth processing block, the first global average pooling layer NE1 and convolutional layers E2, E3 and E6 serve as the inputs; NE1 is followed by convolutional layer E1, E3 is followed in sequence by convolutional layers E4 and E5, and E6 is followed in sequence by convolutional layers E7, E8, E9 and E10. The outputs of E1, E2, E5 and E10 are concatenated, and the concatenated output serves as the input of both a ratio addition unit and the first global average pooling layer NE2. NE2 is followed in sequence by fully connected layer PE1, activation layer QE1, fully connected layer PE2 and activation layer QE2. The concatenated output and the output of QE2 are combined by ratio addition, and the result serves as the output of the block. The connection order from NE1, E2, E3 and E6 to the ratio addition unit shown in Fig. 3E is set according to the preset fourth connection rule.
Fig. 3F shows a structural diagram of the sixth processing block according to an embodiment of the invention. As shown in Fig. 3F, in the sixth processing block, the first global average pooling layer NF1 and convolutional layers F2, F3 and F6 serve as the inputs; NF1 is followed by convolutional layer F1, F3 is followed in sequence by convolutional layers F4 and F5, and F6 is followed in sequence by convolutional layers F7, F8, F9 and F10. The outputs of F1, F2, F5 and F10 are concatenated, and the concatenated output serves as the input of both a ratio addition unit and the first global average pooling layer NF2. NF2 is followed in sequence by fully connected layer PF1, activation layer QF1, fully connected layer PF2 and activation layer QF2. The concatenated output and the output of QF2 are combined by ratio addition, and the result serves as the output of the block. The connection order from NF1, F2, F3 and F6 to the ratio addition unit shown in Fig. 3F is set according to the preset sixth connection rule.
It is worth noting that, in this embodiment, activation layer QD1 in the second processing block, activation layer QE1 in the fourth processing block and activation layer QF1 in the sixth processing block use the ReLU (Rectified Linear Unit) function as the activation function, while activation layers QD2, QE2 and QF2 in the second, fourth and sixth processing blocks use the Sigmoid function as the activation function. The activation functions adjust the output of the preceding layer, preventing each layer's output from being a mere linear combination of the previous layer's output and thereby allowing the network to approximate arbitrary functions.
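The two activation functions named above have simple closed forms; a minimal sketch for reference:

```python
import math

def relu(x):
    """Rectified Linear Unit: zero out negative activations."""
    return max(0.0, x)

def sigmoid(x):
    """Squash an activation into the (0, 1) range."""
    return 1.0 / (1.0 + math.exp(-x))

print(relu(-2.0), relu(3.0))   # 0.0 3.0
print(round(sigmoid(0.0), 2))  # 0.5
```

ReLU keeps the deep convolutional path cheap and gradient-friendly, while Sigmoid bounds the fully connected path's output to (0, 1), which suits its role as a modulating branch in the ratio addition.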
Table 3 shows example parameters of each convolutional layer in the second, fourth and sixth processing blocks according to an embodiment of the invention, and Table 4 shows example parameters of each fully connected layer in those blocks, as follows:
Table 3
Table 4
Next, in step S230, the second global average pooling layer, the first classifier, the second classifier and the third classifier are built respectively. According to one embodiment of the present invention, the first, second and third classifiers are all Softmax classifiers.
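A Softmax classifier of the kind used for all three output heads turns a vector of scores into a probability distribution; a numerically stable sketch:

```python
import math

def softmax(logits):
    """Convert raw scores to probabilities that sum to 1.

    Subtracting the maximum first avoids overflow in exp() without
    changing the result.
    """
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(round(sum(probs), 6))  # 1.0
```

In the patent's setting the three classifiers apply this over 3100, 268 and 18 scores respectively, and the largest probability indicates the predicted car series, brand or level.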
In step S240, a convolutional neural network is built from one or more first, second, third, fourth, fifth and sixth processing blocks, combined with the second global average pooling layer and the first, second and third classifiers; the network takes the first processing block as its input and the three classifiers as its outputs. According to one embodiment of the present invention, the convolutional neural network may be built in the following way. First, according to a preset network connection rule, the first, second, third, fourth, fifth and sixth processing blocks are connected, followed by the second global average pooling layer; then, after the second global average pooling layer, the first, second and third classifiers are each attached, producing a convolutional neural network with the first processing block as input and the first, second and third classifiers as outputs. In this embodiment, the numbers of first, third and fifth processing blocks are each 1, the number of second processing blocks is 3, the number of fourth processing blocks is 10, and the number of sixth processing blocks is 5.
To alleviate the problem of overfitting, according to a further embodiment of the invention, a dropout layer may also be added between the second global average pooling layer and the first, second and third classifiers when building the convolutional neural network; the dropout layer is connected to the second global average pooling layer and to each of the first, second and third classifiers. In this embodiment, the ratio of the dropout layer is preferably 0.8.
Fig. 4 shows a structural diagram of the convolutional neural network according to an embodiment of the invention. As shown in Fig. 4, the network takes the first processing block as input, followed in sequence by 3 second processing blocks connected in series, the third processing block, 10 fourth processing blocks connected in series, the fifth processing block, 5 sixth processing blocks connected in series, the second global average pooling layer and the dropout layer; the first, second and third classifiers are each attached after the dropout layer and serve as the output ends. The connection order of the processing units from the first processing block to the last sixth processing block shown in Fig. 4 is set according to the preset network connection rule. The network connection rule, as well as the preset first through sixth connection rules, may be adjusted appropriately according to the practical application scenario, network training situation, system configuration, performance requirements and so on; such adjustments will be readily apparent to those skilled in the art who understand the present solution, also fall within the protection scope of the present invention, and are not described further here.
Finally, step S250 is performed: the convolutional neural network is trained on a previously obtained vehicle image data set, so that the outputs of the first, second and third classifiers respectively indicate the car series, brand and level of the vehicle. The vehicle image data set comprises multiple items of vehicle image information, each item including a vehicle image together with the car series information, brand information and level information of the vehicle in the image. According to one embodiment of the present invention, the vehicle image of each item satisfies a preset size, preferably 299px × 299px, and is an RGB three-channel image. Its car series information is one of 3100 car series, such as Audi A4L or BMW 3 Series; its brand information is one of 268 brands, such as Audi, BMW or Volkswagen; and its level information is one of 18 car-series levels describing the size and form of the vehicle, such as compact car, mid-size car or large car.
The training process of the convolutional neural network is illustrated below, taking one item of vehicle image information L in the vehicle image data set as an example. Vehicle image information L comprises vehicle image K1 together with the car series information K2, brand information K3 and level information K4 of the vehicle in the image, where K2 is BMW 3 Series, K3 is BMW and K4 is mid-size car. During training, K1 serves as the input of the first processing block, while K2, K3 and K4 serve as the target outputs of the first, second and third classifiers respectively.
In the convolutional neural network, vehicle image K1 is first input to the first processing block; after the convolution, max pooling and concatenation of the first processing block, 384 feature maps of size 35px × 35px are output. These 384 feature maps of 35px × 35px are then input to the 1st second processing block; after the convolution, concatenation, global average pooling, full connection, activation and ratio addition of the 3 second processing blocks, 384 feature maps of size 35px × 35px are obtained. The 384 feature maps of 35px × 35px output by the 3rd second processing block are input to the third processing block; after its convolution and max pooling, 1024 feature maps of size 17px × 17px are output. Next, these 1024 feature maps of 17px × 17px are input to the 1st fourth processing block; after the convolution, concatenation, global average pooling, full connection, activation and ratio addition of the 10 fourth processing blocks, 1024 feature maps of size 17px × 17px are obtained. The 1024 feature maps of 17px × 17px output by the 10th fourth processing block are input to the fifth processing block; after its convolution and max pooling, 1728 feature maps of size 8px × 8px are output. These 1728 feature maps of 8px × 8px are input to the 1st sixth processing block; after the convolution, concatenation, global average pooling, full connection, activation and ratio addition of the 5 sixth processing blocks, 1728 feature maps of size 8px × 8px are output.
Further, the 1728 feature maps of 8px × 8px output by the 5th sixth processing block are input to the second global average pooling layer, which computes the mean of all pixels of each feature map; the output of the second global average pooling layer is therefore 1728 feature maps of 1px × 1px. Since a 1px × 1px feature map holds only a single pixel value, this output can be regarded as a feature vector of size 1 × 1728. Dropout processing is then carried out in the dropout layer. Dropout can be understood as model averaging: during forward propagation in training, the activation value of a neuron is suppressed with a certain probability p, i.e. set to 0 with probability p. The second global average pooling layer has 1728 neurons and the dropout ratio is set to 0.8, so after dropout about 1382 of these neuron values are set to 0. This alleviates overfitting by preventing the co-adaptation of certain features, i.e. it prevents a neuron from becoming dependent on another particular neuron.
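The global average pooling and training-time dropout just described can be sketched together (plain-Python illustration; two tiny 2 × 2 maps stand in for the 1728 maps of 8 × 8, and the seed is arbitrary):

```python
import random

def global_average_pool(fmaps):
    """Collapse each H x W feature map to the mean of all its pixels."""
    return [sum(sum(row) for row in f) / (len(f) * len(f[0])) for f in fmaps]

def train_dropout(vec, p, rng):
    """Training-time dropout: set each value to 0 with probability p."""
    return [0.0 if rng.random() < p else v for v in vec]

vec = global_average_pool([[[1.0, 3.0], [5.0, 7.0]], [[2.0, 2.0], [2.0, 2.0]]])
print(vec)  # [4.0, 2.0]

# Tile to a 1 x 1728 vector, then drop with ratio 0.8 as in the text.
dropped = train_dropout(vec * 864, 0.8, random.Random(0))
print(len(dropped))  # 1728
```

With p = 0.8 roughly 80% of the 1728 values are zeroed on each forward pass, matching the "about 1382" figure in the text; at inference time dropout is disabled.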
Since car series recognition is a multi-class classification problem, and in this embodiment the car series is one of 3100 series, the brand one of 268 brands and the level one of 18 levels, the first, second and third classifiers have 3100, 268 and 18 outputs respectively. The car series corresponding to the largest value output by the first classifier should be car series information K2, the brand corresponding to the largest probability output by the second classifier should be brand information K3, and the level corresponding to the largest probability output by the third classifier should be level information K4. To train the convolutional neural network, the outputs of the first, second and third classifiers are each adjusted toward the expected results for input vehicle image K1, namely car series BMW 3 Series, brand BMW and level mid-size car: based on the loss function and gradient computation, the error is backpropagated by minimization to adjust each parameter of the convolutional neural network. After training on a large number of items of vehicle image information in the vehicle image data set, the trained convolutional neural network is obtained.
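Reading a prediction off a trained head is simply an argmax over the classifier's probabilities followed by a label lookup; a toy sketch with made-up stand-in values (the real first classifier has 3100 outputs):

```python
def predict(probs, labels):
    """Return the label whose classifier probability is highest."""
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best]

series_probs = [0.05, 0.79, 0.16]                       # stand-in for 3100 outputs
series_labels = ["Audi A4L", "BMW 3 Series", "VW Golf"]  # stand-in label table
print(predict(series_probs, series_labels))  # BMW 3 Series
```

The same lookup is applied independently to the brand head (268 labels) and the level head (18 labels).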
As for the convolution, max pooling, global average pooling, full connection, activation, concatenation, ratio addition, softmax classification and other operations involved in the processing units of the convolutional neural network, i.e. the first through sixth processing blocks, the second global average pooling layer, the dropout layer and the first, second and third classifiers, their specific computation can be found in the related art and is not described further here.
The vehicle image data set used to train the convolutional neural network needs to be generated in advance. According to yet another embodiment of the present invention, the vehicle image data set may be generated in the following way. First, image processing is performed on each picture to be processed to obtain the vehicle image corresponding to that picture. Specifically, vehicle detection is first performed on the picture using a pre-trained vehicle detection model to obtain the vehicle position information of the picture, and the picture is then cropped according to that vehicle position information to obtain the corresponding vehicle image. The vehicle detection model may be obtained by training on public data sets such as VOC2007 and VOC2012 under the Faster R-CNN framework.
Next, the car series information associated with the picture corresponding to each vehicle image is obtained, and a vehicle image group is generated for each item of car series information according to the relation between the number of vehicle images corresponding to that car series and a preset first threshold; each vehicle image group contains exactly the first threshold number of vehicle images. According to one embodiment of the present invention, the vehicle image groups may be generated in the following way. For each item of car series information, the number of its corresponding vehicle images is first compared with the first threshold. If the number is greater than the first threshold, the vehicle images corresponding to that car series are randomly sampled to obtain the first threshold number of images, forming the corresponding vehicle image group. If the number equals the first threshold, all vehicle images corresponding to that car series are taken to form the corresponding vehicle image group. If the number is less than the first threshold, vehicle images are drawn from those corresponding to that car series and preprocessed, expanding the set until the first threshold number of images is reached, forming the corresponding vehicle image group. The preprocessing includes at least one of rotation, horizontal flipping, scaling, horizontal translation and vertical translation, and the first threshold is preferably 1500. After the vehicle image group of each car series is obtained, each vehicle image in each group is associated with its corresponding car series information, brand information and level information to generate the corresponding vehicle image information, and all vehicle image information is collected to form the vehicle image data set.
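The three-way balancing logic above can be sketched as follows (an illustrative version under the stated rules; `augment` stands in for any of the listed operations such as rotation or flipping, and the threshold of 6 stands in for the preferred 1500):

```python
import random

def balance_group(images, threshold, augment, rng):
    """Bring one car series' image list to exactly `threshold` images.

    Oversized groups are randomly subsampled; exact-sized groups are
    taken whole; undersized groups are expanded with augmented copies.
    """
    if len(images) > threshold:
        return rng.sample(images, threshold)
    if len(images) == threshold:
        return list(images)
    out = list(images)
    while len(out) < threshold:
        out.append(augment(rng.choice(images)))
    return out

rng = random.Random(42)
group = balance_group(["img%d" % i for i in range(4)], 6,
                      lambda im: im + "_aug", rng)
print(len(group))  # 6
```

Equalizing every group at the threshold keeps the 3100 car series classes balanced during training, so no series dominates the loss.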
Fig. 5 shows a flow chart of a car series recognition method 500 according to an embodiment of the invention. The car series recognition method 500 is suitable for execution in a computing device (such as computing device 100 shown in Fig. 1) and performs car series recognition on the vehicle in an image using the convolutional neural network trained by the above convolutional neural network generation method.
As shown in Fig. 5, method 500 begins at step S510. In step S510, the image to be recognized is processed to obtain the vehicle image to be recognized. According to one embodiment of the present invention, the image to be recognized R has corresponding car series information S1, brand information S2 and level information S3: S1 indicates that the car series of the vehicle in R is Audi A4L, S2 indicates that the brand of the vehicle in R is Audi, and S3 indicates that the level of the vehicle in R is mid-size car. In this embodiment, vehicle detection is first performed on R to obtain its vehicle position information; R is then cropped according to that position information, and the cropped image is resized and normalized to obtain the corresponding vehicle image to be recognized, T. Resizing scales the cropped image to the preset size of 299px × 299px, and normalization maps the pixel values of the image from the range 0-255 to the range 0-1. The processing of R here uses conventional image processing techniques such as cropping and smoothing to obtain a vehicle image T suitable as input to the convolutional neural network; these techniques will be readily apparent to those skilled in the art who understand the present solution, also fall within the protection scope of the present invention, and are not described further here.
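The normalization step above has a one-line definition; a minimal sketch for a single channel (resizing to 299 × 299 would normally be done by an image library and is omitted here):

```python
def normalize(pixels):
    """Map 8-bit pixel values (0-255) to the 0-1 range expected by the network."""
    return [[v / 255.0 for v in row] for row in pixels]

print(normalize([[0, 51, 255]]))  # [[0.0, 0.2, 1.0]]
```

The same scaling is applied to each of the three RGB channels before the 299px × 299px image is fed to the first processing block.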
Then, the method proceeds to step S520, in which the vehicle image to be recognized is input into the trained convolutional neural network for car series recognition. According to one embodiment of the present invention, vehicle image T is input into the trained convolutional neural network for car series recognition.
Next, in step S530, the outputs of the first, second and third classifiers of the trained convolutional neural network are obtained. According to one embodiment of the present invention, the output of the first classifier of the trained convolutional neural network is 3100 probability values with a maximum of 0.79, the output of the second classifier is 268 probability values with a maximum of 0.82, and the output of the third classifier is 18 probability values with a maximum of 0.86.
Finally, step S540 is performed: the car series of the vehicle in the image to be recognized is determined according to the outputs of the first, second and third classifiers. According to one embodiment of the present invention, for the first classifier, the probability value 0.79 is its 798th output and the associated car series information is Audi A4L; for the second classifier, the probability value 0.82 is its 168th output and the associated brand information is Audi; and for the third classifier, the probability value 0.86 is its 9th output and the associated level information is mid-size car. It can thus be determined that the car series of the vehicle in image R is Audi A4L, which is consistent with the true car series information S1.
Existing car series recognition methods fall broadly into two classes: algorithms based on manually extracted features and deep learning algorithms based on convolutional neural networks. Both classes suffer from relatively low recognition accuracy and cannot meet practical application demands. In the technical solution of convolutional neural network generation for car series recognition of the vehicle in an image according to embodiments of the present invention, one or more first, second, third, fourth, fifth and sixth processing blocks are first built respectively, then combined with the second global average pooling layer and the first, second and third classifiers into a convolutional neural network taking the first processing block as input and the first, second and third classifiers as outputs; finally, the network is trained on a previously obtained vehicle image data set so that the outputs of the three classifiers respectively indicate the car series, brand and level of the vehicle. In this technical solution, the convolutional neural network is constructed by connecting the first through sixth processing blocks according to the preset network connection rule, attaching the second global average pooling layer and then the first, second and third classifiers after it, while each processing block is formed according to its own preset connection rule; this ensures that the extracted features are substantially better than hand-engineered features, improving the precision of car series recognition. Moreover, the network is trained on an ultra-large-scale vehicle image data set covering most listed and unlisted car series in the country, with supervisory information such as brand category and car series level used as auxiliary training signals, further optimizing the recognition effect; the accuracy rate reaches over 90%. Accordingly, in the car series recognition method according to embodiments of the present invention, the vehicle image to be recognized is input into the trained convolutional neural network and the car series is determined from the outputs of the first, second and third classifiers, greatly improving the accuracy of the result.
A8. The method as in any one of A1-7, further comprising generating the vehicle image data set in advance, wherein the step of generating the vehicle image data set in advance comprises: performing image processing on each picture to be processed to obtain the vehicle image corresponding to the picture; obtaining the car series information associated with the picture corresponding to each vehicle image; generating, according to the relation between the number of vehicle images corresponding to each item of car series information and a preset first threshold, the vehicle image group corresponding to that car series information, the vehicle image group containing the first threshold number of vehicle images; associating each vehicle image in each vehicle image group with its corresponding car series information, brand information and level information to generate the corresponding vehicle image information; and collecting all vehicle image information to form the vehicle image data set.
A9. The method as in A8, wherein the step of generating the vehicle image group corresponding to each item of car series information according to the relation between the number of its corresponding vehicle images and the preset first threshold comprises: for each item of car series information, if the number of its corresponding vehicle images is greater than the first threshold, randomly sampling the vehicle images corresponding to that car series to obtain the first threshold number of images and form the corresponding vehicle image group; if the number of its corresponding vehicle images equals the first threshold, taking all vehicle images corresponding to that car series to form the corresponding vehicle image group; and if the number of its corresponding vehicle images is less than the first threshold, drawing a corresponding number of vehicle images from those of that car series for preprocessing, expanding the set to the first threshold number of images to form the corresponding vehicle image group.
A10. The method of A9, wherein the pre-processing comprises at least one of the following operations: rotation, horizontal flipping, scaling, horizontal translation and vertical translation.
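Most of A10's operations can be written as small pure functions on an image stored as a list of pixel rows (scaling is omitted here, and zero padding for the translations is an assumption; the claim does not specify how vacated pixels are filled):

```python
def hflip(img):
    """Horizontal flip: reverse each row."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate 90 degrees clockwise (one instance of the claimed rotation)."""
    return [list(r) for r in zip(*img[::-1])]

def htrans(img, dx):
    """Horizontal translation by dx pixels, zero-padding the vacated columns."""
    w = len(img[0])
    return [([0] * dx + row)[:w] if dx >= 0 else (row[-dx:] + [0] * (-dx))
            for row in img]

def vtrans(img, dy):
    """Vertical translation by dy rows, zero-padding the vacated rows."""
    h, w = len(img), len(img[0])
    blank = [[0] * w]
    return (blank * dy + img)[:h] if dy >= 0 else img[-dy:] + blank * (-dy)
```

Applying a random choice of these transforms to drawn images is what expands an under-sized group up to the first threshold in A9.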
B12. The method of B11, wherein the step of processing the image to be recognized to obtain the vehicle image to be recognized comprises: performing vehicle detection on the image to be recognized to obtain vehicle position information for the image; cropping the image to be recognized according to the vehicle position information; and performing size adjustment and normalization on the cropped image to obtain the corresponding vehicle image to be recognized.
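The B12 crop-resize-normalize chain can be sketched on a list-of-rows image. Nearest-neighbour resizing and [0, 1] normalization of 8-bit pixels are assumptions; the claim only requires "size adjustment and normalization":

```python
def crop(img, box):
    """Crop by a (top, left, bottom, right) bounding box (exclusive ends),
    i.e. the vehicle position information from the detection step."""
    t, l, b, r = box
    return [row[l:r] for row in img[t:b]]

def resize_nn(img, out_h, out_w):
    """Nearest-neighbour resize to the fixed input size the network expects."""
    h, w = len(img), len(img[0])
    return [[img[i * h // out_h][j * w // out_w] for j in range(out_w)]
            for i in range(out_h)]

def normalize(img):
    """Map 8-bit pixel values into [0, 1]."""
    return [[p / 255.0 for p in row] for row in img]

def preprocess(img, box, size=(224, 224)):
    """Full B12 chain: crop to the detected vehicle, resize, normalize."""
    return normalize(resize_nn(crop(img, box), *size))
```

A production pipeline would use an image library for the resize step; the point here is only the order of operations.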
In the specification provided here, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure an understanding of this description.
Similarly, it should be appreciated that, in order to streamline the disclosure and aid understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules, units or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiments, or alternatively may be located in one or more devices different from the devices in the examples. The modules in the foregoing examples may be combined into one module or may additionally be divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units or components of an embodiment may be combined into one module, unit or component, and may additionally be divided into multiple sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings), and all processes or units of any method or device so disclosed, may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, an equivalent or a similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the embodiments are described herein as methods, or as combinations of method elements, that can be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor with the necessary instructions for carrying out such a method or method element forms a means for carrying out the method or method element. Furthermore, an element of an apparatus embodiment described here is an example of a means for carrying out the function performed by that element for the purpose of carrying out the invention.
The various techniques described herein may be implemented in connection with hardware or software, or a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The memory is configured to store the program code; the processor is configured to execute, according to the instructions in the program code stored in the memory, the convolutional neural network generation method for car-series recognition of vehicles in images and/or the car-series recognition method of the present invention.
By way of example, and not limitation, computer-readable media comprise computer storage media and communication media. Computer storage media store information such as computer-readable instructions, data structures, program modules or other data. Communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Combinations of any of the above are also included within the scope of computer-readable media.
As used herein, unless otherwise specified, the use of the ordinals "first", "second", "third", etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, whether temporally, spatially, in ranking or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments can be devised within the scope of the invention thus described. Additionally, it should be noted that the language used in this specification has been principally selected for readability and instructional purposes, and not to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. With respect to the scope of the invention, the disclosure made herein is illustrative and not restrictive, the scope of the invention being defined by the appended claims.

Claims (10)

  1. A convolutional neural network generation method for recognizing the car series of a vehicle in an image, adapted to be executed in a computing device, the method comprising the steps of:
    building a first processing block, a third processing block and a fifth processing block respectively, each of the first, third and fifth processing blocks comprising one or more convolutional layers and a max-pooling layer;
    building a second processing block, a fourth processing block and a sixth processing block respectively, each of the second, fourth and sixth processing blocks comprising one or more convolutional layers, a first global average pooling layer, a fully connected layer and an activation layer;
    building a second global average pooling layer, a first classifier, a second classifier and a third classifier respectively;
    constructing a convolutional neural network from one or more of each of the first, second, third, fourth, fifth and sixth processing blocks, in combination with the second global average pooling layer, the first classifier, the second classifier and the third classifier, the convolutional neural network taking the first processing block as its input and the first, second and third classifiers as its outputs; and
    training the convolutional neural network on a pre-acquired vehicle image data set, so that the outputs of the first, second and third classifiers respectively indicate the series, brand and level of the vehicle, the vehicle image data set comprising multiple items of vehicle image information, each item comprising a vehicle image and the series information, brand information and level information of the vehicle in that image.
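Under the block counts of claims 6-7 (one each of blocks 1/3/5, and 3/10/5 of blocks 2/4/6), the claimed layout can be sketched in PyTorch. The claims do not state how the fully connected output inside blocks 2/4/6 is reused, so this sketch reads it as a squeeze-and-excitation-style channel gate; the channel widths, kernel sizes and dropout rate are likewise assumptions:

```python
import torch
import torch.nn as nn

class ConvPool(nn.Module):
    """Blocks 1/3/5: one or more convolutional layers plus a max-pooling layer."""
    def __init__(self, cin, cout):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.body(x)

class ConvGapFc(nn.Module):
    """Blocks 2/4/6: conv + first global average pool + fully connected +
    activation, interpreted here as a channel gate (an assumption)."""
    def __init__(self, c):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU())
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(c, c), nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.conv(x)
        return y * self.gate(y)[:, :, None, None]  # re-weight channels

class CarNet(nn.Module):
    """Block layout 1/3/1/10/1/5 (claims 6-7), second global average pool,
    dropout (claim 5), and three heads for series / brand / level."""
    def __init__(self, n_series, n_brand, n_level, width=32):
        super().__init__()
        stages = [ConvPool(3, width)]
        stages += [ConvGapFc(width) for _ in range(3)]
        stages += [ConvPool(width, width * 2)]
        stages += [ConvGapFc(width * 2) for _ in range(10)]
        stages += [ConvPool(width * 2, width * 4)]
        stages += [ConvGapFc(width * 4) for _ in range(5)]
        self.features = nn.Sequential(*stages)
        self.gap2 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.drop = nn.Dropout(0.5)
        self.series_head = nn.Linear(width * 4, n_series)
        self.brand_head = nn.Linear(width * 4, n_brand)
        self.level_head = nn.Linear(width * 4, n_level)

    def forward(self, x):
        h = self.drop(self.gap2(self.features(x)))
        return self.series_head(h), self.brand_head(h), self.level_head(h)
```

During training, one cross-entropy loss per head would typically be computed and summed, so the shared trunk learns series, brand and level jointly, as the final step of claim 1 requires.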
  2. The method of claim 1, wherein the step of building the first, third and fifth processing blocks respectively comprises:
    connecting the convolutional layers and the max-pooling layer according to a preset first concatenation rule, a preset third concatenation rule and a preset fifth concatenation rule respectively, so as to correspondingly form the first processing block, the third processing block and the fifth processing block.
  3. The method of claim 1 or 2, wherein the step of building the second, fourth and sixth processing blocks respectively comprises:
    connecting the convolutional layers, the first global average pooling layer, the fully connected layer and the activation layer according to a preset second concatenation rule, a preset fourth concatenation rule and a preset sixth concatenation rule respectively, so as to correspondingly form the second processing block, the fourth processing block and the sixth processing block.
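The "concatenation rule" of claims 2-3 amounts to chaining layers in a prescribed order. A minimal rule-driven assembler, with the layer names and factories purely illustrative:

```python
def assemble(rule, layer_factories):
    """Build a processing block by chaining layers in the order given by a
    concatenation rule. `rule` is a sequence of layer names; `layer_factories`
    maps each name to a zero-argument function returning a callable layer."""
    layers = [layer_factories[name]() for name in rule]

    def block(x):
        # apply each layer in the rule's order
        for layer in layers:
            x = layer(x)
        return x

    return block
```

A first concatenation rule might then read `["conv", "conv", "maxpool"]` while a second reads `["conv", "gap", "fc", "act"]`, so the same assembler yields both block families.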
  4. The method of any one of claims 1-3, wherein the step of constructing the convolutional neural network from one or more of each of the first, second, third, fourth, fifth and sixth processing blocks, in combination with the second global average pooling layer, the first classifier, the second classifier and the third classifier, comprises:
    connecting the second global average pooling layer after the first, second, third, fourth, fifth and sixth processing blocks have been connected according to a preset network connection rule; and
    adding, after the second global average pooling layer, the connected first classifier, second classifier and third classifier respectively, so as to construct the convolutional neural network taking the first processing block as its input and the first, second and third classifiers as its outputs.
  5. The method of claim 4, wherein the step of constructing the convolutional neural network further comprises:
    adding a dropout layer between the second global average pooling layer and the first, second and third classifiers, so as to construct the convolutional neural network taking the first processing block as its input and the first, second and third classifiers as its outputs, the dropout layer being connected to the second global average pooling layer and, respectively, to the first, second and third classifiers.
  6. The method of any one of claims 1-4, wherein the number of first processing blocks, the number of third processing blocks and the number of fifth processing blocks are each 1.
  7. The method of any one of claims 1-5, wherein the number of second processing blocks is 3, the number of fourth processing blocks is 10, and the number of sixth processing blocks is 5.
  8. A car-series recognition method, adapted to be executed in a computing device, the method performing car-series recognition on a vehicle in an image based on a convolutional neural network trained as claimed in any one of claims 1-7, and comprising the steps of:
    processing an image to be recognized to obtain a vehicle image to be recognized;
    inputting the vehicle image to be recognized into the trained convolutional neural network for car-series recognition;
    obtaining the outputs of the first classifier, the second classifier and the third classifier of the trained convolutional neural network; and
    determining the car series of the vehicle in the image to be recognized according to the outputs of the first, second and third classifiers.
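The four steps of claim 8 can be sketched end to end. All four callables are hypothetical stand-ins: `detect` returns a bounding box, `preprocess` crops/resizes/normalizes (B12), `net` returns three score lists from the series, brand and level classifiers, and `labels` maps each head's index to a name:

```python
def identify(image, detect, preprocess, net, labels):
    """Claim-8 pipeline: detect the vehicle, preprocess the crop, run the
    trained network, and read off the three classifier outputs."""
    x = preprocess(image, detect(image))
    outputs = net(x)  # (series_scores, brand_scores, level_scores)
    argmax = lambda s: max(range(len(s)), key=s.__getitem__)
    return tuple(labels[h][argmax(outputs[h])] for h in range(3))
```

The car series is the first element of the returned triple; the brand and level outputs come along for free because all three heads share one forward pass.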
  9. A computing device, comprising:
    one or more processors;
    a memory; and
    one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any one of the methods of claims 1-7 and/or the method of claim 8.
  10. A computer-readable storage medium storing one or more programs, the one or more programs comprising instructions which, when executed by a computing device, cause the computing device to perform any one of the methods of claims 1-7 and/or the method of claim 8.
CN201711098051.2A 2017-11-09 2017-11-09 Convolutional neural network generation method, vehicle system identification method and computing device Active CN107832794B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711098051.2A CN107832794B (en) 2017-11-09 2017-11-09 Convolutional neural network generation method, vehicle system identification method and computing device


Publications (2)

Publication Number Publication Date
CN107832794A true CN107832794A (en) 2018-03-23
CN107832794B CN107832794B (en) 2020-07-14

Family

ID=61654937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711098051.2A Active CN107832794B (en) 2017-11-09 2017-11-09 Convolutional neural network generation method, vehicle system identification method and computing device

Country Status (1)

Country Link
CN (1) CN107832794B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109730656A (en) * 2019-01-09 2019-05-10 中国科学院苏州纳米技术与纳米仿生研究所 Nerve network system, computer equipment for pulse wave signal classification
CN110222748A (en) * 2019-05-27 2019-09-10 西南交通大学 OFDM Radar Signal Recognition method based on the fusion of 1D-CNN multi-domain characteristics
CN110827208A (en) * 2019-09-19 2020-02-21 重庆特斯联智慧科技股份有限公司 General pooling enhancement method, device, equipment and medium for convolutional neural network
CN111126224A (en) * 2019-12-17 2020-05-08 成都通甲优博科技有限责任公司 Vehicle detection method and classification recognition model training method
CN111291715A (en) * 2020-02-28 2020-06-16 安徽大学 Vehicle type identification method based on multi-scale convolutional neural network, electronic device and storage medium
CN111427541A (en) * 2020-03-30 2020-07-17 太原理工大学 Machine learning-based random number online detection system and method
CN111986080A (en) * 2020-07-17 2020-11-24 浙江工业大学 Logistics vehicle feature positioning method based on improved master R-CNN
US11361585B2 (en) * 2018-06-11 2022-06-14 Zkteco Usa Llc Method and system for face recognition via deep learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046272A (en) * 2015-06-29 2015-11-11 电子科技大学 Image classification method based on concise unsupervised convolutional network
CN105139395A (en) * 2015-08-19 2015-12-09 西安电子科技大学 SAR image segmentation method based on wavelet pooling convolutional neural networks
CN105574550A (en) * 2016-02-02 2016-05-11 北京格灵深瞳信息技术有限公司 Vehicle identification method and device
CN106529578A (en) * 2016-10-20 2017-03-22 中山大学 Vehicle brand model fine identification method and system based on depth learning
CN106570477A (en) * 2016-10-28 2017-04-19 中国科学院自动化研究所 Vehicle model recognition model construction method based on depth learning and vehicle model recognition method based on depth learning


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11361585B2 (en) * 2018-06-11 2022-06-14 Zkteco Usa Llc Method and system for face recognition via deep learning
CN109730656A (en) * 2019-01-09 2019-05-10 中国科学院苏州纳米技术与纳米仿生研究所 Nerve network system, computer equipment for pulse wave signal classification
CN110222748A (en) * 2019-05-27 2019-09-10 西南交通大学 OFDM Radar Signal Recognition method based on the fusion of 1D-CNN multi-domain characteristics
CN110222748B (en) * 2019-05-27 2022-12-20 西南交通大学 OFDM radar signal identification method based on 1D-CNN multi-domain feature fusion
CN110827208A (en) * 2019-09-19 2020-02-21 重庆特斯联智慧科技股份有限公司 General pooling enhancement method, device, equipment and medium for convolutional neural network
CN111126224A (en) * 2019-12-17 2020-05-08 成都通甲优博科技有限责任公司 Vehicle detection method and classification recognition model training method
CN111291715A (en) * 2020-02-28 2020-06-16 安徽大学 Vehicle type identification method based on multi-scale convolutional neural network, electronic device and storage medium
CN111291715B (en) * 2020-02-28 2023-03-10 安徽大学 Vehicle type identification method based on multi-scale convolutional neural network, electronic device and storage medium
CN111427541A (en) * 2020-03-30 2020-07-17 太原理工大学 Machine learning-based random number online detection system and method
CN111427541B (en) * 2020-03-30 2022-03-04 太原理工大学 Machine learning-based random number online detection system and method
CN111986080A (en) * 2020-07-17 2020-11-24 浙江工业大学 Logistics vehicle feature positioning method based on improved master R-CNN
CN111986080B (en) * 2020-07-17 2024-01-16 浙江工业大学 Logistics vehicle feature positioning method based on improved master R-CNN

Also Published As

Publication number Publication date
CN107832794B (en) 2020-07-14

Similar Documents

Publication Publication Date Title
CN107832794A (en) A kind of convolutional neural networks generation method, the recognition methods of car system and computing device
CN109522942B (en) Image classification method and device, terminal equipment and storage medium
Yi et al. ASSD: Attentive single shot multibox detector
CN108171701B (en) Significance detection method based on U network and counterstudy
CN112084331A (en) Text processing method, text processing device, model training method, model training device, computer equipment and storage medium
CN110990631A (en) Video screening method and device, electronic equipment and storage medium
CN113822209B (en) Hyperspectral image recognition method and device, electronic equipment and readable storage medium
CN109034206A (en) Image classification recognition methods, device, electronic equipment and computer-readable medium
CN113449700B (en) Training of video classification model, video classification method, device, equipment and medium
Zeng et al. LEARD-Net: Semantic segmentation for large-scale point cloud scene
CN112801146A (en) Target detection method and system
US20220122351A1 (en) Sequence recognition method and apparatus, electronic device, and storage medium
CN113761153A (en) Question and answer processing method and device based on picture, readable medium and electronic equipment
CN112232346A (en) Semantic segmentation model training method and device and image semantic segmentation method and device
CN111680678A (en) Target area identification method, device, equipment and readable storage medium
CN110728295A (en) Semi-supervised landform classification model training and landform graph construction method
CN114283351A (en) Video scene segmentation method, device, equipment and computer readable storage medium
CN115083435A (en) Audio data processing method and device, computer equipment and storage medium
CN114282059A (en) Video retrieval method, device, equipment and storage medium
CN113569607A (en) Motion recognition method, motion recognition device, motion recognition equipment and storage medium
CN115131634A (en) Image recognition method, device, equipment, storage medium and computer program product
CN112183303A (en) Transformer equipment image classification method and device, computer equipment and medium
Jee et al. Efficacy determination of various base networks in single shot detector for automatic mask localisation in a post COVID setup
CN110717405A (en) Face feature point positioning method, device, medium and electronic equipment
WO2023173552A1 (en) Establishment method for target detection model, application method for target detection model, and device, apparatus and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant