CN110348355A - Vehicle model recognition method based on reinforcement learning - Google Patents

Vehicle model recognition method based on reinforcement learning

Info

Publication number
CN110348355A
Authority
CN
China
Prior art keywords
focused image
model
viewpoint
image
state
Prior art date
Legal status
Pending
Application number
CN201910589073.1A
Other languages
Chinese (zh)
Inventor
孙伟
张国策
张小瑞
张旭
孙敏
Current Assignee
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN201910589073.1A
Publication of CN110348355A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/19 Recognition using electronic means
    • G06V30/192 Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V30/194 References adjustable by an adaptive method, e.g. learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems


Abstract

The invention discloses a vehicle model recognition method based on reinforcement learning, intended to fully exploit the advantages of CNNs in feature extraction and classification so as to provide a solution for accurate and robust fine-grained vehicle recognition. The method imitates the human visual attention mechanism by designing a class-saliency-driven visual attention model and an automatic viewpoint selection method. The visual attention model is built from an attention mapping matrix and visual focusing templates; viewpoints are selected autonomously by a reinforcement learning algorithm based on SARSA, so that the model adaptively chooses the image regions best suited for recognition and achieves the best vehicle recognition result. The method not only overcomes the drawback that traditional hand-crafted feature extraction algorithms cannot adapt to changes in the position, scale and outline of a vehicle in the image, but also copes with the challenges brought by changes in camera shooting angle and by vehicle occlusion.

Description

Vehicle model recognition method based on reinforcement learning
Technical field
The present invention relates to the technical field of computer vision, and in particular to a vehicle model recognition method based on reinforcement learning.
Background technique
Humans acquire information mainly through vision. With the research and development of computer technology, people have begun to use computers to simulate the functions of human vision, replacing the human eye and brain in perceiving, interpreting and understanding the visual environment; this gave rise to the discipline of computer vision. Computer vision is a hot topic in artificial intelligence; it is a comprehensive discipline that merges research methods and results from fields such as signal processing, pattern recognition, applied mathematics and neurophysiology.
In computer vision, vehicle model recognition is a fundamental problem, and it essentially consists in identifying the salient features of a vehicle. Regarding the selection of salient vehicle features, psychological experiments have verified that neurons in the retinal ganglion cell layer mainly carry primary feature information such as color, texture, shape and disparity. Humans cannot complete cognitive tasks relying solely on primary visual features; visual cognition therefore involves more complex processing mechanisms that further process these low-level visual features into the deep features that drive visual classification. Borrowing the visual feature processing mechanism of the human brain, fully mining the diversity and class saliency of vehicle features with a CNN, and optimizing and combining the generated vehicle features into complementary features, will improve the expressive power of vehicle features.
The present invention aims to imitate the visual attention mechanism of human visual cognition, to enhance, under the guidance of this mechanism, the focus of computer vision learning and recognition, and to fully exploit the advantages of CNNs in feature extraction and classification. The research is intended to improve the salient-feature optimization and combination method and the automatic selection model for salient component regions that enable fine-grained vehicle recognition, providing a solution for accurate and robust fine-grained online vehicle recognition.
Summary of the invention
Object of the invention: the object of the present invention is to provide a vehicle model recognition method based on reinforcement learning, mainly addressing the challenges brought by changes in camera shooting angle and by vehicle occlusion, and the inability of hand-crafted methods to adapt to changes in the position, scale and outline of a vehicle in the image.
Technical solution: the vehicle model recognition method based on reinforcement learning of the present invention comprises the following steps:
Step 1: form a data set from vehicle pictures taken by surveillance cameras, and read input images X from the data set in order;
Step 2: build a visual focuser. The visual focuser consists of two parts, a mapping focus function and a focusing template set, where the mapping focus function is computed as follows:
X_f = f_d(X, φ) = φ ⊙ X (1)
Here, X ∈ R^(M×N) and X_f ∈ R^(M×N) are the pixel matrices of the original image and the focused image, M and N are the height and width of an image, φ ∈ R^(M×N) is the viewpoint weight matrix, and ⊙ denotes elementwise multiplication. The focusing template set is the set of shapes delimiting the region of the clear image obtained once a viewpoint has been determined. The effect of the visual focuser is to transform the original image X into the focused image X_f, keeping the region of interest in the image sharp while the other positions are blurred;
Step 3: initialize a visual focus point of the original image X, feed the original image X into the visual focuser to obtain the focused image X_f, and add the initial region to the key region set;
Step 4: use an improved VGG16 as the detection model: remove the last fully connected layer fc8 of the VGG16 model, replace it with a new fully connected layer fc8' whose output has 5 classes, and train the parameters of the whole network on the input data set to obtain the final detection model;
Step 5: use the detection model, i.e. the improved VGG16 model, to output the probability p_c of belonging to each vehicle class. Define P = {p_1, ..., p_C} as the model's predicted probability distribution, where C is the number of vehicle classes and p_c is the probability that the input focused image belongs to class c. The class with the maximum p_c is taken as the final classification of the input image:
ŷ = argmax_c p_c (2)
Step 6: compute the information entropy H(P) of the class probabilities P of the current focused image X_f, and use it as an evaluation index for the confidence of the result on the focused image. The information entropy is computed as:
H(P) = -Σ_(c=1)^C p_c log p_c (3)
Normalizing the information entropy with a constant factor (its maximum, log C) gives:
H̄(P) = H(P) / log C (4)
Step 7: establish an autonomous viewpoint selection model with reinforcement learning. The reward of the current state is characterized by the change in information entropy, and a reward function R_a(s, s') is designed that maps each state to a scalar expressing its intrinsic desirability. The visual focuser feeds the current state s and the reward r back to the SARSA-based automatic viewpoint selection model, which selects the next viewpoint; a template is then chosen by roulette-wheel selection from the focusing template set, which is composed of templates with different shape characteristics, to generate another focused image X'_f of the original image X;
Step 8: repeat steps 4, 5 and 6, accumulating the searched key regions, to find all key regions that are effective for vehicle model recognition; when the output of the detection network has a confidence higher than a given threshold, or more key regions have been found than a given threshold, stop searching for key regions;
Step 9: mark all key regions on the original image X to obtain the salient vehicle component regions, so that the detection model outputs a highly credible classification result; the final vehicle classification is given by formula (2).
Beneficial effects: the vehicle model recognition method of the invention, based on the reinforcement learning SARSA algorithm, imitates the human attention mechanism and develops a class-saliency-driven automatic viewpoint selection method and visual attention model, improving the accuracy and robustness of fine-grained vehicle recognition. The invention fully exploits the advantages of CNNs in feature extraction and classification; the research aims to improve the salient-feature optimization and combination method and the automatic selection model for salient component regions, providing a solution for accurate and robust fine-grained online vehicle recognition. By building a visual attention model based on an attention mapping matrix and a viewpoint selection model based on the reinforcement learning SARSA algorithm, the invention copes with changes in camera shooting angle and with vehicle occlusion, and remedies the shortcoming that critical component regions determined by traditional hand-crafted algorithms cannot adapt, in subsequent recognition, to changes in the position, scale and outline of the vehicle in the image.
Detailed description of the invention
Fig. 1 is the flow chart of the vehicle model recognition method of the invention;
Fig. 2 is an original image;
Fig. 3 is a focused image.
Specific embodiment
As shown in Fig. 1, a vehicle model recognition method based on reinforcement learning comprises the following steps:
Step 1: form a data set from vehicle pictures taken by surveillance cameras; the vehicles in the data set are divided into 5 classes in total: sedan, van, truck, SUV and coach. Input images X are read from the data set in order; the input images are in JPEG format, with a size of 150 × 150;
Step 2: build a visual focuser, i.e. construct a mapping focus function and a focusing template set to transform the original image X into the focused image X_f, keeping the region of interest in the image sharp while the other positions are blurred;
(1) The mapping focus function is computed as follows:
X_f = f_d(X, φ) = φ ⊙ X
Here, X ∈ R^(M×N) and X_f ∈ R^(M×N) are the pixel matrices of the original image and the focused image, M and N are the height and width of an image, φ ∈ R^(M×N) is the viewpoint weight matrix, and ⊙ denotes elementwise multiplication.
The viewpoint weight matrix φ is associated with a given focus point (u, v) and with the distance r between (u, v) and another point (i, j) in the image.
(2) The element φ_ij of the viewpoint weight matrix can be regarded as the attention ratio of the image at the pixel at position (i, j); φ_ij is computed as a function of the distance r = √((u-i)² + (v-j)²), in which the adjustment parameters α and β determine the shape of the function and are corrected by training; in practical implementation the focus point is selected randomly. With α = -0.0446 and β = 60, the best validation accuracy and a reasonable training time are obtained.
(3) The focusing template set is the set of shapes delimiting the region of the clear image obtained once the viewpoint has been determined. The template set specifically includes an original rectangular frame of 80 × 60 pixels, an original square frame of 80 × 80 pixels, and an original circular frame with a diameter of 80 pixels, together with these original frames enlarged 1.2 and 1.6 times. After a viewpoint has been selected autonomously, one template is chosen so as to convert the original image into the focused image; a sketch of this focuser is given below.
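As a concrete illustration, here is a minimal NumPy sketch of the visual focuser. The exact expression for φ_ij is not legible in the source, so a Gaussian-style falloff in the distance r, shaped by the parameters α and β, is assumed purely for illustration, and the template logic is reduced to the weight matrix itself.

```python
import numpy as np

def viewpoint_weights(height, width, u, v, alpha=-0.0446, beta=60.0):
    """Viewpoint weight matrix phi (attention ratio per pixel).

    The patent's exact formula for phi_ij is not reproduced in the source;
    a Gaussian-style falloff in the distance r from the focus point (u, v),
    shaped by alpha and beta, is assumed here for illustration only.
    """
    i, j = np.mgrid[0:height, 0:width]
    r = np.sqrt((u - i) ** 2 + (v - j) ** 2)   # distance to the focus point
    return np.exp(alpha * (r / beta) ** 2)     # hypothetical falloff shape

def focus(image, u, v):
    """Mapping focus function, formula (1): X_f = phi * X (elementwise)."""
    phi = viewpoint_weights(image.shape[0], image.shape[1], u, v)
    if image.ndim == 3:                        # broadcast over RGB channels
        phi = phi[..., None]
    return phi * image
```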
Step 3: randomly select a visual attention point of the input image as the initialization input; based on this attention point, transform the original image X into the focused image X_f through the mapping focus function, and add it to the key region set as the initial key region;
Step 4: obtain the detection model, an improved VGG16, by model fine-tuning:
(1) remove the fc8 layer of VGG16 and replace it with a new layer fc8' whose output has 5 classes;
(2) train the parameters of the whole network on the data set to obtain the final detection model; a sketch of this step follows.
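A hedged Keras sketch of this fine-tuning step; the patent does not name the framework used here, so the Keras applications API is assumed, and the dense layers standing in for fc6/fc7 are illustrative:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# (1) Take VGG16 without its original classifier head (which contained fc8)
base = VGG16(weights="imagenet", include_top=False, input_shape=(150, 150, 3))

# Rebuild the head, ending in a new 5-class fully connected layer fc8'
x = layers.Flatten()(base.output)
x = layers.Dense(4096, activation="relu")(x)   # stand-in for fc6
x = layers.Dense(4096, activation="relu")(x)   # stand-in for fc7
out = layers.Dense(5, activation="softmax", name="fc8_prime")(x)

# (2) Train the whole network on the 5-class vehicle data set
model = models.Model(base.input, out)
model.compile(optimizer="sgd", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=...)
```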
Step 5: use ResNet101 as the detection model; the model outputs the probability p_c of belonging to each vehicle class. Define P = {p_1, ..., p_C} as the model's predicted probability distribution, where C is the number of vehicle classes and p_c is the probability that the input focused image belongs to class c. The class with the maximum p_c is taken as the final classification of the input image, ŷ = argmax_c p_c, as in formula (2).
(1) Read the input focused image X_f and call ImageDataGenerator, a built-in function of the deep learning framework Keras, to decode the JPEG file into an RGB pixel grid, convert the pixel grid into a floating-point tensor (values in the range 0 to 255), and finally rescale the pixel values into the interval [0, 1];
(2) feed the preprocessed focused image X_f into the ResNet101 detection model and predict the label of the image, i.e. the vehicle class; a sketch of this preprocessing and prediction step follows.
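A sketch of this step using the Keras ImageDataGenerator named above; the directory layout, the `detector` model object and the class ordering are assumptions for illustration:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

CLASSES = ["sedan", "van", "truck", "SUV", "coach"]  # assumed ordering

# Decodes JPEG files to RGB pixel grids, converts them to float tensors
# (values in 0-255) and rescales the pixel values into [0, 1].
datagen = ImageDataGenerator(rescale=1.0 / 255)
flow = datagen.flow_from_directory("focused_images/", target_size=(150, 150),
                                   class_mode=None, batch_size=1, shuffle=False)

batch = next(flow)                        # one preprocessed focused image X_f
probs = detector.predict(batch)[0]        # detector: the trained model of step 4
label = CLASSES[int(np.argmax(probs))]    # formula (2): class with maximum p_c
```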
Step 6: compute the information entropy H(P) of the class probabilities P of the current focused image X_f, and use it as an evaluation index for the confidence of the result on the focused image. The information entropy is computed as:
H(P) = -Σ_(c=1)^C p_c log p_c
Normalizing the information entropy with a constant factor (its maximum, log C) gives:
H̄(P) = H(P) / log C
A small sketch of this computation follows.
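The entropy of formulas (3) and (4) is a short helper; a minimal sketch:

```python
import numpy as np

def normalized_entropy(probs):
    """Information entropy of the class distribution P, formula (3),
    divided by its maximum log C, formula (4), so the result lies in
    [0, 1]; low entropy means a confident prediction."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)  # avoid log(0)
    return float(-np.sum(p * np.log(p)) / np.log(len(p)))
```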
Step 7: establish the autonomous viewpoint selection model with reinforcement learning, that is, train an agent to find useful regions by maximizing the cumulative reward. The reward of the current state is characterized by the change in information entropy; a reward function R_a(s, s') maps each state to a scalar expressing its intrinsic desirability. The visual focuser feeds the current state s and the reward r back to the SARSA-based automatic viewpoint selection model, which selects the next viewpoint; a template is then chosen from the focusing template set by roulette-wheel selection to generate another focused image X'_f of the original image X.
(1) Establish the viewpoint selection model, using the SARSA algorithm of reinforcement learning for automatic selection. The input of this model is the current state, i.e. the focused image at the current viewpoint together with the reward; the output is the next state, i.e. the focused image obtained through the selected viewpoint. It can be expressed by the following formula:
Q(s, a) ← (1 - α) Q(s, a) + α [r + γ Q(s', a')]
where the state s stands for the focused image X_f(t), the action a stands for the selection of a focus point, and α is the learning rate, set to α = 0.9. At each step, the reinforcement learner selects an action a from the available action set A; the environment then exhibits a new state s' and also provides a reward r to the agent after action a has been executed; γ is the decay factor, set to γ = e^(-2).
(2) The reward value r of the current state is characterized by the change in information entropy; the invention uses a reward function to map each state to a scalar expressing its intrinsic desirability. The reward function is a piecewise function R_a(s, s'): when the transition yields a correct detection result, i.e. the prediction ŷ equals the label y* of the training image, and the information entropy decreases, i.e. the entropy H'(X_f(t)) of the new focused image is lower than the entropy H'(X_f*) at the current time, the reward is 1; otherwise the reward is less than 1. For a positive reward, the currently selected region block is stored and superimposed onto the key region set, and the focused image set is updated.
(3) At the current time t, the state s is the focused image X_f(t). The state-action value of an action a in state s can be computed by the Q network F_Q:
Q(s, a) = F_Q(X_f(t); θ_Q)
where Q(s, a) denotes the value, in state s, of the action a among the full action set A, and θ_Q denotes the parameters of the Q network F_Q. In the present invention, the action set A comprises 8 actions, one step in each of 8 directions; moving one step in a given direction transfers the current focus point to a new focus point.
(4) Record the current state s, i.e. the focused image X_f(t); execute the initial action a, i.e. the initially selected viewpoint; receive the reward r and the new state s', i.e. another focused image; and, according to the current Q function, randomly select the action a' to be executed next time, select the next viewpoint, and update Q(s, a) with the formula in (1). A combined sketch of sub-steps (1) to (4) follows.
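Pulling sub-steps (1) to (4) together, here is a compact sketch of the reward rule, the roulette-wheel template choice and the SARSA update. The Q function is kept as a simple table keyed by (state, action) rather than the network F_Q, the "otherwise" reward value is a placeholder (the source only states that it is less than 1), and the state encoding is illustrative:

```python
import math
import random

ALPHA, GAMMA = 0.9, math.exp(-2)                # learning rate, decay factor
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1),
           (-1, -1), (-1, 1), (1, -1), (1, 1)]  # one step in 8 directions

def reward(pred_label, true_label, new_entropy, old_entropy, penalty=0.0):
    """R_a(s, s'): 1 when the prediction is correct and the entropy has
    decreased; otherwise a value below 1 (placeholder, not in the source)."""
    if pred_label == true_label and new_entropy < old_entropy:
        return 1.0
    return penalty

def roulette_pick(templates, weights):
    """Roulette-wheel selection: pick a focusing template with probability
    proportional to its weight."""
    x, acc = random.uniform(0, sum(weights)), 0.0
    for template, w in zip(templates, weights):
        acc += w
        if x <= acc:
            return template
    return templates[-1]

def sarsa_update(Q, s, a, r, s_next, a_next):
    """Q(s,a) <- (1-alpha) Q(s,a) + alpha [r + gamma Q(s',a')].
    Q is a dict here; in the patent it is a network F_Q(X_f(t); theta_Q)."""
    Q[(s, a)] = ((1 - ALPHA) * Q.get((s, a), 0.0)
                 + ALPHA * (r + GAMMA * Q.get((s_next, a_next), 0.0)))
```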
Step 8: repeat steps 4), 5) and 6), accumulating the searched key regions, to find all key regions that are effective for vehicle model recognition. When the class probability output by the detection network is higher than the threshold p, or the number n of key regions found exceeds the given threshold n*, stop searching for key regions; p is set to 0.95 and n* is set to 6. A sketch of this search loop is given below.
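How the pieces fit into the step-8 stopping rule can be sketched as follows; `init_viewpoint`, `focuser`, `detector` and `select_next_viewpoint` are illustrative placeholders for the components built in steps 2 to 7, and `normalized_entropy` is the helper sketched under step 6:

```python
P_STOP, N_STOP = 0.95, 6   # thresholds p and n* from step 8

def search_key_regions(image, detector, focuser, select_next_viewpoint,
                       init_viewpoint):
    key_regions = []
    viewpoint = init_viewpoint(image)                    # step 3
    while True:
        focused = focuser(image, viewpoint)              # step 2
        probs = detector(focused)                        # steps 4 and 5
        if probs.max() > P_STOP or len(key_regions) >= N_STOP:
            return key_regions                           # step 8 stopping rule
        h = normalized_entropy(probs)                    # step 6
        viewpoint, region = select_next_viewpoint(focused, h)  # step 7 (SARSA)
        key_regions.append(region)                       # accumulate key regions
```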
Step 9: mark all key regions on the original image X to obtain the salient vehicle component regions, so that the detection model outputs a highly credible classification result; the final vehicle classification is given by ŷ = argmax_c p_c, as in formula (2).
As can be seen from Fig. 2 and Fig. 3, the processed focused image highlights the key region and eliminates the interference of the surrounding background information, which is more conducive to fine-grained vehicle recognition.

Claims (5)

1. A vehicle model recognition method based on reinforcement learning, characterized by the following steps:
Step 1: form a data set from vehicle pictures taken by surveillance cameras, and read input images X from the data set in order;
Step 2: build a visual focuser consisting of two parts, a mapping focus function and a focusing template set, where the mapping focus function is computed as follows:
X_f = f_d(X, φ) = φ ⊙ X (1)
Here, X ∈ R^(M×N) and X_f ∈ R^(M×N) are the pixel matrices of the original image and the focused image, M and N are the height and width of an image, φ ∈ R^(M×N) is the viewpoint weight matrix, and ⊙ denotes elementwise multiplication; the focusing template set is the set of shapes delimiting the region of the clear image obtained once a viewpoint has been determined; the effect of the visual focuser is to transform the original image X into the focused image X_f, keeping the region of interest in the image sharp while the other positions are blurred;
Step 3: initialize a visual focus point of the original image X, feed the original image X into the visual focuser to obtain the focused image X_f, and add the initial region to the key region set;
Step 4: use an improved VGG16 as the detection model: remove the last fully connected layer fc8 of the VGG16 model, replace it with a new fully connected layer fc8' whose output has 5 classes, and train the parameters of the whole network on the input data set to obtain the final detection model;
Step 5: use the detection model, i.e. the improved VGG16 model, to output the probability p_c of belonging to each vehicle class; define P = {p_1, ..., p_C} as the model's predicted probability distribution, where C is the number of vehicle classes and p_c is the probability that the input focused image belongs to class c; the class with the maximum p_c is taken as the final classification of the input image:
ŷ = argmax_c p_c (2)
Step 6: compute the information entropy H(P) of the class probabilities P of the current focused image X_f, and use it as an evaluation index for the confidence of the result on the focused image; the information entropy is computed as:
H(P) = -Σ_(c=1)^C p_c log p_c (3)
Normalizing the information entropy with a constant factor (its maximum, log C) gives:
H̄(P) = H(P) / log C (4)
Step 7: establish an autonomous viewpoint selection model with reinforcement learning; the reward of the current state is characterized by the change in information entropy, and a reward function R_a(s, s') is designed that maps each state to a scalar expressing its intrinsic desirability; the visual focuser feeds the current state s and the reward r back to the SARSA-based automatic viewpoint selection model, which selects the next viewpoint; a template is then chosen by roulette-wheel selection from the focusing template set, which is composed of templates with different shape characteristics, to generate another focused image X'_f of the original image X;
Step 8: repeat steps 4, 5 and 6, accumulating the searched key regions, to find all key regions that are effective for vehicle model recognition; when the output of the detection network has a confidence higher than a given threshold, or more key regions have been found than a given threshold, stop searching for key regions;
Step 9: mark all key regions on the original image X to obtain the salient vehicle component regions, so that the detection model outputs a highly credible classification result; the final vehicle classification is given by formula (2).
2. The vehicle model recognition method based on reinforcement learning according to claim 1, characterized in that in step 2, the viewpoint weight matrix φ in the focuser is associated with a given focus point (u, v) and with the distance r between (u, v) and another point (i, j) in the image; the element φ_ij of the viewpoint weight matrix can be regarded as the attention ratio of the image at the pixel at position (i, j) and is computed as a function of the distance r = √((u-i)² + (v-j)²), in which the adjustment parameters α and β determine the shape of the function and are corrected by training.
3. The vehicle model recognition method based on reinforcement learning according to claim 1, characterized in that in step 5, the input focused image X_f is read, and the built-in function ImageDataGenerator of the deep learning framework Keras is called to decode the JPEG file into an RGB pixel grid, convert the pixel grid into a floating-point tensor, and finally rescale the pixel values into the interval [0, 1]; the preprocessed focused image X_f is fed into the ResNet101 detection model to predict the label of the image.
4. The vehicle model recognition method based on reinforcement learning according to claim 1, characterized in that in step 7, the autonomous viewpoint selection model is established using the SARSA algorithm of reinforcement learning; the input of this model is the current state, i.e. the focused image at the current viewpoint together with the reward, and the output is the next state, i.e. the focused image obtained through the selected viewpoint; it can be expressed by the following formula:
Q(s, a) ← (1 - α) Q(s, a) + α [r + γ Q(s', a')] (5)
where the state s stands for the focused image X_f(t), the action a stands for the selection of a focus point, and α is the learning rate; at each step, the reinforcement learner selects an action a from the available action set A, the environment then exhibits a new state s' and also provides a reward r to the agent after action a has been executed, and γ is the decay factor, set to γ = e^(-2);
the reward value r of the current state is characterized by the change in information entropy; a reward function maps each state to a scalar expressing its intrinsic desirability; the reward function is a piecewise function R_a(s, s'): when the transition yields a correct detection result, i.e. the prediction ŷ equals the label y* of the training image, and the information entropy decreases, i.e. the entropy H'(X_f(t)) of the new focused image is lower than the entropy H'(X_f*) at the current time, the reward is 1; otherwise the reward is less than 1; for a positive reward, the currently selected region block is stored and superimposed onto the key region set, and the focused image set is updated.
5. The vehicle model recognition method based on reinforcement learning according to claim 4, characterized in that in step 7, at the current time t the state s is the focused image X_f(t); the state-action value of an action a in state s can be computed by the Q network F_Q:
Q(s, a) = F_Q(X_f(t); θ_Q) (7)
where Q(s, a) denotes the value, in state s, of the action a among the full action set A, and θ_Q denotes the parameters of the Q network F_Q;
the current state s, i.e. the focused image X_f(t), is recorded; the initial action a, i.e. the initially selected viewpoint, is executed; the reward r and the new state s', i.e. another focused image, are received; and, according to the current Q function, the action a' to be executed next time is randomly selected, the next viewpoint is chosen, and Q(s, a) is updated with formula (5).
CN201910589073.1A 2019-07-02 2019-07-02 Vehicle model recognition method based on reinforcement learning Pending CN110348355A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910589073.1A CN110348355A (en) 2019-07-02 2019-07-02 Vehicle model recognition method based on reinforcement learning


Publications (1)

Publication Number Publication Date
CN110348355A 2019-10-18

Family

ID=68178029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910589073.1A Pending CN110348355A (en) Vehicle model recognition method based on reinforcement learning

Country Status (1)

Country Link
CN (1) CN110348355A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106250812A (en) * 2016-07-15 2016-12-21 汤平 Vehicle model recognition method based on the fast R-CNN deep neural network
CN106295637A (en) * 2016-07-29 2017-01-04 电子科技大学 Vehicle identification method based on deep learning and reinforcement learning
CN108090443A (en) * 2017-12-15 2018-05-29 华南理工大学 Scene text detection method and system based on deep reinforcement learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
马技 et al., "Pedestrian detection method based on deep reinforcement learning with a visual attention mechanism", China Sciencepaper *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110826609A (en) * 2019-10-29 2020-02-21 Huazhong University of Science and Technology Two-stream feature fusion image recognition method based on reinforcement learning
CN110826609B (en) * 2019-10-29 2023-03-24 Huazhong University of Science and Technology Two-stream feature fusion image recognition method based on reinforcement learning
WO2022078216A1 (en) * 2020-10-14 2022-04-21 Huawei Cloud Computing Technologies Co., Ltd. Target recognition method and device

Similar Documents

Publication Publication Date Title
CN110874578B (en) Unmanned aerial vehicle visual angle vehicle recognition tracking method based on reinforcement learning
CN113158862B (en) Multitasking-based lightweight real-time face detection method
CN106845487A (en) A kind of licence plate recognition method end to end
CN104103033B (en) View synthesis method
CN108805016B (en) Head and shoulder area detection method and device
CN109902646A (en) A kind of gait recognition method based on long memory network in short-term
CN110210320A (en) The unmarked Attitude estimation method of multiple target based on depth convolutional neural networks
CN109800682A (en) Driver attributes' recognition methods and Related product
CN108154102A (en) A kind of traffic sign recognition method
Li et al. Pushing the “Speed Limit”: high-accuracy US traffic sign recognition with convolutional neural networks
CN103295016A (en) Behavior recognition method based on depth and RGB information and multi-scale and multidirectional rank and level characteristics
CN109543632A (en) A kind of deep layer network pedestrian detection method based on the guidance of shallow-layer Fusion Features
CN110110689A (en) A kind of pedestrian's recognition methods again
CN110490083A (en) A kind of pupil accurate detecting method based on fast human-eye semantic segmentation network
CN109360179A (en) A kind of image interfusion method, device and readable storage medium storing program for executing
CN110334656A (en) Multi-source Remote Sensing Images Clean water withdraw method and device based on information source probability weight
CN110348355A (en) Vehicle model recognition method based on reinforcement learning
CN112561973A (en) Method and device for training image registration model and electronic equipment
CN104143102A (en) Online image data processing method
CN111488940B (en) Navigation mark image intelligent classification method based on deep convolutional neural network
CN116485646A (en) Micro-attention-based light-weight image super-resolution reconstruction method and device
CN116994236A (en) Low-quality image license plate detection method based on deep neural network
Chen et al. Contrast limited adaptive histogram equalization for recognizing road marking at night based on YOLO models
CN112084936B (en) Face image preprocessing method, device, equipment and storage medium
CN109740554A (en) A kind of road edge line recognition methods and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191018