CN113591936A - Vehicle attitude estimation method, terminal device and storage medium - Google Patents

Vehicle attitude estimation method, terminal device and storage medium Download PDF

Info

Publication number
CN113591936A
Authority
CN
China
Prior art keywords
vehicle
attitude estimation
vehicle attitude
parameters
representing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110779118.9A
Other languages
Chinese (zh)
Other versions
CN113591936B (en)
Inventor
陈德意
吴婷婷
赵建强
高志鹏
张辉极
杜新胜
李国庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Meiya Pico Information Co Ltd
Original Assignee
Xiamen Meiya Pico Information Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Meiya Pico Information Co Ltd filed Critical Xiamen Meiya Pico Information Co Ltd
Priority to CN202110779118.9A priority Critical patent/CN113591936B/en
Publication of CN113591936A publication Critical patent/CN113591936A/en
Application granted granted Critical
Publication of CN113591936B publication Critical patent/CN113591936B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a vehicle attitude estimation method, a terminal device, and a storage medium. The method comprises the following steps: S1: collecting images containing vehicles, labeling the corresponding posture of each vehicle and the bounding box of each vehicle target, and forming a training set from the labeled images; S2: constructing a vehicle attitude estimation model based on the YOLOv2 network and training it on the training set; S3: estimating the vehicle attitude and the vehicle target with the trained model. The method can be integrated into the backbone network of the detection task of an intelligent traffic system, generalizes well, and requires no additional network structure dedicated to attitude estimation: vehicle attitude estimation is achieved merely by modifying the input and output of the detector. This makes the method practical in real scenes and reduces the consumption of hardware facilities.

Description

Vehicle attitude estimation method, terminal device and storage medium
Technical Field
The present invention relates to the field of vehicle detection, and in particular, to a vehicle attitude estimation method, a terminal device, and a storage medium.
Background
With the development of intelligent traffic systems and the rapid growth in the number of vehicles, license plate recognition, vehicle type detection, and similar tasks have become major components of intelligent traffic systems, and monocular-camera-based object detection algorithms are now widely applied in vehicle detectors. However, current vehicle detection systems still have limitations: they can detect and recognize a vehicle, but cannot recognize its specific posture. The most widely used datasets, Pascal Visual Object Classes (VOC) and Cityscapes, currently provide no annotation data for vehicle attitude. Attitude information helps determine the specific orientation of a vehicle and the position of its license plate, enabling better license plate recognition and vehicle type classification, so that an intelligent traffic system can identify the brand and number of a vehicle more accurately, which greatly facilitates traffic supervision.
Disclosure of Invention
In order to solve the above problems, the present invention provides a vehicle attitude estimation method, a terminal device, and a storage medium.
The specific scheme is as follows:
a vehicle attitude estimation method, comprising the steps of:
S1: collecting images containing vehicles, labeling the corresponding posture of each vehicle in the image and the bounding box of each vehicle target, and forming a training set from the labeled images;
S2: constructing a vehicle attitude estimation model based on the YOLOv2 network, and training the vehicle attitude estimation model on the training set;
the vehicle attitude estimation model extracts a multi-dimensional feature map from the input image through multilayer convolution, leaky ReLU nonlinear activation, max pooling, and batch normalization; the input image is divided into a w0 × h0 grid, where w0 and h0 represent the number of columns and rows of the grid, respectively; the dimension of the multi-dimensional feature map is w0 × h0 × (nc + np) × N, where nc represents the number of poses to be predicted, np represents the number of parameters for converting anchor boxes into bounding boxes, and N represents the number of anchor boxes assigned to each grid cell;
the multi-dimensional feature map contains the probabilities of the poses to be predicted and the parameters of the bounding boxes; the bounding box of the vehicle target is obtained from the bounding-box parameters and the anchor-box parameters;
S3: estimating the vehicle attitude and the vehicle target through the trained vehicle attitude estimation model.
Further, the resolution of the input layer in the YOLOv2 network is set to 768 × 384 or 1024 × 512.
Further, the postures include frontward, rearward, leftward and rightward.
Further, the bounding box of the vehicle target is obtained from the bounding-box parameters and the anchor-box parameters as follows:
bx = Ax + Δx(I, ω)
by = Ay + Δy(I, ω)
bw = σw(I, ω) · Aw
bh = σh(I, ω) · Ah
where bx and by represent the x-axis and y-axis coordinates of the upper-left corner of the bounding box, respectively; bw and bh represent the width and height of the bounding box; Ax and Ay represent the x-axis and y-axis coordinates of the upper-left corner of the anchor box; Aw and Ah represent the width and height of the anchor box; Δx(I, ω), Δy(I, ω), σw(I, ω), and σh(I, ω) are the bounding-box parameters, representing the x-axis offset, y-axis offset, width scaling factor, and height scaling factor, respectively; I denotes the input image, and ω denotes the weight coefficients of the network.
A vehicle attitude estimation terminal device comprises a processor, a memory, and a computer program stored in the memory and executable on the processor; the processor implements the steps of the above method according to the embodiments of the present invention when executing the computer program.
A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the method as described above for an embodiment of the invention.
By adopting the above technical scheme, the method can be integrated into the backbone network of the detection task of an intelligent traffic system, generalizes well, and requires no additional network structure dedicated to vehicle attitude estimation: attitude estimation is achieved merely by modifying the input and output of the detector. This makes the method practical in real scenes and reduces the consumption of hardware facilities.
Drawings
Fig. 1 is a flowchart illustrating a first embodiment of the present invention.
Detailed Description
To further illustrate the various embodiments, the invention provides the accompanying drawings. The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the embodiments. Those skilled in the art will appreciate still other possible embodiments and advantages of the present invention with reference to these figures.
The invention will now be further described with reference to the accompanying drawings and detailed description.
The first embodiment is as follows:
an embodiment of the present invention provides a vehicle attitude estimation method, as shown in fig. 1, the method includes the following steps:
S1: acquiring images containing vehicles, labeling the corresponding posture of each vehicle in the image and the bounding box of each vehicle target, and forming a training set from the labeled images.
S2: constructing a vehicle attitude estimation model based on the YOLOv2 network, and training the vehicle attitude estimation model on the training set.
Because feature maps produced by the model from inputs at different resolutions carry different amounts of information, and in order to obtain larger, more detailed feature maps, this embodiment sets the resolution of the input layer in the YOLOv2 network to 768 × 384 and 1024 × 512, and correspondingly constructs two vehicle attitude estimation models.
The vehicle attitude estimation model extracts a multi-dimensional feature map from an input image through multilayer convolution, leaky ReLU nonlinear activation, maximum pooling and batch normalization.
The input image is divided into a w0 × h0 grid, where w0 and h0 represent the number of columns and rows of the grid, respectively; for example, for an input image of 1024 × 512 resolution, w0 = 32 and h0 = 16.
The dimension of the multi-dimensional feature map is w0 × h0 × (nc + np) × N, where nc denotes the number of poses to be predicted, np denotes the number of parameters for converting an anchor box (Anchor) into a bounding box, and N denotes the number of anchor boxes assigned to each grid cell.
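The relationship between input resolution, grid size, and feature-map dimensions described above can be sketched as follows. The 32× downsampling factor is inferred from the example in the text (1024 × 512 → w0 = 32, h0 = 16), and the parameter counts (4 poses, 4 box parameters, 5 anchors per cell) are illustrative assumptions:

```python
# Sketch of the feature-map dimensions described in the text (assumed values).
# YOLOv2's backbone downsamples the input by a factor of 32, which matches the
# example given: a 1024 x 512 input yields a 32 x 16 grid.

DOWNSAMPLE = 32  # total stride of the convolution / max-pooling stack

def feature_map_shape(width, height, n_c=4, n_p=4, n_anchors=5):
    """Return (w0, h0, channels) for an input of the given resolution.

    n_c: number of poses to predict (forward/backward/left/right -> 4)
    n_p: parameters converting an anchor box to a bounding box
         (x offset, y offset, width scale, height scale -> 4)
    n_anchors: anchor boxes per grid cell (assumed value, not in the text)
    """
    w0, h0 = width // DOWNSAMPLE, height // DOWNSAMPLE
    return w0, h0, (n_c + n_p) * n_anchors

print(feature_map_shape(1024, 512))  # -> (32, 16, 40)
print(feature_map_shape(768, 384))   # -> (24, 12, 40)
```

With these assumed values, both input resolutions yield the same channel depth; only the spatial grid changes.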
The multi-dimensional feature map output by the vehicle attitude estimation model contains the probability Pk(I, ω) of each pose to be predicted and the bounding-box parameters Δx(I, ω), Δy(I, ω), σw(I, ω), and σh(I, ω). The bounding box of the vehicle target is obtained from the bounding-box parameters and the anchor-box parameters:
bx = Ax + Δx(I, ω)
by = Ay + Δy(I, ω)
bw = σw(I, ω) · Aw
bh = σh(I, ω) · Ah
where bx and by represent the x-axis and y-axis coordinates of the upper-left corner of the bounding box, respectively; bw and bh represent the width and height of the bounding box; Ax and Ay represent the x-axis and y-axis coordinates of the upper-left corner of the anchor box; Aw and Ah represent the width and height of the anchor box; Δx(I, ω), Δy(I, ω), σw(I, ω), and σh(I, ω) are the bounding-box parameters, representing the x-axis offset, y-axis offset, width scaling factor, and height scaling factor, respectively; I denotes the input image, and ω denotes the weight coefficients of the network.
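A minimal sketch of the anchor-to-bounding-box conversion above. The function and type names are illustrative; the network outputs Δx, Δy, σw, σh are taken as already-computed numbers here:

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float  # x coordinate of the upper-left corner
    y: float  # y coordinate of the upper-left corner
    w: float  # width
    h: float  # height

def decode(anchor: Box, dx: float, dy: float, sw: float, sh: float) -> Box:
    """Convert an anchor box into a bounding box using the four predicted
    parameters: x/y offsets (dx, dy) and width/height scale factors (sw, sh).
    Implements: bx = Ax + Δx, by = Ay + Δy, bw = σw * Aw, bh = σh * Ah.
    """
    return Box(anchor.x + dx, anchor.y + dy, sw * anchor.w, sh * anchor.h)

# Example: an anchor at (100, 50) of size 64 x 32, shifted and scaled.
b = decode(Box(100, 50, 64, 32), dx=4.0, dy=-2.0, sw=1.5, sh=0.5)
print(b)  # Box(x=104.0, y=48.0, w=96.0, h=16.0)
```

The offsets are additive and the scale factors multiplicative, matching the four equations above term by term.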
The loss function loss of the vehicle attitude estimation model is:

[loss function formula; rendered as an image in the original document]

where Num represents the number of anchor boxes used; the indicator 1_ij^obj denotes that the j-th anchor box in the i-th grid cell is responsible for predicting the object's bounding box; xi and yi represent the x-axis and y-axis coordinates of the center of the ground-truth bounding box; wi and hi represent the width and height of the ground-truth bounding box; x̂j and ŷj represent the x-axis and y-axis coordinates of the center predicted by the j-th anchor box; ŵj and ĥj represent the width and height predicted by the j-th anchor box; ci represents the class vector of the ground-truth label, and ĉj represents the class vector predicted by the j-th anchor box; λcoor, λobj, and λnoobj are the weight hyperparameters in the loss function for the bounding-box coordinate prediction error, the object attitude prediction error, and the background prediction error, respectively, and are set to 1, 5, and 1 during training.
Experimental Results
Since images captured by real-scene surveillance have high resolution, this embodiment verifies the effect of input images at the two resolutions (768 × 384 and 1024 × 512) on performance and speed, as shown in Table 1.
TABLE 1

Model       | mAP    | Forward | Backward | Left   | Right  | FPS
------------|--------|---------|----------|--------|--------|----
YOLOv2      | 28.75% | 46.41%  | 42.89%   | 13.63% | 11.86% | 50
768 × 384   | 31.08% | 47.49%  | 45.15%   | 15.85% | 15.82% | 45
1024 × 512  | 39.46% | 60.65%  | 56.30%   | 22.22% | 18.65% | 29
The experimental results in Table 1 show that the inference time still meets the real-time requirement even when high-resolution images are used. They also show that detector performance improves as the input resolution increases: the mAP of the 1024 × 512 model is 10.71 percentage points higher than that of the default YOLOv2 model and 8.38 percentage points higher than that of the 768 × 384 model.
The embodiment of the invention provides a vehicle attitude estimation method that can be embedded into an existing detection framework and estimates vehicle attitude in real time without adding parameters to the model. The acquired attitude can be used to rectify the vehicle pose and to assist license plate recognition and vehicle recognition. Based on the YOLOv2 network, the embodiment adapts to the attitude recognition task for dense vehicles in intelligent monitoring scenes through a slight modification of the network structure. Experiments show that the method meets the real-time requirement for vehicle attitude estimation, reaching 29 FPS with an mAP of 39.46%, which is of good practical significance for vehicle attitude estimation and correction. Meanwhile, the method applies to various detectors (one-stage or two-stage), can be embedded directly into the detection task, and adds no extra model parameters, so it has good universality.
Example two:
the invention further provides a vehicle attitude estimation terminal device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor executes the computer program to realize the steps of the method embodiment of the first embodiment of the invention.
Further, as an executable solution, the vehicle attitude estimation terminal device may be a desktop computer, a notebook, a palm computer, a cloud server, or another computing device. The vehicle attitude estimation terminal device may include, but is not limited to, a processor and a memory. It will be understood by those skilled in the art that the above-described structure is only an example of the vehicle attitude estimation terminal device and does not constitute a limitation on it; the device may include more or fewer components than described above, or combine certain components, or use different components. For example, the vehicle attitude estimation terminal device may further include input-output devices, network access devices, a bus, and the like, which is not limited by the embodiment of the present invention.
Further, as an executable solution, the Processor may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, a discrete hardware component, and the like. The general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like, and the processor is a control center of the vehicle attitude estimation terminal device, and various interfaces and lines are used to connect various parts of the entire vehicle attitude estimation terminal device.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the vehicle attitude estimation terminal device by running or executing the computer program and/or modules stored in the memory and calling data stored in the memory. The memory may mainly comprise a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to use of the device. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other non-volatile solid-state storage device.
The invention also provides a computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the above-mentioned method of an embodiment of the invention.
If the integrated module/unit of the vehicle attitude estimation terminal device is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), a software distribution medium, and the like.
While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (6)

1. A vehicle attitude estimation method, characterized by comprising the steps of:
S1: collecting images containing vehicles, labeling the corresponding posture of each vehicle in the image and the bounding box of each vehicle target, and forming a training set from the labeled images;
S2: constructing a vehicle attitude estimation model based on the YOLOv2 network, and training the vehicle attitude estimation model on the training set;
the vehicle attitude estimation model extracts a multi-dimensional feature map from the input image through multilayer convolution, leaky ReLU nonlinear activation, max pooling, and batch normalization; the input image is divided into a w0 × h0 grid, where w0 and h0 represent the number of columns and rows of the grid, respectively; the dimension of the multi-dimensional feature map is w0 × h0 × (nc + np) × N, where nc represents the number of poses to be predicted, np represents the number of parameters for converting anchor boxes into bounding boxes, and N represents the number of anchor boxes assigned to each grid cell;
the multi-dimensional feature map contains the probabilities of the poses to be predicted and the parameters of the bounding boxes; the bounding box of the vehicle target is obtained from the bounding-box parameters and the anchor-box parameters;
S3: estimating the vehicle attitude and the vehicle target through the trained vehicle attitude estimation model.
2. The vehicle attitude estimation method according to claim 1, characterized in that: the resolution of the input layer in the YOLOv2 network is set to 768 × 384 or 1024 × 512.
3. The vehicle attitude estimation method according to claim 1, characterized in that: the postures include forward, backward, left, and right.
4. The vehicle attitude estimation method according to claim 1, characterized in that: the bounding box of the vehicle target is obtained from the bounding-box parameters and the anchor-box parameters as follows:
bx = Ax + Δx(I, ω)
by = Ay + Δy(I, ω)
bw = σw(I, ω) · Aw
bh = σh(I, ω) · Ah
where bx and by represent the x-axis and y-axis coordinates of the upper-left corner of the bounding box, respectively; bw and bh represent the width and height of the bounding box; Ax and Ay represent the x-axis and y-axis coordinates of the upper-left corner of the anchor box; Aw and Ah represent the width and height of the anchor box; Δx(I, ω), Δy(I, ω), σw(I, ω), and σh(I, ω) are the bounding-box parameters, representing the x-axis offset, y-axis offset, width scaling factor, and height scaling factor, respectively; I denotes the input image, and ω denotes the weight coefficients of the network.
5. A vehicle attitude estimation terminal device characterized in that: comprising a processor, a memory and a computer program stored in the memory and running on the processor, the processor implementing the steps of the method according to any of claims 1 to 4 when executing the computer program.
6. A computer-readable storage medium storing a computer program, characterized in that: the computer program when executed by a processor implementing the steps of the method as claimed in any one of claims 1 to 4.
CN202110779118.9A 2021-07-09 2021-07-09 Vehicle attitude estimation method, terminal device and storage medium Active CN113591936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110779118.9A CN113591936B (en) 2021-07-09 2021-07-09 Vehicle attitude estimation method, terminal device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110779118.9A CN113591936B (en) 2021-07-09 2021-07-09 Vehicle attitude estimation method, terminal device and storage medium

Publications (2)

Publication Number Publication Date
CN113591936A true CN113591936A (en) 2021-11-02
CN113591936B CN113591936B (en) 2022-09-09

Family

ID=78246780

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110779118.9A Active CN113591936B (en) 2021-07-09 2021-07-09 Vehicle attitude estimation method, terminal device and storage medium

Country Status (1)

Country Link
CN (1) CN113591936B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114842085A (en) * 2022-07-05 2022-08-02 松立控股集团股份有限公司 Full-scene vehicle attitude estimation method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875902A (en) * 2017-12-04 2018-11-23 北京旷视科技有限公司 Neural network training method and device, vehicle detection estimation method and device, storage medium
CN110443208A (en) * 2019-08-08 2019-11-12 南京工业大学 A kind of vehicle target detection method, system and equipment based on YOLOv2
CN110647852A (en) * 2019-09-27 2020-01-03 集美大学 Traffic flow statistical method, terminal equipment and storage medium
CN111174782A (en) * 2019-12-31 2020-05-19 智车优行科技(上海)有限公司 Pose estimation method and device, electronic equipment and computer readable storage medium
JP2020115094A (en) * 2019-01-17 2020-07-30 株式会社トヨタマップマスター Posture estimation device, posture estimation method, posture estimation program, and recording medium



Also Published As

Publication number Publication date
CN113591936B (en) 2022-09-09

Similar Documents

Publication Publication Date Title
US10789717B2 (en) Apparatus and method of learning pose of moving object
CN111144242B (en) Three-dimensional target detection method, device and terminal
US20190095212A1 (en) Neural network system and operating method of neural network system
CN109543641B (en) Multi-target duplicate removal method for real-time video, terminal equipment and storage medium
CN111860398B (en) Remote sensing image target detection method and system and terminal equipment
CN110910422A (en) Target tracking method and device, electronic equipment and readable storage medium
CN109816694B (en) Target tracking method and device and electronic equipment
US11074716B2 (en) Image processing for object detection
US20210097290A1 (en) Video retrieval in feature descriptor domain in an artificial intelligence semiconductor solution
JP2013196454A (en) Image processor, image processing method and image processing program
CN114491399A (en) Data processing method and device, terminal equipment and computer readable storage medium
CN110991310A (en) Portrait detection method, portrait detection device, electronic equipment and computer readable medium
CN113591936B (en) Vehicle attitude estimation method, terminal device and storage medium
CN112348116A (en) Target detection method and device using spatial context and computer equipment
CN113112542A (en) Visual positioning method and device, electronic equipment and storage medium
CN112991349B (en) Image processing method, device, equipment and storage medium
CN110910375A (en) Detection model training method, device, equipment and medium based on semi-supervised learning
CN112541902A (en) Similar area searching method, similar area searching device, electronic equipment and medium
Gong et al. FastRoadSeg: Fast monocular road segmentation network
CN116452631A (en) Multi-target tracking method, terminal equipment and storage medium
CN112508839A (en) Object detection system and object detection method thereof
CN109447943B (en) Target detection method, system and terminal equipment
CN110807463A (en) Image segmentation method and device, computer equipment and storage medium
CN110222576B (en) Boxing action recognition method and device and electronic equipment
US20220270351A1 (en) Image recognition evaluation program, image recognition evaluation method, evaluation apparatus, and evaluation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant