CN106371599A - Method and device for high-precision fingertip positioning in depth image - Google Patents


Info

Publication number
CN106371599A
CN106371599A
Authority
CN
China
Prior art keywords
depth map
edge gradient
features
convolutional neural networks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610810889.9A
Other languages
Chinese (zh)
Inventor
王贵锦
郭亨凯
陈醒濠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201610810889.9A priority Critical patent/CN106371599A/en
Publication of CN106371599A publication Critical patent/CN106371599A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for high-precision fingertip positioning in a depth image, aimed at improving positioning accuracy. The method comprises the following steps: S1, extracting an edge gradient map from the depth map; S2, extracting features from the depth map and the edge gradient map, each with a convolutional neural network; S3, fusing the two feature streams with the convolutional neural network and regressing the three-dimensional position of the fingertip.

Description

High-precision fingertip localization method and device for depth images
Technical field
The present invention relates to the technical field of image processing, and in particular to a high-precision fingertip localization method and device for depth images.
Background technology
Hand keypoint localization based on depth maps is crucial in human-computer interaction and has been a research hotspot in recent years. Among all hand keypoints, the fingertip is the most important: gestures such as clicking and sliding depend closely on it. At the same time, the great variability of gestures, severe self-occlusion, and the large imaging error at the fingertip in various depth sensors make fingertip position estimation very difficult, with position errors exceeding 1 centimeter.
Existing keypoint detection algorithms fall broadly into two classes:
(1) Generative methods: model-based methods with four main components: a model, a similarity measure between the model and the image, initial model parameters, and an optimization algorithm that finds the model parameters maximizing the similarity. Common optimizers include the joint-based iterative closest point algorithm and particle swarm optimization. Such methods are robust to occlusion, require no complex model-training process, and can find an accurate solution when the optimization starts near the optimum. However, they need strong priors and accurate initialization, are sensitive to local optima, and are generally too slow to meet real-time requirements, which severely limits their applicability.
(2) Discriminative methods: the keypoint position is predicted directly from image features, i.e., a machine-learning model that predicts keypoint parameters is trained directly on those features. Discriminative methods typically use regression, with two common regression targets: position offsets, which regress the offset from the current position to the target keypoint, and error offsets, which regress the residual between the currently predicted keypoint position and the true keypoint position. Common models include random forests and convolutional neural networks. Direct prediction is faster than model-based methods, needs no initialization, and yields more holistic predictions, but it requires more training data, easily overfits the training set, can jump along the time dimension, and is more sensitive to occlusion. Most such methods currently rely on the topological structure of the hand, localizing the fingertip progressively from the palm, which causes position-estimation error to accumulate at the fingertip.
Summary of the invention
In view of this, the present invention provides a high-precision fingertip localization method and device for depth images that can improve positioning accuracy.
In one aspect, an embodiment of the present invention proposes a high-precision fingertip localization method for depth images, comprising:
S1: extracting an edge gradient map from a depth map;
S2: extracting features from the depth map and the edge gradient map, each with a convolutional neural network;
S3: fusing the two feature streams with a convolutional neural network and regressing the three-dimensional position of the fingertip.
In another aspect, an embodiment of the present invention proposes a high-precision fingertip localization device for depth images, comprising:
a first extraction unit for extracting an edge gradient map from a depth map;
a second extraction unit for extracting features from the depth map and the edge gradient map, each with a convolutional neural network;
a regression unit for fusing the two feature streams with a convolutional neural network and regressing the three-dimensional position of the fingertip.
The high-precision fingertip localization method and device for depth images provided by embodiments of the present invention creatively exploit the edge gradient map of the depth map and propose a new feature-fusion algorithm. Compared with existing discriminative methods, there is no need to localize progressively from the palm to the fingertip, so the accumulation of position-estimation error at the fingertip is avoided. Positioning accuracy is high, with a spatial position error below 1 centimeter; the whole pipeline runs fast enough for real time on a single-core CPU; and the algorithm is robust, adapts to different environments, is simple to implement, and is easy to commercialize.
Brief description of the drawings
Fig. 1 is a flow diagram of an embodiment of the high-precision fingertip localization method for depth images of the present invention;
Fig. 2 is a flow diagram of an embodiment of step S1 in Fig. 1;
Fig. 3 is a partial flow diagram of another embodiment of the high-precision fingertip localization method for depth images of the present invention;
Fig. 4 is a structural diagram of an embodiment of the high-precision fingertip localization device for depth images of the present invention.
Detailed description of embodiments
To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the scope of protection of the present invention.
Referring to Fig. 1, the present embodiment discloses a high-precision fingertip localization method for depth images, comprising:
S1: extracting an edge gradient map from a depth map;
In the present embodiment, the flow of extracting the edge gradient map from the depth map is shown in Fig. 2. The depth map, or features extracted from it, is fed into a machine-learning model that predicts the edge gradient information at each position. The features may be the pixel values of the original image or other local machine-vision features, such as pixel differences of random point pairs in the original image. Suitable machine-learning models include random forests and convolutional neural networks; these models first need to be trained on a depth-map dataset with annotated edge information, minimizing the edge-estimation error. By estimating the edge at every image position, a gradient map of the whole image is obtained.
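As a concrete illustration, the simplest stand-in for the learned edge estimator is a plain finite-difference gradient magnitude computed directly from the depth map; the patent's learned models (random forest or convolutional neural network) refine this idea, so the function below is only a hedged sketch of the input/output contract, not the trained estimator itself.

```python
import numpy as np

def edge_gradient_map(depth):
    """Gradient-magnitude sketch of the edge gradient map of a depth image."""
    # Finite-difference gradients along the row and column axes.
    gy, gx = np.gradient(depth.astype(np.float64))
    # The per-pixel gradient magnitude highlights depth discontinuities.
    return np.sqrt(gx ** 2 + gy ** 2)

# Toy 4x4 depth map with a vertical depth discontinuity between columns 1 and 2.
depth = np.array([[10, 10, 50, 50]] * 4, dtype=np.float64)
grad = edge_gradient_map(depth)  # strongest response around the discontinuity
```

A learned model would replace the fixed finite-difference operator with predictions trained against annotated edges, but it consumes and produces the same arrays.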
S2: extracting features from the depth map and the edge gradient map, each with a convolutional neural network;
S3: fusing the two feature streams with a convolutional neural network and regressing the three-dimensional position of the fingertip.
The flow of steps S2 and S3 is shown in Fig. 3 (in Fig. 3, each numbered rectangle represents one layer of a convolutional neural network, and the number in the rectangle gives the parameters of that layer). First, the pixel values of the depth map and the gradient map are normalized to the range -1 to 1. Features are then extracted with a two-stream convolutional neural network whose structure consists mainly of alternating convolutional, downsampling, and nonlinear layers. To avoid overfitting, the parameters of the two streams are shared. Finally, using a slow-fusion technique, the two feature streams are merged by further convolution, downsampling, and nonlinear operations of the convolutional neural network, and the three-dimensional spatial coordinates of the fingertip are regressed by fully connected layers. In our experiments we compared different fusion schemes, including early fusion (the two input maps are fed directly into one convolutional neural network for regression), late fusion (the two feature streams are merged only at the fully connected layers before regression), and augmented fusion (the edge map is superimposed directly on the original depth map and regressed as a single image), and found that slow fusion works best.
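The fusion schemes compared above can be sketched with a toy stand-in for one convolution/downsampling/nonlinearity stage; the real layers, kernel sizes, and shared parameters come from Fig. 3 and are not reproduced here, so this only illustrates where in the pipeline the two streams merge.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage(x):
    # Toy stand-in for one conv + downsampling + nonlinearity stage:
    # 2x2 average pooling followed by a ReLU.
    h, w = x.shape
    pooled = x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.maximum(pooled, 0)

depth_map = rng.standard_normal((8, 8))   # normalized depth map
edge_map = rng.standard_normal((8, 8))    # normalized edge gradient map

# Early fusion: merge the inputs, then run the whole network.
early = stage(stage((depth_map + edge_map) / 2))

# Late fusion: run each stream fully, merge only at the end.
late = (stage(stage(depth_map)) + stage(stage(edge_map))) / 2

# Slow fusion (best in the patent's experiments): one separate stage
# per stream, merge, then continue with further shared stages.
slow = stage((stage(depth_map) + stage(edge_map)) / 2)
```

Because the ReLU is nonlinear, the three schemes produce genuinely different features even in this toy setting; only the merge point changes, not the total depth of the network.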
It should be noted that the feature-extraction network and the fingertip-localization network can be trained with the back-propagation algorithm based on stochastic gradient descent. The two sub-networks are trained jointly on a depth-map dataset annotated with three-dimensional fingertip positions, minimizing the fingertip localization error. Edge maps must also be extracted from the depth maps during training, to keep training and testing consistent. In our evaluation, the fingertip position error of this method is 9.9 millimeters, better than all results in the current literature.
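The joint training described above is ordinary mini-batch SGD with back-propagation. As a minimal, hedged analogue, the snippet below trains a linear regressor (standing in for the fully connected fingertip-regression head) on synthetic data; the actual network, dataset, and hyperparameters are those of the patent, not these.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: features X -> 3-D fingertip coordinates Y.
X = rng.standard_normal((200, 5))
W_true = rng.standard_normal((5, 3))
Y = X @ W_true

W = np.zeros((5, 3))                      # regressor weights to learn
lr = 0.1
for _ in range(500):
    batch = rng.integers(0, len(X), size=16)      # random mini-batch
    residual = X[batch] @ W - Y[batch]            # prediction error
    W -= lr * X[batch].T @ residual / len(batch)  # SGD step on squared loss

mse = float(np.mean((X @ W - Y) ** 2))    # fitting error after training
```

In the patent's setting the gradient of the localization loss is propagated through both sub-networks at once, which is what "joint training" buys over training the edge estimator and the regressor separately.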
The high-precision fingertip localization method for depth images provided by this embodiment of the present invention creatively exploits the edge gradient map of the depth map and proposes a new feature-fusion algorithm. Compared with existing discriminative methods, there is no need to localize progressively from the palm to the fingertip, so the accumulation of position-estimation error at the fingertip is avoided. Positioning accuracy is high, with a spatial position error below 1 centimeter; the whole pipeline runs in real time on a single-core CPU; and the algorithm is robust, adapts to different environments, is simple to implement, and is easy to commercialize.
Referring to Fig. 4, the present embodiment discloses a high-precision fingertip localization device for depth images, comprising:
a first extraction unit 1 for extracting an edge gradient map from a depth map;
In a particular application, the first extraction unit 1 can be used to:
feed the depth map, or features extracted from the depth map, into a preset machine-learning model that predicts the edge gradient information at each position, thus obtaining the edge gradient map. The machine-learning model may be a random forest or a convolutional neural network.
a second extraction unit 2 for extracting features from the depth map and the edge gradient map, each with a convolutional neural network;
In the present embodiment, the second extraction unit 2 can be used to:
normalize the pixel values of the depth map and the edge gradient map to the range -1 to 1;
extract features from the normalized depth map and edge gradient map with identical two-stream convolutional neural networks, whose structure consists mainly of alternating convolutional, downsampling, and nonlinear layers.
a regression unit 3 for fusing the two feature streams with a convolutional neural network and regressing the three-dimensional position of the fingertip.
The regression unit 3 can be used to:
fuse the two feature streams with a convolutional neural network using a slow-fusion technique, and regress the three-dimensional spatial coordinates of the fingertip with fully connected layers.
The high-precision fingertip localization device for depth images provided by this embodiment of the present invention creatively exploits the edge gradient map of the depth map and proposes a new feature-fusion algorithm. Compared with existing discriminative methods, there is no need to localize progressively from the palm to the fingertip, so the accumulation of position-estimation error at the fingertip is avoided. Positioning accuracy is high, with a spatial position error below 1 centimeter; the whole pipeline runs in real time on a single-core CPU; and the algorithm is robust, adapts to different environments, is simple to implement, and is easy to commercialize.
Those skilled in the art will appreciate that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data-processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data-processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data-processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data-processing device, so that a series of operational steps is executed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relation or order between these entities or operations. Moreover, the terms "include", "comprise", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes that element. Orientation or position terms such as "upper" and "lower" are based on the orientations or positions shown in the drawings; they are used only for convenience and simplicity of description and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the invention. Unless otherwise expressly specified and limited, the terms "mount", "couple", and "connect" are to be understood broadly: a connection may, for example, be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediary; or internal to two elements. Those of ordinary skill in the art can understand the specific meanings of these terms in the present invention according to the circumstances.
Numerous specific details are set forth in the description of the present invention. It will be understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description. Similarly, it should be appreciated that, to streamline the disclosure and aid understanding of one or more of the various inventive aspects, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in the above description of exemplary embodiments. The disclosed method should not, however, be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. The claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the invention. It should be noted that the embodiments of the present application and the features therein may be combined with one another where no conflict arises. The invention is not limited to any single aspect, nor to any single embodiment, nor to any combination and/or permutation of these aspects and/or embodiments. Each aspect and/or embodiment of the invention may be used alone or in combination with one or more other aspects and/or embodiments.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, without such modifications or replacements departing in essence from the scope of the technical solutions of the embodiments of the present invention; all of them should be covered by the claims and description of the present invention.

Claims (10)

1. A high-precision fingertip localization method for depth images, characterized by comprising:
S1: extracting an edge gradient map from a depth map;
S2: extracting features from the depth map and the edge gradient map, each with a convolutional neural network;
S3: fusing the two feature streams with a convolutional neural network and regressing the three-dimensional position of the fingertip.
2. The method according to claim 1, characterized in that said S1 comprises:
feeding the depth map, or features extracted from the depth map, into a preset machine-learning model that predicts the edge gradient information at each position, thus obtaining the edge gradient map.
3. The method according to claim 2, characterized in that the machine-learning model includes a random forest or a convolutional neural network.
4. The method according to claim 1, characterized in that said S2 comprises:
normalizing the pixel values of the depth map and the edge gradient map to the range -1 to 1; and
extracting features from the normalized depth map and edge gradient map with identical two-stream convolutional neural networks, whose structure consists mainly of alternating convolutional, downsampling, and nonlinear layers.
5. The method according to claim 1, characterized in that said S3 comprises:
fusing the two feature streams with a convolutional neural network using a slow-fusion technique, and regressing the three-dimensional spatial coordinates of the fingertip with fully connected layers.
6. A high-precision fingertip localization device for depth images, characterized by comprising:
a first extraction unit for extracting an edge gradient map from a depth map;
a second extraction unit for extracting features from the depth map and the edge gradient map, each with a convolutional neural network; and
a regression unit for fusing the two feature streams with a convolutional neural network and regressing the three-dimensional position of the fingertip.
7. The device according to claim 6, characterized in that the first extraction unit is configured to:
feed the depth map, or features extracted from the depth map, into a preset machine-learning model that predicts the edge gradient information at each position, thus obtaining the edge gradient map.
8. The device according to claim 7, characterized in that the machine-learning model includes a random forest or a convolutional neural network.
9. The device according to claim 6, characterized in that the second extraction unit is configured to:
normalize the pixel values of the depth map and the edge gradient map to the range -1 to 1; and
extract features from the normalized depth map and edge gradient map with identical two-stream convolutional neural networks, whose structure consists mainly of alternating convolutional, downsampling, and nonlinear layers.
10. The device according to claim 6, characterized in that the regression unit is configured to:
fuse the two feature streams with a convolutional neural network using a slow-fusion technique, and regress the three-dimensional spatial coordinates of the fingertip with fully connected layers.
CN201610810889.9A 2016-09-08 2016-09-08 Method and device for high-precision fingertip positioning in depth image Pending CN106371599A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610810889.9A CN106371599A (en) 2016-09-08 2016-09-08 Method and device for high-precision fingertip positioning in depth image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610810889.9A CN106371599A (en) 2016-09-08 2016-09-08 Method and device for high-precision fingertip positioning in depth image

Publications (1)

Publication Number Publication Date
CN106371599A (en) 2017-02-01

Family

ID=57900213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610810889.9A Pending CN106371599A (en) 2016-09-08 2016-09-08 Method and device for high-precision fingertip positioning in depth image

Country Status (1)

Country Link
CN (1) CN106371599A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105160310A (en) * 2015-08-25 2015-12-16 西安电子科技大学 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN105787439A (en) * 2016-02-04 2016-07-20 广州新节奏智能科技有限公司 Depth image human body joint positioning method based on convolution nerve network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Cai Juan: "A Preliminary Exploration of Gesture Recognition Based on Convolutional Neural Networks", 《计算机***应用》 *
Fei Jianchao et al.: "Gradient-Based Multi-Input Convolutional Neural Network", Opto-Electronic Engineering (《光电工程》) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107240111A (en) * 2017-06-14 2017-10-10 郑州天迈科技股份有限公司 Edge connection segmentation passenger flow statistical method
CN107240111B (en) * 2017-06-14 2021-03-26 郑州天迈科技股份有限公司 Edge communication segmentation passenger flow statistical method
CN107582001A (en) * 2017-10-20 2018-01-16 珠海格力电器股份有限公司 Dish washing machine and control method, device and system thereof
CN107582001B (en) * 2017-10-20 2020-08-11 珠海格力电器股份有限公司 Dish washing machine and control method, device and system thereof
CN108389172A (en) * 2018-03-21 2018-08-10 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN108389172B (en) * 2018-03-21 2020-12-18 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN110738677A (en) * 2019-09-20 2020-01-31 清华大学 Full-definition imaging method and device for camera and electronic equipment
CN113128290A (en) * 2019-12-31 2021-07-16 炬星科技(深圳)有限公司 Moving object tracking method, system, device and computer readable storage medium
WO2022237055A1 (en) * 2021-05-10 2022-11-17 青岛小鸟看看科技有限公司 Virtual keyboard interaction method and system

Similar Documents

Publication Publication Date Title
CN106371599A (en) Method and device for high-precision fingertip positioning in depth image
Braun et al. Improving progress monitoring by fusing point clouds, semantic data and computer vision
Zhang et al. A critical review of vision-based occupational health and safety monitoring of construction site workers
CN109345596B (en) Multi-sensor calibration method, device, computer equipment, medium and vehicle
JP6940047B2 (en) Computer-based rebar measurement and inspection system and rebar measurement and inspection method
Liao et al. Occlusion gesture recognition based on improved SSD
Bae et al. High-precision vision-based mobile augmented reality system for context-aware architectural, engineering, construction and facility management (AEC/FM) applications
Hou et al. Detecting structural components of building engineering based on deep-learning method
Svarm et al. Accurate localization and pose estimation for large 3d models
Liu et al. YOLO-extract: Improved YOLOv5 for aircraft object detection in remote sensing images
CN110287276A (en) High-precision map updating method, device and storage medium
CN103383731B (en) A kind of projection interactive method based on finger tip location, system and the equipment of calculating
Ding et al. Crack detection and quantification for concrete structures using UAV and transformer
CN105856243A (en) Movable intelligent robot
CN105043396A (en) Method and system for indoor map self-establishment of mobile robot
CN109325538A (en) Object detection method, device and computer readable storage medium
CN103605978A (en) Urban illegal building identification system and method based on three-dimensional live-action data
CN101794349A (en) Experimental system and method for augmented reality of teleoperation of robot
JP2021119507A (en) Traffic lane determination method, traffic lane positioning accuracy evaluation method, traffic lane determination apparatus, traffic lane positioning accuracy evaluation apparatus, electronic device, computer readable storage medium, and program
CN104781849A (en) Fast initialization for monocular visual simultaneous localization and mapping (SLAM)
CN112258567A (en) Visual positioning method and device for object grabbing point, storage medium and electronic equipment
CN110852243B (en) Road intersection detection method and device based on improved YOLOv3
KR20220004009A (en) Key point detection method, apparatus, electronic device and storage medium
CN104570077B (en) Method for extracting offset domain common imaging gathers based on reverse time migration
CN106447698B (en) A kind of more pedestrian tracting methods and system based on range sensor

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170201