CN108335308A - Automatic orange detection method, system and intelligent robot retail terminal - Google Patents

Automatic orange detection method, system and intelligent robot retail terminal

Info

Publication number
CN108335308A
Authority
CN
China
Prior art keywords
image
orange
depth
rgb
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710048267.1A
Other languages
Chinese (zh)
Inventor
阮仕涛
朱勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Prafly Technology Co Ltd
Original Assignee
Shenzhen Prafly Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Prafly Technology Co Ltd filed Critical Shenzhen Prafly Technology Co Ltd
Priority to CN201710048267.1A priority Critical patent/CN108335308A/en
Publication of CN108335308A publication Critical patent/CN108335308A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07FCOIN-FREED OR LIKE APPARATUS
    • G07F17/00Coin-freed apparatus for hiring articles; Coin-freed facilities or services
    • G07F17/0014Coin-freed apparatus for hiring articles; Coin-freed facilities or services for vending, access and use of specific services not covered anywhere else in G07F17/00
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07FCOIN-FREED OR LIKE APPARATUS
    • G07F17/00Coin-freed apparatus for hiring articles; Coin-freed facilities or services
    • G07F17/0064Coin-freed apparatus for hiring articles; Coin-freed facilities or services for processing of food articles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Food Science & Technology (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)

Abstract

An automatic orange detection method, system and intelligent robot retail terminal. The method includes: acquiring an RGB image and a depth image; registering the RGB image against the depth image; converting the registered RGB image to the HSV colour space and processing the H, S and V channels to obtain an initial position range of the oranges; and, within the determined initial position range, performing edge detection and edge segmentation on the depth image to obtain accurate position information and edge information of the oranges. The invention obtains the accurate position and edge information of the oranges and can assist a mechanical arm in grasping them. Even when oranges are stacked, executing the method detects the visible oranges in the acquired images so that they can be grasped; oranges that are largely or completely occluded are left untouched for the moment, and the method is executed again for image acquisition and orange detection before the next grasp.

Description

Automatic orange detection method, system and intelligent robot retail terminal
Technical field
The present invention relates to the field of machine vision, and more particularly to an automatic orange detection method and system, and an intelligent robot retail terminal.
Background technology
With the development of artificial intelligence, almost every modern technology now involves it in some way; it is fair to say that artificial intelligence has been widely applied in many fields. Robots have been used on production lines for a very long time, but early industrial mechanical arms and most of today's entertainment robots do not perceive their environment; they only perform limb movements or travel from one point to another. Robots are now entering daily life and the entertainment field. Intelligent retail is also becoming increasingly popular, and intelligent robot retail terminals are a future direction of development; selling oranges is one important application of such terminals. Enabling a robot to freely grasp oranges by vision therefore lays the groundwork for free grasping by a mechanical arm.
Invention content
The technical problem to be solved by the present invention is to provide, in view of the above drawbacks of the prior art, an automatic orange detection method, a system and an intelligent robot retail terminal.
The technical solution adopted by the present invention to solve this problem is to construct an automatic orange detection method, including:
acquiring an RGB image and a depth image;
converting the RGB image to the HSV colour space and processing the H, S and V channels to obtain an initial position range of the oranges;
within the determined initial position range, performing edge detection and edge segmentation on the depth image to obtain accurate position information and edge information of the oranges.
In the automatic orange detection method of the present invention, the method further includes, before converting the RGB image to the HSV colour space: registering the RGB image against the depth image.
In the automatic orange detection method of the present invention, the registration includes: using the homography relationship between the RGB image and the depth image defined by the acquisition device, determining, with the depth image as the reference, the RGB information at the position corresponding to each pixel of the depth image, so as to obtain an RGB image of the same size as the depth image.
In the automatic orange detection method of the present invention, obtaining the initial position range of the oranges includes:
converting the RGB image to the HSV colour space to obtain a full image for each of the H, S and V channels;
extracting from the full H-channel image the pixels within a first preset pixel-value range as an H-channel processed image, and extracting from the full S-channel image the pixels within a second preset pixel-value range as an S-channel processed image;
taking the position intersection of the H-channel processed image, the S-channel processed image and the full V-channel image to obtain the initial position range.
In the automatic orange detection method of the present invention, obtaining the accurate position information and edge information of the oranges includes:
expanding the initial position range outward by a preset number of pixels to obtain an optimized position range;
within the optimized position range, performing edge detection on the depth image to obtain an edge image;
searching the detected edge image for the corresponding contours, and filtering the found contours according to a contour-extraction strategy to obtain the accurate edge information;
extracting the depth information in the depth image corresponding to the edge information to obtain the accurate position information.
In the automatic orange detection method of the present invention, the contour-extraction strategy includes: filtering out contours whose enclosed area is smaller than a preset pixel value; filtering out contours lying on the border of the depth map; and filtering out contours whose circularity is below a circularity threshold.
The invention also discloses an automatic orange detection system, including:
an acquisition unit for acquiring an RGB image and a depth image;
a first processing unit for converting the registered RGB image to the HSV colour space and processing the H, S and V channels to obtain an initial position range of the oranges;
a second processing unit for performing, within the determined initial position range, edge detection and edge segmentation on the depth image to obtain accurate position information and edge information of the oranges.
In the automatic orange detection system of the present invention, the system further includes a registration unit for registering the RGB image against the depth image. The registration includes: using the homography relationship between the RGB image and the depth image defined by the acquisition unit, determining, with the depth image as the reference, the RGB information at the position corresponding to each pixel of the depth image, so as to obtain an RGB image of the same size as the depth image.
In the automatic orange detection system of the present invention,
the first processing unit includes:
a colour-space conversion subunit for converting the RGB image to the HSV colour space to obtain a full image for each of the H, S and V channels;
a channel processing subunit for extracting from the full H-channel image the pixels within a first preset pixel-value range as an H-channel processed image, and extracting from the full S-channel image the pixels within a second preset pixel-value range as an S-channel processed image;
a channel intersection subunit for taking the position intersection of the H-channel processed image, the S-channel processed image and the full V-channel image to obtain the initial position range;
the second processing unit includes:
a range optimization subunit for expanding the initial position range outward by a preset number of pixels to obtain an optimized position range;
an edge detection subunit for performing, within the optimized position range, edge detection on the depth image to obtain an edge image;
an edge information subunit for searching the detected edge image for the corresponding contours and filtering the found contours according to a contour-extraction strategy to obtain the accurate edge information, wherein the contour-extraction strategy includes: filtering out contours whose enclosed area is smaller than a preset pixel value; filtering out contours lying on the border of the depth map; and filtering out contours whose circularity is below a circularity threshold;
a position information subunit for extracting the depth information in the depth image corresponding to the edge information to obtain the accurate position information.
The invention also discloses an intelligent robot retail terminal including the automatic orange detection system.
Implementing the automatic orange detection method, system and intelligent robot retail terminal of the present invention has the following beneficial effects: by acquiring an RGB image and a depth image and processing them through registration, colour-space conversion, edge detection and segmentation, the accurate position information and edge information of the oranges can be obtained and a mechanical arm can be assisted in grasping them. Even when oranges are stacked, executing the method detects the visible oranges in the acquired images so that they can be grasped; oranges that are largely or completely occluded are left untouched for the moment, and the method is executed again for image acquisition and orange detection before the next grasp.
Description of the drawings
In order to explain the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort:
Fig. 1 is a flow chart of a preferred embodiment of the automatic orange detection method of the present invention;
Fig. 2 is the flow chart of step S3 in Fig. 1;
Fig. 3 is the flow chart of step S4 in Fig. 1.
Specific implementation mode
For a better understanding of the above technical solution, it is described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific features in the embodiments are a detailed explanation of the technical solution of the application rather than a limitation of it, and that, where there is no conflict, the technical features in the embodiments can be combined with one another.
Referring to Fig. 1, which is a flow chart of a preferred embodiment of the automatic orange detection method of the present invention, the method includes:
S1, acquiring an RGB image and a depth image;
In the preferred embodiment, a Kinect 3D sensor camera is used to acquire the RGB image and the depth image simultaneously.
S2, registering the RGB image against the depth image;
Because the RGB image captured by the Kinect camera is larger than the depth image, the two must be registered so that their sizes match. Specifically, the registration is: using the homography relationship between the RGB image and the depth image defined by the Kinect camera, determine, with the depth image as the reference, the RGB information at the position corresponding to each pixel of the depth image, thereby obtaining an RGB image of the same size as the depth image.
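By way of illustration only, this registration step could be sketched as follows in Python with OpenCV. The single fixed homography H and the helper name register_rgb_to_depth are assumptions for the example; the actual Kinect mapping between the colour and depth frames comes from the device calibration and is depth-dependent.

```python
import cv2
import numpy as np

def register_rgb_to_depth(rgb, depth, H):
    """Warp the (larger) RGB image onto the depth image grid.

    rgb   : HxWx3 uint8 colour image from the sensor
    depth : hxw   uint16 depth image
    H     : 3x3 homography mapping depth-pixel coordinates to RGB-pixel
            coordinates (assumed to come from the device calibration)
    """
    h, w = depth.shape[:2]
    # For each depth pixel (x, y), look up the colour at H * (x, y, 1).
    # warpPerspective with WARP_INVERSE_MAP applies exactly this lookup.
    registered = cv2.warpPerspective(
        rgb, H, (w, h), flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    return registered  # same size as the depth image, one RGB value per depth pixel
```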
S3, converting the registered RGB image to the HSV colour space and processing the H, S and V channels to obtain the initial position range of the oranges;
Because the RGB colour space differs greatly from human colour perception, whereas the HSV colour space is closer to the way the human eye perceives colour and matches human visual characteristics, the registered RGB image is converted to the HSV colour space.
S4, performing, within the determined initial position range, edge detection and edge segmentation on the depth image to obtain the accurate position information and edge information of the oranges.
Referring to Fig. 2, step S3 specifically includes:
S31, converting the registered RGB image to the HSV colour space to obtain a full image for each of the H, S and V channels, where the transformation formulas are as follows:
V = max(r, g, b)
S = 0 if V = 0, otherwise S = (V - min(r, g, b)) / V
H = 60 × (g - b) / (V - min(r, g, b)) if V = r; H = 120 + 60 × (b - r) / (V - min(r, g, b)) if V = g; H = 240 + 60 × (r - g) / (V - min(r, g, b)) if V = b; and H = H + 360 if H < 0.
In the formulas, r, g and b denote the red, green and blue components of the RGB image space, max denotes the maximum value and min denotes the minimum value.
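As an illustrative sketch of this conversion (not part of the disclosure), OpenCV can produce the three channel images directly; note that for 8-bit images OpenCV encodes H in [0, 179] and S, V in [0, 255] rather than the normalized ranges of the formulas above.

```python
import cv2

def to_hsv_channels(rgb_registered):
    """Convert the registered RGB image to HSV and return the H, S, V planes."""
    # OpenCV's default colour order is BGR; the sensor delivers RGB, so convert accordingly.
    hsv = cv2.cvtColor(rgb_registered, cv2.COLOR_RGB2HSV)
    h, s, v = cv2.split(hsv)
    return h, s, v  # H in [0, 179], S and V in [0, 255] for uint8 input
```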
S32, extracting from the full H-channel image the pixels within the first preset pixel-value range as the H-channel processed image, and extracting from the full S-channel image the pixels within the second preset pixel-value range as the S-channel processed image;
For example, in the preferred embodiment the first preset pixel-value range for the H channel is 25 to 90, i.e. the pixels of the H channel whose values lie between 25 and 90 are selected to obtain the H-channel processed image HM; the second preset pixel-value range for the S channel is 0.4 to 1, i.e. the pixels of the S channel whose values lie between 0.4 and 1 are selected to obtain the S-channel processed image SM; the V channel is not restricted, and the full image VM is taken.
It will be understood that the specific selection ranges of the H, S and V channels can be set according to the actual colour characteristics of the oranges and are not limited to the ranges above.
S33, taking the position intersection of the H-channel processed image, the S-channel processed image and the full V-channel image to obtain the initial position range. That is, the intersection image HSVM of the images HM, SM and VM is taken as the initial position range.
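A sketch of steps S32 and S33, assuming the 8-bit OpenCV HSV encoding so that the S range 0.4 to 1 scales to roughly 102 to 255; the thresholds are the example values of the preferred embodiment, and the function name initial_orange_mask is illustrative.

```python
import numpy as np

def initial_orange_mask(h, s, v, h_range=(25, 90), s_range=(0.4, 1.0)):
    """Return the binary mask HSVM marking the initial position range of the oranges.

    h, s, v : uint8 channel images from the HSV conversion (OpenCV encoding).
    """
    hm = (h >= h_range[0]) & (h <= h_range[1])                # H-channel processed image HM
    sm = (s >= s_range[0] * 255) & (s <= s_range[1] * 255)    # S-channel processed image SM
    vm = np.ones_like(hm, dtype=bool)                         # V channel is not restricted (VM)
    hsvm = hm & sm & vm                                       # position intersection HSVM
    return hsvm.astype(np.uint8) * 255                        # binary mask, 255 inside the range
```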
Referring to Fig. 3, step S4 specifically includes:
S41, expanding the initial position range outward by a preset number of pixels to obtain the optimized position range;
This step prevents the image intersection from being too small; in the preferred embodiment the boundary of the intersection image HSVM is expanded outward by 5 pixels.
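One way to realise this outward expansion is morphological dilation of the mask; the sketch below assumes the 5-pixel value of the preferred embodiment.

```python
import cv2
import numpy as np

def expand_mask(hsvm, pixels=5):
    """Grow the initial-position mask outward by `pixels` to avoid an over-tight intersection."""
    kernel = np.ones((2 * pixels + 1, 2 * pixels + 1), np.uint8)  # e.g. an 11x11 square element
    return cv2.dilate(hsvm, kernel)
```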
S42, within the optimized position range, performing edge detection on the depth image to obtain an edge image;
That is, edge detection is performed on the intersection image DEP-HSVM of the depth image DEP and the intersection image HSVM. In this step the Laplacian detection algorithm is applied to the DEP-HSVM image to obtain the edge image EDG, with a Gaussian kernel of size 7 × 7.
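A sketch of step S42 under a Laplacian-of-Gaussian interpretation: the depth image is restricted to the optimized position range, smoothed with the 7 × 7 Gaussian kernel mentioned above and then passed through the Laplacian; the binarisation threshold is an assumption for illustration.

```python
import cv2
import numpy as np

def detect_edges(depth, mask, edge_threshold=8.0):
    """Edge image EDG from the depth image DEP within the mask (DEP-HSVM)."""
    # Keep only depth values inside the optimized position range.
    dep_hsvm = cv2.bitwise_and(depth, depth, mask=mask)
    # Work in float to preserve the sign of the Laplacian response.
    smoothed = cv2.GaussianBlur(dep_hsvm.astype(np.float32), (7, 7), 0)
    lap = cv2.Laplacian(smoothed, cv2.CV_32F)
    edges = (np.abs(lap) > edge_threshold).astype(np.uint8) * 255
    return edges
```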
S43, searching the detected edge image for the corresponding contours, and filtering the found contours according to the contour-extraction strategy to obtain the accurate edge information;
S44, extracting the depth information in the depth image corresponding to the edge information to obtain the accurate position information.
Because relatively many contours are found, a contour-extraction strategy needs to be set in step S43 for filtering (a code sketch is given after the circularity formulas below). The contour-extraction strategy includes:
a) filtering out contours whose enclosed area is smaller than a preset pixel value; for example, in the preferred embodiment the preset pixel value is set to 50 by experimental verification, i.e. contours enclosing an area smaller than 50 pixels are filtered out;
b) filtering out contours lying on the border of the depth map;
c) filtering out contours whose circularity is below a circularity threshold; for example, in the preferred embodiment the circularity threshold is set to 1.2 by experimental verification, i.e. contours with a circularity below 1.2 are discarded. The circularity of a contour is calculated as follows:
(1) Calculate the centre of gravity:
The moment set of a bounded function f(x, y) of two arguments is defined as M_jk = Σ_x Σ_y x^j y^k f(x, y), where j and k may take any non-negative integer values. The centre of gravity is then calculated as x̄ = M10 / M00 and ȳ = M01 / M00,
where (x̄, ȳ) denotes the centre of gravity, M00 is the zero-order moment of the image contour, M01 and M10 are its two first-order moments, and f(x, y) is the pixel value of the image at position (x, y).
(2) Calculate the circularity, using the formula C = μR / σR,
where C denotes the circularity, μR is the mean distance from the region centre of gravity (x̄, ȳ) to the contour points, and σR is the standard deviation of the distances from the region centre of gravity to the contour points. As a region R tends to a circle, the circularity C increases monotonically and tends to infinity; it is unaffected by translation, rotation and scale changes of the region.
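A sketch of the contour-extraction strategy of step S43, using the example thresholds of the preferred embodiment (area 50 pixels, circularity 1.2) and the moment, centroid and circularity definitions above; the function names and the border margin are illustrative.

```python
import cv2
import numpy as np

def circularity(contour):
    """Circularity C = muR / sigmaR of a contour (larger means rounder)."""
    m = cv2.moments(contour)
    if m["m00"] == 0:
        return 0.0
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]        # centre of gravity
    pts = contour.reshape(-1, 2).astype(np.float64)
    dists = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)         # distances centroid -> contour points
    mu_r, sigma_r = dists.mean(), dists.std()
    return float("inf") if sigma_r == 0 else mu_r / sigma_r

def filter_contours(edges, min_area=50, min_circularity=1.2, border=2):
    """Apply the contour-extraction strategy and return the kept orange contours."""
    h, w = edges.shape[:2]
    # OpenCV >= 4 return signature: (contours, hierarchy).
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    kept = []
    for c in contours:
        if cv2.contourArea(c) < min_area:                    # a) area below the preset value
            continue
        x, y, cw, ch = cv2.boundingRect(c)
        if x <= border or y <= border or x + cw >= w - border or y + ch >= h - border:
            continue                                         # b) contour lying on the depth-map border
        if circularity(c) < min_circularity:                 # c) not round enough
            continue
        kept.append(c)
    return kept
```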
The contours remaining after filtering are the contours of the target oranges. The size of each target orange can be obtained from its contour, and each contour has corresponding X, Y and Z coordinates in the depth map. Through the calibration between the depth camera and the mechanical-arm coordinate system, coordinates in the depth-camera frame can be converted into coordinates of the mechanical arm, so the coordinate position of each orange in the whole arm coordinate system is known and the manipulator can accurately grasp an orange at the position computed by vision.
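As a sketch of how a kept contour might be turned into a grasp point in the arm coordinate system, the example below assumes pinhole intrinsics (fx, fy, cx0, cy0) for the depth camera, depth values in millimetres and a 4 × 4 hand-eye calibration matrix T_cam_to_arm; these names are assumptions, since the patent only states that depth-camera coordinates are converted through the camera-to-arm calibration.

```python
import cv2
import numpy as np

def grasp_point_in_arm_frame(contour, depth, fx, fy, cx0, cy0, T_cam_to_arm):
    """Back-project the contour centroid to camera coordinates, then to arm coordinates."""
    m = cv2.moments(contour)
    u, v = m["m10"] / m["m00"], m["m01"] / m["m00"]   # pixel centroid of the orange
    z = float(depth[int(round(v)), int(round(u))])    # depth at the centroid
    # Pinhole back-projection: pixel (u, v) and depth z -> camera-frame point.
    x = (u - cx0) * z / fx
    y = (v - cy0) * z / fy
    p_cam = np.array([x, y, z, 1.0])
    p_arm = T_cam_to_arm @ p_cam                      # hand-eye calibration transform
    return p_arm[:3]                                  # X, Y, Z in the arm coordinate system
```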
It should be noted that, for stacked oranges, the present invention only detects oranges that are entirely visible or largely exposed to the camera; oranges that are largely or completely occluded are not handled for the moment and their contours are discarded. After the manipulator has taken away the oranges in front, steps S1 to S4 are executed again for detection.
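Assuming the sketches above are collected in one module, one detection cycle could be chained roughly as follows; this is illustrative only, and the parameters (H, fx, fy, cx0, cy0, T_cam_to_arm) are assumed to come from the sensor SDK and the camera/arm calibration.

```python
def detect_and_locate_oranges(rgb, depth, H, fx, fy, cx0, cy0, T_cam_to_arm):
    """End-to-end sketch chaining the helper functions defined above (S2 to S44)."""
    rgb_reg = register_rgb_to_depth(rgb, depth, H)    # S2: registration
    h, s, v = to_hsv_channels(rgb_reg)                # S31: HSV channel images
    hsvm = initial_orange_mask(h, s, v)               # S32-S33: initial position range
    mask = expand_mask(hsvm, pixels=5)                # S41: optimized position range
    edges = detect_edges(depth, mask)                 # S42: edge image from the depth map
    contours = filter_contours(edges)                 # S43: contour-extraction strategy
    return [grasp_point_in_arm_frame(c, depth, fx, fy, cx0, cy0, T_cam_to_arm)
            for c in contours]                        # S44: one grasp point per detected orange
```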
Correspondingly, the invention also discloses an automatic orange detection system and an intelligent robot retail terminal including the automatic orange detection system. The automatic orange detection system includes:
an acquisition unit for acquiring an RGB image and a depth image;
a registration unit for registering the RGB image against the depth image, wherein the registration includes: using the homography relationship between the RGB image and the depth image defined by the acquisition unit, determining, with the depth image as the reference, the RGB information at the position corresponding to each pixel of the depth image, so as to obtain an RGB image of the same size as the depth image;
a first processing unit for converting the registered RGB image to the HSV colour space and processing the H, S and V channels to obtain the initial position range of the oranges;
a second processing unit for performing, within the determined initial position range, edge detection and edge segmentation on the depth image to obtain the accurate position information and edge information of the oranges.
The first processing unit includes:
a colour-space conversion subunit for converting the registered RGB image to the HSV colour space to obtain a full image for each of the H, S and V channels;
a channel processing subunit for extracting from the full H-channel image the pixels within the first preset pixel-value range as the H-channel processed image, and extracting from the full S-channel image the pixels within the second preset pixel-value range as the S-channel processed image;
a channel intersection subunit for taking the position intersection of the H-channel processed image, the S-channel processed image and the full V-channel image to obtain the initial position range.
The second processing unit includes:
a range optimization subunit for expanding the initial position range outward by a preset number of pixels to obtain the optimized position range;
an edge detection subunit for performing, within the optimized position range, edge detection on the depth image to obtain an edge image;
an edge information subunit for searching the detected edge image for the corresponding contours and filtering the found contours according to the contour-extraction strategy to obtain the accurate edge information, wherein the contour-extraction strategy includes: filtering out contours whose enclosed area is smaller than a preset pixel value; filtering out contours lying on the border of the depth map; and filtering out contours whose circularity is below a circularity threshold;
a position information subunit for extracting the depth information in the depth image corresponding to the edge information to obtain the accurate position information.
In conclusion, implementing the automatic orange detection method, system and intelligent robot retail terminal of the present invention has the following advantageous effects: by acquiring an RGB image and a depth image and processing them through registration, colour-space conversion, edge detection and segmentation, the accurate position information and edge information of the oranges can be obtained and a mechanical arm can be assisted in grasping them; even when oranges are stacked, executing the method detects the visible oranges in the acquired images so that they can be grasped, oranges that are largely or completely occluded are left untouched for the moment, and the method is executed again for image acquisition and orange detection before the next grasp.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to the specific embodiments described; the embodiments are merely illustrative rather than restrictive. Inspired by the present invention, those skilled in the art can devise many other forms without departing from the purpose of the invention and the scope protected by the claims, and all of these fall within the protection of the present invention.

Claims (10)

1. An automatic orange detection method, characterized by comprising:
acquiring an RGB image and a depth image;
converting the RGB image to the HSV colour space and processing the H, S and V channels to obtain an initial position range of the oranges;
within the determined initial position range, performing edge detection and edge segmentation on the depth image to obtain accurate position information and edge information of the oranges.
2. The automatic orange detection method according to claim 1, characterized in that, before converting the RGB image to the HSV colour space, the method further comprises: registering the RGB image against the depth image.
3. The automatic orange detection method according to claim 2, characterized in that the registration comprises: using the homography relationship between the RGB image and the depth image defined by the acquisition device, determining, with the depth image as the reference, the RGB information at the position corresponding to each pixel of the depth image, so as to obtain an RGB image of the same size as the depth image.
4. The automatic orange detection method according to claim 1, characterized in that obtaining the initial position range of the oranges comprises:
converting the RGB image to the HSV colour space to obtain a full image for each of the H, S and V channels;
extracting from the full H-channel image the pixels within a first preset pixel-value range as an H-channel processed image, and extracting from the full S-channel image the pixels within a second preset pixel-value range as an S-channel processed image;
taking the position intersection of the H-channel processed image, the S-channel processed image and the full V-channel image to obtain the initial position range.
5. The automatic orange detection method according to claim 1, characterized in that obtaining the accurate position information and edge information of the oranges comprises:
expanding the initial position range outward by a preset number of pixels to obtain an optimized position range;
within the optimized position range, performing edge detection on the depth image to obtain an edge image;
searching the detected edge image for the corresponding contours, and filtering the found contours according to a contour-extraction strategy to obtain the accurate edge information;
extracting the depth information in the depth image corresponding to the edge information to obtain the accurate position information.
6. The automatic orange detection method according to claim 5, characterized in that the contour-extraction strategy comprises: filtering out contours whose enclosed area is smaller than a preset pixel value; filtering out contours lying on the border of the depth map; and filtering out contours whose circularity is below a circularity threshold.
7. An automatic orange detection system, characterized by comprising:
an acquisition unit for acquiring an RGB image and a depth image;
a first processing unit for converting the registered RGB image to the HSV colour space and processing the H, S and V channels to obtain an initial position range of the oranges;
a second processing unit for performing, within the determined initial position range, edge detection and edge segmentation on the depth image to obtain accurate position information and edge information of the oranges.
8. The automatic orange detection system according to claim 7, characterized in that the system further comprises a registration unit for registering the RGB image against the depth image, wherein the registration comprises: using the homography relationship between the RGB image and the depth image defined by the acquisition unit, determining, with the depth image as the reference, the RGB information at the position corresponding to each pixel of the depth image, so as to obtain an RGB image of the same size as the depth image.
9. The automatic orange detection system according to claim 7, characterized in that
the first processing unit comprises:
a colour-space conversion subunit for converting the RGB image to the HSV colour space to obtain a full image for each of the H, S and V channels;
a channel processing subunit for extracting from the full H-channel image the pixels within a first preset pixel-value range as an H-channel processed image, and extracting from the full S-channel image the pixels within a second preset pixel-value range as an S-channel processed image;
a channel intersection subunit for taking the position intersection of the H-channel processed image, the S-channel processed image and the full V-channel image to obtain the initial position range;
the second processing unit comprises:
a range optimization subunit for expanding the initial position range outward by a preset number of pixels to obtain an optimized position range;
an edge detection subunit for performing, within the optimized position range, edge detection on the depth image to obtain an edge image;
an edge information subunit for searching the detected edge image for the corresponding contours and filtering the found contours according to a contour-extraction strategy to obtain the accurate edge information, wherein the contour-extraction strategy comprises: filtering out contours whose enclosed area is smaller than a preset pixel value; filtering out contours lying on the border of the depth map; and filtering out contours whose circularity is below a circularity threshold;
a position information subunit for extracting the depth information in the depth image corresponding to the edge information to obtain the accurate position information.
10. An intelligent robot retail terminal, characterized by comprising the automatic orange detection system according to any one of claims 7 to 9.
CN201710048267.1A 2017-01-20 2017-01-20 Automatic orange detection method, system and intelligent robot retail terminal Pending CN108335308A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710048267.1A CN108335308A (en) 2017-01-20 2017-01-20 Automatic orange detection method, system and intelligent robot retail terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710048267.1A CN108335308A (en) 2017-01-20 2017-01-20 Automatic orange detection method, system and intelligent robot retail terminal

Publications (1)

Publication Number Publication Date
CN108335308A true CN108335308A (en) 2018-07-27

Family

ID=62922071

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710048267.1A Pending CN108335308A (en) Automatic orange detection method, system and intelligent robot retail terminal

Country Status (1)

Country Link
CN (1) CN108335308A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021042693A1 (en) * 2019-09-04 2021-03-11 五邑大学 Mining process-based method for acquiring three-dimensional coordinates of ore and apparatus therefor
CN114029951A (en) * 2021-11-10 2022-02-11 盐城工学院 Robot autonomous recognition intelligent grabbing method based on depth camera
CN114565517A (en) * 2021-12-29 2022-05-31 骨圣元化机器人(深圳)有限公司 Image denoising method and device for infrared camera and computer equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971380A (en) * 2014-05-05 2014-08-06 中国民航大学 Pedestrian trailing detection method based on RGB-D
CN104700404A (en) * 2015-03-02 2015-06-10 中国农业大学 Fruit location identification method
CN105139407A (en) * 2015-09-08 2015-12-09 江苏大学 Color depth matching plant identification method based on Kinect sensor

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971380A (en) * 2014-05-05 2014-08-06 中国民航大学 Pedestrian trailing detection method based on RGB-D
CN104700404A (en) * 2015-03-02 2015-06-10 中国农业大学 Fruit location identification method
CN105139407A (en) * 2015-09-08 2015-12-09 江苏大学 Color depth matching plant identification method based on Kinect sensor

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021042693A1 (en) * 2019-09-04 2021-03-11 五邑大学 Mining process-based method for acquiring three-dimensional coordinates of ore and apparatus therefor
CN114029951A (en) * 2021-11-10 2022-02-11 盐城工学院 Robot autonomous recognition intelligent grabbing method based on depth camera
CN114029951B (en) * 2021-11-10 2022-05-10 盐城工学院 Robot autonomous recognition intelligent grabbing method based on depth camera
CN114565517A (en) * 2021-12-29 2022-05-31 骨圣元化机器人(深圳)有限公司 Image denoising method and device for infrared camera and computer equipment
CN114565517B (en) * 2021-12-29 2023-09-29 骨圣元化机器人(深圳)有限公司 Image denoising method and device of infrared camera and computer equipment

Similar Documents

Publication Publication Date Title
CN108510491B (en) Method for filtering human skeleton key point detection result under virtual background
CN105335725B (en) A kind of Gait Recognition identity identifying method based on Fusion Features
CN105144710B (en) For the technology for the precision for increasing depth camera image
CN109272513B (en) Depth camera-based hand and object interactive segmentation method and device
CN104134209B (en) A kind of feature extracting and matching method and system in vision guided navigation
CN108765278A (en) A kind of image processing method, mobile terminal and computer readable storage medium
CN102999918A (en) Multi-target object tracking system of panorama video sequence image
CN110032932B (en) Human body posture identification method based on video processing and decision tree set threshold
KR101753097B1 (en) Vehicle detection method, data base for the vehicle detection, providing method of data base for the vehicle detection
KR20120069331A (en) Method of separating front view and background
CN108335308A (en) A kind of orange automatic testing method, system and intelligent robot retail terminal
CN108377374A (en) Method and system for generating depth information related to an image
CN104282027B (en) Circle detecting method based on Hough transformation
CN106650628B (en) Fingertip detection method based on three-dimensional K curvature
CN106504262A (en) A kind of small tiles intelligent locating method of multiple features fusion
CN111192326B (en) Method and system for visually identifying direct-current charging socket of electric automobile
CN114445440A (en) Obstacle identification method applied to self-walking equipment and self-walking equipment
CN111369529B (en) Article loss and leave-behind detection method and system
CN107491714B (en) Intelligent robot and target object identification method and device thereof
CN112101260A (en) Method, device, equipment and storage medium for identifying safety belt of operator
CN111127556A (en) Target object identification and pose estimation method and device based on 3D vision
CN111160280A (en) RGBD camera-based target object identification and positioning method and mobile robot
CN108805838A (en) A kind of image processing method, mobile terminal and computer readable storage medium
JP2020021212A (en) Information processing device, information processing method, and program
JP2019211981A (en) Information processor, information processor controlling method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180727