CN110674762A - Method for detecting a human body during autonomous baby-strolling - Google Patents

Method for detecting a human body during autonomous baby-strolling

Info

Publication number
CN110674762A
Authority
CN
China
Prior art keywords
guardian
child
camera
face image
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910922687.7A
Other languages
Chinese (zh)
Other versions
CN110674762B (en)
Inventor
肖刚军
邓文拔
姜新桥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Amicro Semiconductor Co Ltd
Original Assignee
Zhuhai Amicro Semiconductor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Amicro Semiconductor Co Ltd filed Critical Zhuhai Amicro Semiconductor Co Ltd
Priority to CN201910922687.7A
Publication of CN110674762A
Application granted
Publication of CN110674762B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G08B21/0202: Child monitoring systems using a transmitter-receiver system carried by the parent and the child
    • G08B21/0266: System arrangements wherein the object is to detect the exact distance between parent and child or surveyor and item


Abstract

The invention discloses a method for detecting a human body during autonomous baby-strolling. The method is suitable for an autonomous baby-strolling robot having a vertical connecting rod and a child seat, where a second camera whose viewing angle covers the child seat and the surrounding environment is mounted at the middle of the vertical connecting rod. The method comprises the following steps: pre-storing images of the guardian, including face image information; calling the second camera to acquire image information of the environment around the child seat and detecting whether a human body appears within a preset safety distance of the child; if so, analyzing whether the face image captured by the second camera matches the pre-stored face image of the guardian; and if not, sending a position request to the guardian's mobile device and then, according to the response position information received by the robot, searching in real time for a path that tracks the guardian while keeping away from obstacles.

Description

Method for detecting a human body during autonomous baby-strolling
Technical Field
The invention relates to the technical field of strollers, and in particular to a method for detecting a human body during autonomous baby-strolling.
Background
A stroller is a utility cart designed to make outdoor activities with children convenient. It comes in many types, is a favorite riding vehicle for babies, and is an essential product when parents take babies out shopping or strolling. Modern stroller manufacturers have introduced various styles, such as folding, portable, flexible and shock-resistant models, to meet the varied needs of parents and babies.
In every family with children, the stroller has become a good helper for young parents raising children. However, adults cannot keep an eye on a child at all times, and because of their age children cannot protect themselves adequately, so a child's safety is threatened if an unknown person attempts to take away the stroller or the child while the adult is away from it. It is therefore necessary to provide a solution that detects human bodies around the child seat of a stroller.
Disclosure of Invention
In view of the above technical problems, the invention provides a method for detecting a human body during autonomous baby-strolling. The method is suitable for an autonomous baby-strolling robot having a vertical connecting rod and a child seat, where a second camera whose viewing angle covers the child seat and the surrounding environment is mounted at the middle of the vertical connecting rod. The method comprises the following steps: pre-storing images of the guardian, including face image information; calling the second camera to acquire image information of the environment around the child seat and detecting whether a human body appears within a preset safety distance of the child; if so, analyzing whether the face image captured by the second camera matches the pre-stored face image of the guardian; and if not, sending a position request to the guardian's mobile device and then, according to the response position information received by the robot, searching in real time for a path that tracks the guardian while keeping away from obstacles. With this technical scheme, measures against the child being carried away are taken within a preset range on the autonomous baby-strolling robot, and the guardian can quickly track the child's safety in real time.
Further, when the position of the child's center of gravity is detected to be more than the preset safety distance away from the child seat, the current position information is located in real time and a reminder is sent to the guardian's mobile device. This plans a safe activity area for the child on the seat and provides an initial safeguard against the child moving into a dangerous area beside the autonomous baby-strolling robot.
Further, the preset safety distance ranges from 10 cm to 40 cm, which improves detection accuracy.
Further, the method for determining whether the face image captured by the second camera matches the pre-stored face image of the guardian is as follows: calculate whether the key-point similarity between the face image captured by the second camera and the pre-stored face image of the guardian reaches a preset threshold or above; if so, the captured face image is determined to match the pre-stored face image of the guardian, otherwise it does not match. This improves the completeness of the detection method.
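For illustration only, a minimal sketch of this key-point comparison is given below. It assumes the face key points have already been extracted by some detector as normalized (x, y) coordinate arrays; the similarity formula, the key-point layout and the function names are assumptions made for the sketch, not the patent's actual implementation.

```python
import numpy as np

def keypoint_similarity(captured_pts, stored_pts):
    """Similarity in [0, 1] between two (N, 2) arrays of normalized face
    key points: 1 minus the mean point-to-point distance, clipped at 0."""
    captured = np.asarray(captured_pts, dtype=float)
    stored = np.asarray(stored_pts, dtype=float)
    mean_dist = float(np.linalg.norm(captured - stored, axis=1).mean())
    return max(0.0, 1.0 - mean_dist)

def matches_guardian(captured_pts, stored_pts, threshold=0.9):
    """True if the captured face is treated as the guardian's face
    (the 0.9 threshold mirrors the 90% value used in this embodiment)."""
    return keypoint_similarity(captured_pts, stored_pts) >= threshold
```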
Further, the preset threshold is 90%, which improves the accuracy of image matching.
Further, if no human body is detected within the preset safety distance of the child, execution of the human body detection method is stopped. Likewise, if a human body appears within the preset safety distance of the child and the face image captured by the second camera is judged to match the pre-stored face image of the guardian, execution of the method is also stopped. This improves the completeness of the detection method.
Drawings
Fig. 1 is a schematic structural view of an autonomous baby-strolling robot according to an embodiment of the present invention.
Fig. 2 is a flowchart of an automatic baby-strolling method based on the autonomous baby-strolling robot according to an embodiment of the present invention.
Fig. 3 is a flowchart of a method for detecting a surrounding human body according to an embodiment of the present invention.
Fig. 4 is a flowchart of a method for detecting the environmental conditions around a moving path according to an embodiment of the present invention.
Reference numerals:
101. vertical connecting rod with camera 1011; 102. hand-push handle; 103. driving wheel; 104. child seat; 105. child armrest with camera 1051; 106. universal wheel; 107. mobile base.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail below with reference to the accompanying drawings in the embodiments of the present invention. To further illustrate the various embodiments, the invention provides the accompanying drawings. The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the embodiments. Those skilled in the art will appreciate still other possible embodiments and advantages of the present invention with reference to these figures. Elements in the figures are not drawn to scale and like reference numerals are generally used to indicate like elements.
The invention provides an autonomous baby-strolling robot comprising a hand-push handle 102, a vertical connecting rod 101, a child seat 104, a child armrest 105 and a mobile base 107, wherein the vertical connecting rod 101 is fitted to the hand-push handle 102. The child seat 104 is mounted above the mobile base 107, and together they form the plastic-and-metal assembly of the whole vehicle. The middle section of the child armrest 105 is horizontal, so that a child can hold the armrest with both hands and keep balance. The lower part of the child armrest 105 is mounted on the mobile base 107 through a first fixing bracket (not shown in the figure); the first fixing bracket also houses a first camera 1051, which is fixed by a hand-tightened nut device (not marked in the figure). The viewing angle of the first camera 1051 covers the surroundings of the mobile base 107 in its advancing direction, and its depression angle allows it to detect obstacles or pedestrian trajectories around the advancing direction of the mobile base 107. The lower part of the vertical connecting rod 101 is mounted at the rear end of the child seat 104 through a second fixing bracket, and a second camera 1011 is arranged at the middle of the vertical connecting rod 101, higher than a child riding in the child seat 104; it is likewise fixed by a hand-tightened nut device (not marked in the figure). The viewing angle of the second camera 1011 covers the child seat 104 and the surrounding environment, so it can at least acquire moving images of the child sitting on the child seat 104, and its elevation angle covers the face images of the guardian or pedestrians accompanying the autonomous baby-strolling robot. The hand-tightened nut device includes a hand-tightened nut, a positioning ring and a lead screw; the lead screw passes in turn through the positioning ring and the mounting hole of the camera and is screwed into the hand-tightened nut. Both parts are slightly tapered, which effectively prevents the camera from rocking back and forth on the fixing bracket and fastens the two firmly together. In addition, a visual positioning and mapping module is integrated inside the mobile base 107 and is connected by cable to the first camera 1051 and the second camera 1011. The module contains a chip for navigation, positioning and simultaneous map construction together with peripheral circuits; it acquires the images of the first camera 1051 and the second camera 1011, processes them, and extracts two-dimensional or three-dimensional point cloud data sets from the processed images to complete map construction.
Compared with Chinese invention patent CN107215376B, in this embodiment the functional module of a drone towing a stroller is integrated into the stroller itself, and the cameras pointing in different directions on the autonomous baby-strolling robot cooperate with the visual positioning and mapping module to complete positioning and navigation, so that the mobile base of the autonomous baby-strolling robot can track the guardian without obstruction according to the image information acquired by the cameras. This reduces the cost for parents purchasing a stroller.
As shown in fig. 1, the autonomous baby-strolling robot further comprises driving wheels 103 and a universal wheel 106 arranged below the child seat 104. The driving wheels 103 are arranged on the left and right sides of the mobile base 107 with a widened track between them, which helps prevent overturning and increases the safety factor and stability of the vehicle body; the widened rear-wheel track is up to 55 cm, and the anti-rollover angle is up to 15 degrees. The mobile base 107 contains a driving motor used to drive the driving wheels 103 so that the autonomous baby-strolling robot moves autonomously along a preset path. The universal wheel 106 is mounted at the front end of the mobile base 107 through a rotatable bracket (not shown in the figure) and supports free steering of the robot; the 360-degree universal wheel 106 helps the robot pass over low obstacles easily, makes handling flexible, and is suitable for many road surfaces.
As shown in fig. 1, two protection belts (not shown) lead out from the same position on the vertical connecting rod 101 and are cross-latched on the two sides of the child seat 104 to hold the child in the seat. This ensures the child's safety while the autonomous baby-strolling robot is moving, and in particular prevents the child riding on the child seat 104 from falling off when the driving wheels 103 brake.
This embodiment also provides an automatic baby-strolling method based on the autonomous baby-strolling robot. As shown in fig. 2, the method includes:
Step S1, pre-storing an image of the guardian and an image of the child, where the image information includes the guardian's face image information and the child's body posture image information.
Step S2, calling the second camera 1011 to acquire an activity point cloud data set of the child on the child seat and an activity point cloud data set of the guardian, both collected within the elevation coverage of the second camera 1011, and mapping them onto the map constructed by the visual positioning and mapping module integrated in the mobile base 107, so as to establish an independent landmark based on the child and a child safety monitoring area. In this embodiment the activity point cloud data sets are formed by encoding the image pixels acquired by the second camera 1011, which makes it convenient for the autonomous baby-strolling robot to track the guardian's pose. The automatic baby-strolling method keeps the distance between the mobile base and the guardian's mobile device in the constructed map no greater than a preset monitoring distance; the child safety monitoring area covers the region centered on the mobile base with the preset monitoring distance as its radius, which facilitates real-time tracking of the guardian by the mobile base. In addition, the child safety monitoring area lies within the transmit-and-receive range allowed by the communication signal between the mobile base and the guardian's mobile device, and the guardian's mobile device may be a wireless remote control of the autonomous baby-strolling robot, so real-time communication is possible.
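As an illustration only, the check that the guardian's reported position stays inside the child safety monitoring area can be sketched as below; the coordinate convention, the radius value and the function names are assumptions, since the patent does not give a concrete monitoring distance.

```python
import math

PRESET_MONITORING_DISTANCE_M = 5.0  # assumed value; the patent does not specify it

def guardian_in_monitoring_area(base_xy, guardian_xy,
                                radius=PRESET_MONITORING_DISTANCE_M):
    """True if the guardian's map position lies inside the circular child
    safety monitoring area centered on the mobile base."""
    dx = guardian_xy[0] - base_xy[0]
    dy = guardian_xy[1] - base_xy[1]
    return math.hypot(dx, dy) <= radius

# Example: guardian 3.2 m ahead and 1.5 m to the side of the base.
print(guardian_in_monitoring_area((0.0, 0.0), (3.2, 1.5)))  # True
```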
Step S3, calling the first camera 1051 to collect an obstacle point cloud data set around the advancing direction of the mobile base 107, and establishing landmarks for the obstacles in the map constructed by the visual positioning and mapping module integrated in the mobile base 107; these obstacle landmarks may be geometric features, such as corner points used for localization, extracted from the obstacle point cloud data set by the visual positioning and mapping module.
It should be noted that the child-based independent landmarks, the obstacle landmarks and the child safety monitoring area only form a sparse map, which would be enough if it were used for positioning alone. However, the guardian also needs to be tracked in real time during automatic strolling, and the autonomous baby-strolling robot needs to move along a reliable navigation path, so that the child's safety is ensured and the guardian is reassured. Therefore, the environment images within a 360-degree range captured by the second camera 1011 and the first camera 1051 are encoded into point cloud data sets and then mapped into the grid cells constructed by the visual positioning and mapping module. Each grid cell has one of three states (occupied, free or unknown) expressing whether an object exists in that cell, so that the visual positioning and mapping module builds a dense map based on the grid.
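A minimal sketch of such a three-state occupancy grid is shown below, purely for illustration; the grid size, resolution and update rule are assumptions and are not taken from the patent.

```python
from enum import IntEnum
import numpy as np

class Cell(IntEnum):
    UNKNOWN = 0   # no observation yet
    FREE = 1      # observed and traversable
    OCCUPIED = 2  # an object was observed here

class OccupancyGrid:
    """Dense grid map with the three cell states used in this embodiment."""

    def __init__(self, size=200, resolution=0.05):
        self.resolution = resolution  # meters per cell (assumed value)
        self.cells = np.full((size, size), Cell.UNKNOWN, dtype=np.uint8)

    def _to_index(self, x, y):
        return int(round(x / self.resolution)), int(round(y / self.resolution))

    def mark(self, points_xy, state):
        """Mark every (x, y) point of a cloud with the given cell state."""
        for x, y in points_xy:
            i, j = self._to_index(x, y)
            if 0 <= i < self.cells.shape[0] and 0 <= j < self.cells.shape[1]:
                self.cells[i, j] = state

# Example: two obstacle points become occupied cells.
grid = OccupancyGrid()
grid.mark([(0.30, 0.10), (0.35, 0.10)], Cell.OCCUPIED)
```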
Step S4, constructing a grid map environment between the autonomous baby-strolling robot and the guardian from the obstacle landmarks and the child-based safety monitoring area. When a spatial position of the guardian is searched, the map tells whether that position can be passed, so a path that tracks the guardian while keeping away from obstacles can be searched in real time; the mobile base 107 is then controlled to move along the path searched in real time, and the point cloud data sets inserted in real time are used to correct the path. It is worth noting that the subsequently inserted point cloud data sets can be used to judge the similarity between images and complete loop closure detection to correct path deviation. When loop closure detection uses an image-similarity algorithm between two similar images, the problem of position estimates drifting over time is addressed: the landmarks at the current position are corrected and the position estimate is pulled back to the actual position, so the accumulated error can be significantly reduced. Since the image information within the 360-degree range captured by the second camera 1011 and the first camera 1051 is very rich, the difficulty of loop closure detection in this embodiment is reduced.
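To illustrate the passability query and path search described in step S4, a minimal grid search sketch is given below; it uses a plain breadth-first search over boolean passability flags, which is an assumed stand-in for whatever planner the embodiment actually employs.

```python
from collections import deque

def search_path(passable, start, goal):
    """Breadth-first search over a grid of booleans (True = cell can be
    traversed). start and goal are (row, col) tuples; returns the list of
    cells from start to goal, or None if the goal cannot be reached."""
    rows, cols = len(passable), len(passable[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and passable[nr][nc] \
                    and nxt not in parent:
                parent[nxt] = cell
                queue.append(nxt)
    return None

# Example: 3 x 3 grid with an occupied middle cell; the path goes around it.
grid = [[True, True, True],
        [True, False, True],
        [True, True, True]]
print(search_path(grid, (0, 0), (2, 2)))
```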
Compared with drone-based positioning in the prior art, the visual positioning and simultaneous map construction described here, in particular the dense map constructed in step S3 and the loop closure optimization performed in step S4, overcome the large drift and high design cost of prior-art drones, improve the precision of automatic baby-strolling, ensure that the autonomous baby-strolling robot tracks the guardian in real time, and make its automatic return more accurate.
It is worth noting that although the autonomous baby-strolling robot supports autonomous movement, it can also be switched between an automatic strolling mode and a manual strolling mode; in the manual strolling mode, the child's guardian can still push the robot by hand using the hand-push handle 102.
As an embodiment, according to the pre-stored image information of the child's body posture, when the position of the child's center of gravity is detected to be more than the preset safety distance away from the child seat, the current position information is located in real time and a reminder is sent to the guardian's mobile device. This embodiment calls the second camera 1011 to collect the activity point cloud data set of the child on the child seat and detects the child's activity within the preset safety distance of the child seat 104. Using this preset safety distance as a radius, the embodiment can mark in the map constructed by the visual positioning and mapping module the monitoring range of the child on the autonomous baby-strolling robot, taken as the child's safe activity area and confined to the region around the child seat 104; the preset safety distance is therefore set to 10 cm. By planning the child's activity area on the seat, the embodiment provides an initial safeguard against the child moving into a dangerous area beside the robot. Specifically, when the child is not fixed on the child seat by the protection belts and leaves the child seat beyond the child's safe activity area, the current position of the child's center of gravity, collected and detected by the second camera 1011, is located and a reminder is sent to the guardian's mobile device in real time; likewise, when the child falls off the child seat beyond the child's safe activity area, the current position of the child's center of gravity is located and a reminder is sent to the guardian's mobile device in real time.
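Purely as an illustration of this reminder trigger, the sketch below checks whether the detected center of gravity has left the 10 cm safe activity area and, if so, calls a notification stub; the coordinate convention and all function names are assumptions made for the sketch.

```python
import math

SAFE_ACTIVITY_RADIUS_M = 0.10  # the 10 cm preset safety distance of this embodiment

def notify_guardian(message):
    """Stand-in for sending a reminder to the guardian's mobile device."""
    print("reminder to guardian:", message)

def check_child_position(seat_xy, child_center_xy,
                         radius=SAFE_ACTIVITY_RADIUS_M):
    """Send a reminder when the child's detected center of gravity moves
    farther from the child seat than the preset safety distance."""
    distance = math.dist(seat_xy, child_center_xy)
    if distance > radius:
        notify_guardian(f"child left the safe activity area ({distance:.2f} m)")
        return True
    return False

# Example: center of gravity detected 18 cm from the seat center.
check_child_position((0.0, 0.0), (0.18, 0.0))
```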
As shown in fig. 3, an embodiment of the present invention provides a method for detecting a human body during autonomous baby-strolling; as an implementation of the foregoing step S4, the method specifically comprises the following steps:
s301, calling a second camera 1011 to acquire image information of the peripheral environment of the child seat 104, and then entering step S302; step S302, judging whether a human body appears in the preset safety distance of the child according to the image detection result of the step S301, if so, entering the step S303, otherwise, returning to the step S301 to continuously shoot the images around and detecting whether other human bodies exist around the child according to the shot images. And step S303, analyzing whether the face image shot by the second camera is matched with a pre-stored face image of the guardian, judging whether the face image is matched by calculating the similarity of key points of the image, if so, returning to the step S301 to continuously shoot the images around, detecting whether other human bodies exist around the child according to the shot images, and otherwise, entering the step S304. Step S304, the mobile base sends position request information to the guardian' S mobile device, and then step S305 is carried out. Step S305, according to the response position information received by the robot walking the baby autonomously, tag information is made in the constructed grid map environment between the robot walking the baby autonomously and the guardian, and then a path which tracks the guardian and is far away from the obstacle is searched.
In the present embodiment the preset safety distance is set between 10 cm and 40 cm, and to 26 cm in this example; matching means comparing the captured face image with the pre-stored face image and analyzing whether the similarity reaches 90% or above. The safety distance essentially limits the detection distance, so this embodiment only detects human bodies at close range, which saves the computing resources of the second camera 1011 and its associated processing modules and can also improve detection precision.
Specifically, the second camera 1011 installed at the middle of the vertical connecting rod 101 captures images around the child and judges from them whether another human body is present within 26 cm of the child. After another human body is detected, the captured facial image is compared with the pre-stored face image information of the guardian and the similarity between them is analyzed; if the similarity does not reach 90% or above, the autonomous baby-strolling robot continues to move toward the guardian's position along the pre-planned path, and the mobile base sends position request information to the guardian's mobile device.
It should be noted that, if no human body is detected within the preset safety distance of the child, execution of the human body detection method is stopped. Likewise, if a human body appears within the preset safety distance of the child and the face image captured by the second camera is judged to match the pre-stored face image of the guardian, execution of the method is also stopped. This improves the completeness of the detection method.
As an embodiment, fig. 4 shows a flowchart of a method for detecting the environmental conditions around a moving path; as an implementation of the foregoing step S4, the method specifically comprises the following steps:
step S401, shooting an environment image of the advancing direction of the mobile base by using the first camera, and then entering step S402.
Step S402, from the image point cloud data set obtained by encoding the environment image collected by the first camera, fitting the planes formed by the point cloud patches with a RANSAC plane-fitting method, traversing the fitted plane row by row, and calculating the difference between the fitted height of the n-th row and the fitted height of the (n-1)-th row; if the height difference is greater than a preset threshold, it is judged that the autonomous baby-strolling robot has moved to a step edge and the method proceeds to step S404, otherwise it proceeds to step S403.
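The row-by-row height-difference test of step S402 can be illustrated with the minimal sketch below; it assumes each image row has already been reduced to a fitted ground height (the RANSAC plane fitting itself is omitted), and the threshold value is an assumption rather than the patent's preset value.

```python
def find_step_edge(row_heights, height_jump_threshold=0.05):
    """Return the index of the first row whose fitted ground height differs
    from the previous row by more than the threshold (in meters), or None
    if no step edge is detected."""
    for n in range(1, len(row_heights)):
        if abs(row_heights[n] - row_heights[n - 1]) > height_jump_threshold:
            return n
    return None

# Example: a 7 cm drop between rows 3 and 4 is flagged as a step edge.
print(find_step_edge([0.00, 0.01, 0.00, 0.01, 0.08, 0.08]))  # 4
```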
Step S403, controlling the autonomous baby-strolling robot to keep moving forward, and analyzing from the acquired image information whether an obstacle is approaching the robot inside the child safety monitoring area; if so, proceeding to step S405, otherwise returning to step S401. Since the path correction in step S4 relies on the point cloud data sets inserted in real time, scenes of different scales are taken into account (step S406) while the child safety monitoring area is analyzed, which reduces scale drift and avoids uncertainty about the moving direction of an obstacle approaching the robot. The detection range is limited to the child safety monitoring area because only pedestrians or cars around the autonomous baby-strolling robot affect the child's safety; moving obstacles (pedestrians or cars) outside the child safety monitoring area need not be considered, which avoids excessive computation. It is noted that the coverage of the child safety monitoring area in this embodiment includes the child's safe activity area of the previous embodiment.
Step S404, controlling the robot to move backward; in this embodiment the robot is controlled to retreat until the image point cloud data set collected in real time by the first camera no longer shows a gradient change, because in the grayscale image corresponding to the dense map constructed by the visual positioning and mapping module the step-edge region has an obvious gradient.
Step S405, controlling the robot to avoid the obstacle; in this embodiment the robot moves in the direction opposite to its current forward direction within the child's safe activity area so as to move away from the obstacle.
Alarm information is then sent to the guardian's mobile device. Because the autonomous baby-strolling robot moves along a path that tracks the guardian while keeping away from obstacles, in this embodiment a loudspeaker installed on the robot can also emit an alarm sound of a certain decibel level to warn pedestrians while avoiding them, preventing a pedestrian looking down at a mobile phone from failing to notice the robot and knocking it over. The aforementioned guardian is understood to be an immediate relative of the child. This embodiment uses the image information of the first camera to avoid obstacles on the robot's automatic forward path, so that the robot neither collides with fixed or moving obstacles nor disturbs the child or pedestrians.
The above embodiments are merely illustrative of the technical ideas and features of the present invention, and are intended to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the scope of the present invention. All equivalent changes or modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.

Claims (7)

1. A method for detecting a human body during autonomous baby-strolling, characterized in that the method is suitable for an autonomous baby-strolling robot having a vertical connecting rod and a child seat, a second camera whose viewing angle covers the child seat and the surrounding environment being arranged at the middle of the vertical connecting rod, and the method comprises the following steps:
pre-storing images of a guardian, including face image information;
calling the second camera to acquire image information of the environment around the child seat, detecting whether a human body appears within a preset safety distance of the child, and if so, analyzing whether the face image captured by the second camera matches the pre-stored face image of the guardian;
and if not, sending position request information to the guardian's mobile device, and then, according to the response position information received by the autonomous baby-strolling robot, searching in real time for a path that tracks the guardian while keeping away from obstacles.
2. The method for detecting a human body according to claim 1, comprising:
when the position of the child's center of gravity is detected to be more than the preset safety distance away from the child seat, locating the current position information in real time and sending reminder information to the guardian's mobile device.
3. The method for detecting a human body according to claim 2, wherein the preset safety distance ranges from 10 cm to 40 cm.
4. The method for detecting a human body according to claim 2, wherein the method for determining whether the face image captured by the second camera matches the pre-stored face image of the guardian comprises: calculating whether the key-point similarity between the face image captured by the second camera and the pre-stored face image of the guardian reaches a preset threshold or above; if so, determining that the face image captured by the second camera matches the pre-stored face image of the guardian, otherwise determining that it does not match.
5. The method for detecting a human body according to claim 4, wherein the preset threshold is 90%.
6. The method for detecting a human body according to claim 1, wherein execution of the method is stopped if no human body is detected within the preset safety distance of the child.
7. The method for detecting a human body according to claim 1, wherein, when a human body is detected within the preset safety distance of the child and the face image captured by the second camera is determined to match the pre-stored face image of the guardian, execution of the method is stopped.
CN201910922687.7A 2019-09-27 2019-09-27 Method for detecting a human body during autonomous baby-strolling Active CN110674762B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910922687.7A CN110674762B (en) 2019-09-27 2019-09-27 Method for detecting a human body during autonomous baby-strolling


Publications (2)

Publication Number Publication Date
CN110674762A (en) 2020-01-10
CN110674762B CN110674762B (en) 2022-03-04

Family

ID=69079781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910922687.7A Active CN110674762B (en) 2019-09-27 2019-09-27 Method for detecting a human body during autonomous baby-strolling

Country Status (1)

Country Link
CN (1) CN110674762B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046876A (en) * 2015-07-23 2015-11-11 中山大学深圳研究院 Child safety monitoring system based on image identification
CN105243780A (en) * 2015-09-11 2016-01-13 中山大学 Child safety monitoring method and system
CN105825627A (en) * 2016-05-23 2016-08-03 蔡俊豪 Pram anti-theft method and pram anti-theft system
CN106681359A (en) * 2016-07-18 2017-05-17 歌尔股份有限公司 Method for controlling intelligent baby pram, intelligent baby pram and control system
CN107323512A (en) * 2017-06-28 2017-11-07 太仓迪米克斯节能服务有限公司 A kind of intelligent detecting method and its system based on baby carriage
CN107577203A (en) * 2017-08-03 2018-01-12 深圳君匠科技有限公司 Intelligent doll carriage control method, apparatus and system
CN207433612U (en) * 2017-11-28 2018-06-01 西南大学 Safety detection perambulator and intelligent guarding system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112859841A (en) * 2020-12-31 2021-05-28 青岛海尔科技有限公司 Route guidance method and device

Also Published As

Publication number Publication date
CN110674762B (en) 2022-03-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 2706, No. 3000, Huandao East Road, Hengqin new area, Zhuhai, Guangdong

Applicant after: Zhuhai Yiwei Semiconductor Co.,Ltd.

Address before: Room 105-514, No.6 Baohua Road, Hengqin New District, Zhuhai City, Guangdong Province

Applicant before: AMICRO SEMICONDUCTOR Co.,Ltd.

GR01 Patent grant