CN109981970A - Method, apparatus, and robot for determining a photographed scene - Google Patents

Method, apparatus, and robot for determining a photographed scene

Info

Publication number
CN109981970A
CN109981970A (application CN201711465681.9A)
Authority
CN
China
Prior art keywords
face
picture
benchmark
determining
photographed scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711465681.9A
Other languages
Chinese (zh)
Other versions
CN109981970B (en)
Inventor
熊友军
刘锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youbixuan Intelligent Robot Co ltd
Shenzhen Ubtech Technology Co ltd
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN201711465681.9A priority Critical patent/CN109981970B/en
Publication of CN109981970A publication Critical patent/CN109981970A/en
Application granted granted Critical
Publication of CN109981970B publication Critical patent/CN109981970B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to the field of robot technology and provides a method, an apparatus, and a robot for determining a photographed scene. The method comprises: starting a photographing apparatus when a photographing instruction is received; detecting whether a face is present in the picture captured by the photographing apparatus; if a face is present, selecting a face that satisfies a predetermined condition from the detected faces as a reference face; recognizing the faces in the picture other than the reference face based on the reference face, to determine the number of faces in the picture; and determining the current photographed scene based on the number of faces. By selecting a reference face and recognizing the other faces in the captured picture, the invention determines the number of faces in the picture and, from that number, the current photographed scene, thereby realizing automatic determination of the photographed scene. The embodiments of the present invention are simple to operate, inexpensive to implement, and highly usable and practical.

Description

Method, apparatus, and robot for determining a photographed scene
Technical field
The present invention belongs to the field of robot technology, and in particular relates to a method, an apparatus, and a robot for determining a photographed scene.
Background technique
Intelligent hardware products are ubiquitous in people's daily lives, especially intelligent hardware products with voice or assistant functions, such as robots, which are mainly used in scenarios associated with music playback and smart-home control. However, most existing robots are in a passive state of use: they have no scene-recognition function and cannot automatically identify the actual scene the user is in.
Therefore, a solution is needed to solve the above problem.
Summary of the invention
In view of this, embodiments of the present invention provide a method, an apparatus, and a robot for determining a photographed scene, so as to solve the problem that existing robots cannot automatically identify the application scenario.
A first aspect of the embodiments of the present invention provides a method for determining a photographed scene, comprising:
starting a photographing apparatus when a photographing instruction is received;
detecting whether a face is present in the picture captured by the photographing apparatus;
if a face is present, selecting a face that satisfies a predetermined condition from the detected faces as a reference face;
recognizing the faces in the picture other than the reference face based on the reference face, to determine the number of faces in the picture; and
determining the current photographed scene based on the number of faces.
Optionally, before starting the photographing apparatus, the method further comprises:
determining a sound source position;
correspondingly, after starting the photographing apparatus, the method further comprises:
capturing, by the photographing apparatus, the picture at the sound source position.
Optionally, selecting a face that satisfies a predetermined condition from the detected faces as the reference face comprises:
selecting, from the detected faces, the face with the highest clarity that is closest to the robot as the reference face.
Optionally, recognizing the faces in the picture other than the reference face based on the reference face, to determine the number of faces in the picture, comprises:
taking the shooting angle of the reference face as a reference angle, and detecting the faces in the picture while offsetting the angle by a predetermined amount to the left and to the right, to determine the number of faces in the picture.
Optionally, after determining the number of faces in the picture, the method further comprises:
collecting the voice information in the picture;
correspondingly, determining the current photographed scene based on the number of faces comprises:
determining the current photographed scene based on the number of faces and the collected voice information.
A second aspect of the embodiments of the present invention provides an apparatus for determining a photographed scene, comprising:
a starting module, configured to start a photographing apparatus when a photographing instruction is received;
a detection module, configured to detect whether a face is present in the picture captured by the photographing apparatus;
a selecting module, configured to, if a face is present, select a face that satisfies a predetermined condition from the detected faces as a reference face;
a recognition module, configured to recognize the faces in the picture other than the reference face based on the reference face, to determine the number of faces in the picture; and
a first determining module, configured to determine the current photographed scene based on the number of faces.
Optionally, the apparatus for determining a photographed scene further comprises:
a second determining module, configured to determine a sound source position; and
a photographing module, configured to capture, by the photographing apparatus, the picture at the sound source position.
Optionally, the selecting module comprises:
a selecting unit, configured to select, from the detected faces, the face with the highest clarity that is closest to the robot as the reference face.
Optionally, the first determining module comprises:
a first determining unit, configured to take the shooting angle of the reference face as a reference angle and detect the faces in the picture while offsetting the angle by a predetermined amount to the left and to the right, to determine the number of faces in the picture.
Optionally, the apparatus for determining a photographed scene further comprises:
a collecting module, configured to collect the voice information in the picture;
correspondingly, the first determining module comprises:
a second determining unit, configured to determine the current photographed scene based on the number of faces and the collected voice information.
A third aspect of the embodiments of the present invention provides a robot, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of the first aspect.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method of the first aspect.
In the embodiments of the present invention, the photographing apparatus on the robot is started when a photographing instruction is received; whether a face is present in the picture captured by the photographing apparatus is detected; if a face is present, a face that satisfies a predetermined condition is selected from the detected faces as a reference face; the faces in the captured picture other than the reference face are recognized to determine the number of faces in the picture; and the current photographed scene is determined based on the number of faces, thereby realizing the function of automatically determining the photographed scene. The embodiments of the present invention are simple to operate, inexpensive to implement, and highly usable and practical.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of the method for determining a photographed scene provided by Embodiment 1 of the present invention;
Fig. 2 is a schematic flowchart of the method for determining a photographed scene provided by Embodiment 2 of the present invention;
Fig. 3 is a structural block diagram of the apparatus for determining a photographed scene provided by Embodiment 3 of the present invention;
Fig. 4 is a schematic diagram of the robot provided by Embodiment 4 of the present invention.
Specific embodiment
In the following description, specific details such as particular system structures and technologies are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present invention. However, it will be apparent to those skilled in the art that the present invention may also be implemented in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, apparatuses, circuits, and methods are omitted so that unnecessary detail does not obscure the description of the present invention.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or sets thereof.
It should also be understood that the terms used in this specification are for the purpose of describing particular embodiments only and are not intended to limit the present invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to and encompasses any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
In order to illustrate the technical solutions of the present invention, specific embodiments are described below.
Embodiment one
Fig. 1 shows a schematic flowchart of the method for determining a photographed scene provided by Embodiment 1 of the present invention. As shown in Fig. 1, the method may be applied to a robot comprising a photographing apparatus, and may specifically include the following steps:
Step 101: when a photographing instruction is received, start the photographing apparatus.
When the robot receives a photographing instruction, it starts the photographing apparatus under control. The photographing apparatus may be a camera or video camera on the robot. The user may send the photographing instruction to the robot remotely through a remote control, by pressing a switch button on the robot, or by issuing a voice command to the robot.
Step 102: detect whether a face is present in the picture captured by the photographing apparatus.
The picture may be a frame in a short video or a picture in a photo. For a short video, the video is parsed into images, the pictures in the images are examined, and whether a face is present is judged; for a photo, whether a face is present in the photo is detected directly.
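The detection step above can be sketched as follows. This is a minimal illustration only: the face detector itself is a placeholder (a real robot would use an actual detector such as a trained cascade or neural network), and the `Frame` type and `toy_detect` stand-in are assumptions of this sketch, not part of the patent.

```python
from typing import Callable, List

Frame = List[List[int]]                 # stand-in for image pixel data
FaceDetector = Callable[[Frame], int]   # returns the number of faces in a frame

def picture_has_face(frames: List[Frame], detect: FaceDetector) -> bool:
    """Return True if any frame of the input contains at least one face.

    For a photo, `frames` holds a single picture; for a short video,
    it holds the pictures parsed out of the video, checked one by one.
    """
    return any(detect(frame) > 0 for frame in frames)

# A toy detector for illustration: a frame "contains a face" here if it
# holds any nonzero pixel. Real systems would plug in a real detector.
toy_detect: FaceDetector = lambda frame: sum(1 for row in frame for px in row if px)

photo = [[[0, 0], [0, 1]]]          # single-frame "photo" with one hit
short_video = [[[0, 0]], [[0, 0]]]  # two empty frames parsed from a "video"
```

With this stub, `picture_has_face(photo, toy_detect)` reports a face in the photo, while the two empty video frames report none.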
Step 103: if a face is present, select a face that satisfies a predetermined condition from the detected faces as a reference face.
It should be noted that robots can generally detect faces. If no face is present, the robot sends a prompt message to the user, so that the user can troubleshoot the robot according to the message, confirm whether the robot's system has failed or there is a problem with the robot's shooting angle, and make the corresponding adjustment.
Optionally, step 103 specifically comprises:
selecting, from the detected faces, the face with the highest clarity that is closest to the robot as the reference face.
The clarity of every face in the picture and the distance of every face from the robot's photographing apparatus are calculated, and the face with the highest clarity that is closest to the robot is taken as the reference face. In general, the face closest to the robot's photographing apparatus has the highest clarity. Illustratively, a face whose number of facial feature points exceeds a predetermined value and whose face frame is the largest may be taken as the face with the highest clarity that is closest to the robot.
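The illustrative selection rule above can be sketched as a small function. The threshold value and the field names are assumptions of this sketch; the patent only says "feature points above a predetermined value, largest face frame".

```python
from dataclasses import dataclass

MIN_FEATURE_POINTS = 68  # hypothetical "predetermined value" for clarity

@dataclass
class DetectedFace:
    feature_points: int  # facial feature points found by the detector
    frame_area: int      # area of the face frame, a proxy for closeness

def select_reference_face(faces):
    """Pick the reference face: among faces whose feature-point count
    exceeds the predetermined value, take the one with the largest face
    frame (the closest face is generally the sharpest and the largest).
    Returns None when no face qualifies."""
    clear = [f for f in faces if f.feature_points > MIN_FEATURE_POINTS]
    return max(clear, key=lambda f: f.frame_area, default=None)

faces = [DetectedFace(70, 1200), DetectedFace(80, 4000), DetectedFace(30, 9000)]
```

Note the design choice: the blurry face (30 feature points) is excluded even though its frame is the largest, so the 4000-pixel face wins.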
Step 104: recognize the faces in the picture other than the reference face based on the reference face, to determine the number of faces in the picture.
Optionally, recognizing the faces in the picture other than the reference face based on the reference face, to determine the number of faces in the picture, comprises:
taking the shooting angle of the reference face as a reference angle, and detecting the faces in the picture while offsetting the angle by a predetermined amount to the left and to the right, to determine the number of faces in the picture.
Illustratively, with the shooting angle of the reference face taken as the reference angle, the faces in the picture are detected at offsets of 45 degrees to the left and to the right, yielding the number of faces other than the reference face; adding 1 for the reference face itself gives the number of faces in the picture.
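The counting rule above can be sketched as follows, under the assumption (not stated in the patent) that `detect_at(angle)` returns the number of faces other than the reference face visible at a given shooting angle:

```python
def count_faces(reference_angle, detect_at, offset=45.0):
    """Count the faces in the picture: detect at the reference angle and
    at `offset` degrees to its left and right, sum the faces found other
    than the reference face, then add 1 for the reference face itself."""
    angles = (reference_angle - offset, reference_angle, reference_angle + offset)
    others = sum(detect_at(a) for a in angles)
    return others + 1

# Stubbed detections per angle (faces other than the reference face):
detections = {-45.0: 1, 0.0: 2, 45.0: 1}
detect_stub = lambda angle: detections.get(angle, 0)
```

With the stub, `count_faces(0.0, detect_stub)` yields 5: four other faces across the three angles, plus the reference face.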
Step 105: determine the current photographed scene based on the number of faces.
Illustratively, the relationship between the number of faces and the photographed scene is shown in Table 1:

Number of faces | Photographed scene
1 person        | Working alone at a desk or computer desk
2–3 people      | Home scene; a date
4–5 people      | Dinner party; card game; team gaming session
6–8 people      | Family party; dinner gathering
9 or more       | KTV; large party

Table 1
Based on the number of faces determined in the picture and the preset relationship between the number of faces and the photographed scene, the current photographed scene is determined.
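The preset relationship of Table 1 can be encoded directly as a lookup. The scene strings below are this sketch's English renderings of the table rows, not normative labels:

```python
SCENE_BY_FACE_COUNT = [  # (upper bound of the row, scene), following Table 1
    (1, "working alone at a desk or computer desk"),
    (3, "home scene or date"),
    (5, "dinner party, card game, or team gaming session"),
    (8, "family party or dinner gathering"),
]
LARGE_SCENE = "KTV or large party"  # 9 people or more

def scene_for(face_count):
    """Map a face count to a photographed scene per the preset relationship."""
    for upper, scene in SCENE_BY_FACE_COUNT:
        if face_count <= upper:
            return scene
    return LARGE_SCENE
```

For example, a picture with 4 faces maps to the dinner-party row, and 12 faces to the large-party fallback.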
In the embodiments of the present invention, the photographing apparatus on the robot is started when a photographing instruction is received; whether a face is present in the picture captured by the photographing apparatus is detected; if a face is present, a face that satisfies a predetermined condition is selected from the detected faces as a reference face; the faces in the captured picture other than the reference face are recognized to determine the number of faces in the picture; and the current photographed scene is determined based on the number of faces, thereby realizing the function of automatically determining the photographed scene. The embodiments of the present invention are simple to operate, inexpensive to implement, and highly usable and practical.
Embodiment two
Fig. 2 shows a schematic flowchart of the method for determining a photographed scene provided by Embodiment 2 of the present invention. As shown in Fig. 2, the method is applied to a robot comprising a photographing apparatus and specifically comprises the following steps 201 to 208.
Step 201: determine the sound source position.
Optionally, the sound source position in the current scene is determined by a sound sensor and a position sensor.
Step 202: when a photographing instruction is received, start the photographing apparatus.
Step 203: capture, by the photographing apparatus, the picture at the sound source position.
Since the sound source position has been determined in step 201, the robot captures the picture at the sound source position by the photographing apparatus.
Step 204: detect whether a face is present in the picture captured by the photographing apparatus.
Step 205: if a face is present, select a face that satisfies a predetermined condition from the detected faces as the reference face.
Step 206: recognize the faces in the picture other than the reference face based on the reference face, to determine the number of faces in the picture.
Step 207: collect the voice information in the picture.
The voice information in the picture is collected and its overall state is judged; the overall state may be quiet, orderly speech, disorderly speech, and so on.
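The patent does not specify how the overall state is judged; the following is a toy heuristic under the assumption that the collected voice information has been segmented into (start, end, speaker) tuples: no speech is "quiet", overlapping speech from different speakers is "disorderly speech", and anything else is "orderly speech".

```python
def overall_voice_state(segments):
    """Classify collected voice information into an overall state.

    `segments` is a list of (start, end, speaker) tuples in seconds.
    This heuristic is an assumption of the sketch, not the patent's rule.
    """
    if not segments:
        return "quiet"
    ordered = sorted(segments)
    for (s1, e1, sp1), (s2, e2, sp2) in zip(ordered, ordered[1:]):
        if s2 < e1 and sp1 != sp2:  # two speakers talking over each other
            return "disorderly speech"
    return "orderly speech"
```

For example, two speakers taking turns yield "orderly speech", while overlapping utterances yield "disorderly speech".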
Step 208: determine the current photographed scene based on the number of faces and the collected voice information.
Illustratively, the relationship between the number of faces, the collected voice information, and the photographed scene is shown in Table 2:
Number of faces | Voice information  | Photographed scene
1 person        | Self-portrait      | Working alone at a desk or computer desk
2–3 people      | Orderly speech     | Home scene; a date
4–5 people      | Disorderly speech  | Dinner party; card game; team gaming session
6–8 people      | Disorderly speech  | Family party; dinner gathering
9 or more       | Disorderly speech  | KTV; large party

Table 2
A comprehensive analysis is performed based on the number of faces and the collected voice information, and the current photographed scene is determined according to the preset relationship between the number of faces, the collected voice information, and the photographed scene.
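The combined lookup of Table 2 can be sketched as below. Returning `None` when the face count and voice state match no preset row is a design choice of this sketch; the patent only gives the matching rows, and the scene strings are this sketch's English renderings:

```python
TABLE_2 = [  # (upper bound of the row, expected voice state, scene)
    (1, "self-portrait", "working alone at a desk or computer desk"),
    (3, "orderly speech", "home scene or date"),
    (5, "disorderly speech", "dinner party, card game, or team gaming session"),
    (8, "disorderly speech", "family party or dinner gathering"),
]

def scene_from(face_count, voice_state):
    """Comprehensive analysis of the face count and the collected voice
    state, following Table 2; 9 or more faces map to a KTV or large party.
    Returns None when the pair matches no preset row."""
    if face_count >= 9:
        return "KTV or large party" if voice_state == "disorderly speech" else None
    for upper, expected_voice, scene in TABLE_2:
        if face_count <= upper:
            return scene if voice_state == expected_voice else None
    return None
```

For example, 2 faces with orderly speech resolve to the home scene, while 2 faces in a quiet room match no row, which is how the voice information refines the count-only decision.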
The implementation of steps 202, 204, 205, and 206 is similar to that of steps 101, 102, 103, and 104, respectively, and is not repeated here.
On the basis of Embodiment 1, this embodiment of the present invention adds the collection of voice information in the captured picture and determines the current photographed scene based on the number of faces and the collected voice information, thereby improving the accuracy of recognition.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
Embodiment three
Referring to Fig. 3, which shows a structural block diagram of the apparatus for determining a photographed scene provided by Embodiment 3 of the present invention. The apparatus 30 for determining a photographed scene comprises: a starting module 31, a detection module 32, a selecting module 33, a recognition module 34, and a first determining module 35. The specific function of each module is as follows:
the starting module 31 is configured to start the photographing apparatus when a photographing instruction is received;
the detection module 32 is configured to detect whether a face is present in the picture captured by the photographing apparatus;
the selecting module 33 is configured to, if a face is present, select a face that satisfies a predetermined condition from the detected faces as a reference face;
the recognition module 34 is configured to recognize the faces in the picture other than the reference face based on the reference face, to determine the number of faces in the picture; and
the first determining module 35 is configured to determine the current photographed scene based on the number of faces.
Optionally, the apparatus 30 for determining a photographed scene further comprises:
a second determining module, configured to determine a sound source position; and
a photographing module, configured to capture, by the photographing apparatus, the picture at the sound source position.
Optionally, the selecting module 33 comprises:
a selecting unit, configured to select, from the detected faces, the face with the highest clarity that is closest to the robot as the reference face.
Optionally, the first determining module 35 comprises:
a first determining unit, configured to take the shooting angle of the reference face as a reference angle and detect the faces in the picture while offsetting the angle by a predetermined amount to the left and to the right, to determine the number of faces in the picture.
Optionally, the apparatus 30 for determining a photographed scene further comprises:
a collecting module, configured to collect the voice information in the picture;
correspondingly, the first determining module 35 comprises:
a second determining unit, configured to determine the current photographed scene based on the number of faces and the collected voice information.
In the embodiments of the present invention, the photographing apparatus on the robot is started when a photographing instruction is received; whether a face is present in the picture captured by the photographing apparatus is detected; if a face is present, a face that satisfies a predetermined condition is selected from the detected faces as a reference face; the faces in the captured picture other than the reference face are recognized to determine the number of faces in the picture; and the current photographed scene is determined based on the number of faces, thereby realizing the function of automatically determining the photographed scene. The embodiments of the present invention are simple to operate, inexpensive to implement, and highly usable and practical.
Example IV
Fig. 4 is a schematic diagram of the robot provided by Embodiment 4 of the present invention. As shown in Fig. 4, the robot 4 of this embodiment comprises: a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and executable on the processor 40, for example a program of the method for determining a photographed scene. When executing the computer program 42, the processor 40 implements the steps in each of the above embodiments of the method for determining a photographed scene, such as steps 101 to 105 shown in Fig. 1; alternatively, when executing the computer program 42, the processor 40 implements the functions of the modules in each of the above apparatus embodiments, such as the functions of modules 31 to 35 shown in Fig. 3.
Illustratively, the computer program 42 may be divided into one or more modules/units, which are stored in the memory 41 and executed by the processor 40 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program 42 in the robot 4. For example, the computer program 42 may be divided into a first judging module and a sending module, whose specific functions are as follows:
the first judging module is configured to judge, when voice information input through the voice input device is heard, whether the length of the voice information is greater than a preset length; and
the sending module is configured to, when the length is greater than the preset length, send the voice information to a server, so that the server parses the voice information to obtain a parsing result and generates a corresponding operation instruction according to the parsing result; the parsing result and the operation instruction are sent to a mobile terminal paired with the robot to complete the corresponding operation.
The robot 4 may be a computing device such as a desktop computer, a notebook, or a palmtop computer. The robot may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will understand that Fig. 4 is only an example of the robot and does not constitute a limitation on the robot, which may include more or fewer components than shown, combine certain components, or use different components; for example, the robot may also include input/output devices, network access devices, buses, and the like.
The processor 40 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 41 may be an internal storage unit of the robot 4, such as a hard disk or memory of the robot 4. The memory 41 may also be an external storage device of the robot 4, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the robot 4. Further, the memory 41 may include both the internal storage unit and the external storage device of the robot 4. The memory 41 is used to store the computer program and other programs and data needed by the robot, and may also be used to temporarily store data that has been output or is about to be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division of the above functional units and modules is merely illustrative. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of this application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, each embodiment is described with its own emphasis. For parts that are not detailed or described in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative; the division of the modules or units is only a logical functional division, and there may be other division manners in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the processes in the methods of the above embodiments by instructing relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program may implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or equivalently replace some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be included within the protection scope of the present invention.

Claims (10)

1. A method for determining a photographed scene, applied to a robot comprising a photographing apparatus, the method comprising:
when a photographing instruction is received, starting the photographing apparatus;
detecting whether a face is present in a picture captured by the photographing apparatus;
if a face is present, selecting, from the faces, a face that meets a predetermined condition as a benchmark face;
recognizing, based on the benchmark face, the faces in the picture other than the benchmark face, to determine the number of faces in the picture;
determining a current photographed scene based on the number of faces.
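The flow of claim 1 can be illustrated with a minimal, self-contained sketch. All helper functions, the face attributes, and the scene labels below are hypothetical stand-ins for illustration, not the patented implementation:

```python
# Illustrative sketch of claim 1: pick a benchmark face, count the faces
# in the picture, then map the face count to a scene label.

def select_benchmark_face(faces):
    """Pick the face meeting the predetermined condition: here, highest
    clarity, with smaller distance to the robot as a tie-breaker (cf. claim 3)."""
    return max(faces, key=lambda f: (f["clarity"], -f["distance"]))

def count_faces(faces):
    """Recognize the faces other than the benchmark face based on it,
    and return the total number of faces in the picture."""
    benchmark = select_benchmark_face(faces)
    others = [f for f in faces if f is not benchmark]
    return 1 + len(others)

def determine_scene(face_count):
    """Map the face count to a photographed-scene label (example rules)."""
    if face_count == 0:
        return "no_person"
    if face_count == 1:
        return "portrait"
    return "group"

faces = [
    {"clarity": 0.9, "distance": 1.2},
    {"clarity": 0.7, "distance": 2.5},
    {"clarity": 0.8, "distance": 0.8},
]
print(determine_scene(count_faces(faces)))  # "group"
```

The benchmark-face selection matters because a real detector would use the benchmark face's scale and position to anchor the search for the remaining faces; here that step is reduced to a simple count.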
2. The method for determining a photographed scene according to claim 1, wherein before starting the photographing apparatus, the method further comprises:
determining a sound source position;
and correspondingly, after starting the photographing apparatus, the method further comprises:
capturing, by the photographing apparatus, the picture at the sound source position.
3. The method for determining a photographed scene according to claim 1, wherein selecting, from the faces, a face that meets a predetermined condition as a benchmark face comprises:
selecting, from the faces, the face that has the highest clarity and is nearest to the robot as the benchmark face.
4. The method for determining a photographed scene according to claim 1, wherein recognizing, based on the benchmark face, the faces in the picture other than the benchmark face, to determine the number of faces in the picture, comprises:
taking the shooting angle of the benchmark face as a benchmark angle, and detecting the faces in the picture while offsetting by a predetermined angle to the left and to the right respectively, to determine the number of faces in the picture.
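The left/right angular sweep of claim 4 can be sketched as follows. The angle values, field-of-view width, and the modeling of a face by a single angle are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch of claim 4: take the benchmark face's shooting angle
# as the benchmark angle, then detect faces after offsetting a predetermined
# angle to the left and to the right, counting each face at most once.

def count_faces_by_sweep(benchmark_angle, face_angles,
                         predetermined_offset=30.0, half_fov=20.0):
    """Detect faces (modeled here only by their angle in degrees) at the
    benchmark angle and at the left/right offset angles."""
    counted = set()
    for center in (benchmark_angle,
                   benchmark_angle - predetermined_offset,
                   benchmark_angle + predetermined_offset):
        for angle in face_angles:
            # A face is "detected" when it falls within the view centered
            # on the current sweep angle.
            if abs(angle - center) <= half_fov:
                counted.add(angle)
    return len(counted)

# Benchmark face at 0 degrees; two more faces at -25 and +28 degrees.
print(count_faces_by_sweep(0.0, [0.0, -25.0, 28.0]))  # 3
```

The point of the sweep is that faces outside the benchmark face's immediate view still get counted once the camera's detection window is shifted by the predetermined angle.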
5. The method for determining a photographed scene according to claim 1, wherein after determining the number of faces in the picture, the method further comprises:
collecting voice information in the picture;
and correspondingly, determining a current photographed scene based on the number of faces comprises:
determining the current photographed scene based on the number of faces and the collected voice information.
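Fusing the face count with collected voice information, as claim 5 describes, might look like the following sketch. The fusion rule and the scene labels are invented for illustration and are not specified by the patent:

```python
# Illustrative sketch of claim 5: refine the scene decision by combining
# the detected face count with voice information from the scene.

def determine_scene_with_voice(face_count, num_speakers):
    """Example fusion rule: distinct voices can reveal people the camera
    missed, so take the larger of the two counts before classifying."""
    effective_count = max(face_count, num_speakers)
    if effective_count == 0:
        return "no_person"
    if effective_count == 1:
        return "portrait"
    if effective_count <= 4:
        return "small_group"
    return "crowd"

# Two faces detected, but three distinct speakers heard.
print(determine_scene_with_voice(2, 3))  # "small_group"
```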
6. An apparatus for determining a photographed scene, applied to a robot comprising a photographing apparatus, the apparatus for determining a photographed scene comprising:
a starting module, configured to start the photographing apparatus when a photographing instruction is received;
a detection module, configured to detect whether a face is present in a picture captured by the photographing apparatus;
a selecting module, configured to, if a face is present, select, from the faces, a face that meets a predetermined condition as a benchmark face;
a recognition module, configured to recognize, based on the benchmark face, the faces in the picture other than the benchmark face, to determine the number of faces in the picture;
a first determining module, configured to determine a current photographed scene based on the number of faces.
7. The apparatus for determining a photographed scene according to claim 6, further comprising:
a second determining module, configured to determine a sound source position;
a shooting module, configured to capture, by the photographing apparatus, the picture at the sound source position.
8. The apparatus for determining a photographed scene according to claim 6, wherein the selecting module comprises:
a selecting unit, configured to select, from the faces, the face that has the highest clarity and is nearest to the robot as the benchmark face.
9. A robot, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 5.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
CN201711465681.9A 2017-12-28 2017-12-28 Method and device for determining shooting scene and robot Active CN109981970B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711465681.9A CN109981970B (en) 2017-12-28 2017-12-28 Method and device for determining shooting scene and robot

Publications (2)

Publication Number Publication Date
CN109981970A true CN109981970A (en) 2019-07-05
CN109981970B CN109981970B (en) 2021-07-27

Family

ID=67075260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711465681.9A Active CN109981970B (en) 2017-12-28 2017-12-28 Method and device for determining shooting scene and robot

Country Status (1)

Country Link
CN (1) CN109981970B (en)


Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1302056A (en) * 1999-12-28 2001-07-04 索尼公司 Information processing equipment, information processing method and storage medium
CN1416538A (en) * 2001-01-12 2003-05-07 皇家菲利浦电子有限公司 Method and appts. for determining camera movement control criteria
CN102047652A (en) * 2009-03-31 2011-05-04 松下电器产业株式会社 Image capturing device, integrated circuit, image capturing method, program, and recording medium
CN103051838A (en) * 2012-12-25 2013-04-17 广东欧珀移动通信有限公司 Shoot control method and device
CN103534755A (en) * 2012-04-20 2014-01-22 松下电器产业株式会社 Speech processor, speech processing method, program and integrated circuit
CN103945105A (en) * 2013-01-23 2014-07-23 北京三星通信技术研究有限公司 Intelligent photographing and picture sharing method and device
US20150092078A1 (en) * 2006-08-11 2015-04-02 Fotonation Limited Face tracking for controlling imaging parameters
CN105373784A (en) * 2015-11-30 2016-03-02 北京光年无限科技有限公司 Intelligent robot data processing method, intelligent robot data processing device and intelligent robot system
CN105530422A (en) * 2014-09-30 2016-04-27 联想(北京)有限公司 Electronic equipment, control method thereof, and control device
CN105915782A (en) * 2016-03-29 2016-08-31 维沃移动通信有限公司 Picture obtaining method based on face identification, and mobile terminal
CN105957521A (en) * 2016-02-29 2016-09-21 青岛克路德机器人有限公司 Voice and image composite interaction execution method and system for robot
US20170011258A1 (en) * 2010-06-07 2017-01-12 Affectiva, Inc. Image analysis in support of robotic manipulation
CN107229625A (en) * 2016-03-23 2017-10-03 北京搜狗科技发展有限公司 Shooting processing method and apparatus, and device for shooting processing
CN107297745A (en) * 2017-06-28 2017-10-27 上海木爷机器人技术有限公司 Voice interaction method, voice interaction apparatus and robot
CN107393527A (en) * 2017-07-17 2017-11-24 广东讯飞启明科技发展有限公司 Method for determining the number of speakers


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111199197A (en) * 2019-12-26 2020-05-26 深圳市优必选科技股份有限公司 Image extraction method and processing equipment for face recognition
CN111199197B (en) * 2019-12-26 2024-01-02 深圳市优必选科技股份有限公司 Image extraction method and processing equipment for face recognition

Also Published As

Publication number Publication date
CN109981970B (en) 2021-07-27

Similar Documents

Publication Publication Date Title
CN109032039B Voice control method and device
CN109166156A Camera calibration image generation method, mobile terminal and storage medium
CN109101931A Scene recognition method, scene recognition device and terminal device
CN110010125A Control method and device for an intelligent robot, terminal device and medium
CN103365338B Electronic device control method and electronic device
CN107193598A Application startup method, mobile terminal and computer-readable recording medium
CN104933419A Method and device for obtaining iris images, and iris identification equipment
CN109981964A Robot-based shooting method, shooting device and robot
CN108924430A Photo storage method, device, terminal and computer-readable storage medium
CN109413361A Control method and system for a terminal operating state, and terminal device
CN103984931B Information processing method and first electronic device
CN109492590A Distance detection method, distance detection device and terminal device
CN109388238A Control method and device for an electronic device
CN108573218A Face data acquisition method and terminal device
CN108600559A Silent mode control method, device, storage medium and electronic device
CN108491177A Space evaluation method and device
CN109981970A Method and device for determining a photographed scene, and robot
CN106603817A Incoming call processing method and device, and electronic device
CN108449548A Shooting method, device, shooting equipment and computer-readable storage medium
CN109215783B Data-processing-based cerebral hemorrhage qualification authentication method, device and server
CN110769396B Method, system and terminal device for connecting a robot to a network
CN109087081B Conditional electronic red envelope processing method and device
CN108874521A Photographing program application method, device, terminal and computer storage medium
CN110427801A Smart home control method and device, electronic device and non-transitory storage medium
CN105808663B Image classification method and device, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 518000 16th and 22nd Floors, C1 Building, Nanshan Zhiyuan, 1001 Xueyuan Avenue, Nanshan District, Shenzhen City, Guangdong Province

Patentee after: Shenzhen UBTECH Technology Co.,Ltd.

Address before: 518000 16th and 22nd Floors, C1 Building, Nanshan Zhiyuan, 1001 Xueyuan Avenue, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen Youbixuan Technology Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20231206

Address after: Room 601, 6th Floor, Building 13, No. 3 Jinghai Fifth Road, Beijing Economic and Technological Development Zone (Tongzhou), Tongzhou District, Beijing, 100176

Patentee after: Beijing Youbixuan Intelligent Robot Co.,Ltd.

Address before: 518000 16th and 22nd Floors, C1 Building, Nanshan Zhiyuan, 1001 Xueyuan Avenue, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen UBTECH Technology Co.,Ltd.
