CN106231174A - Photographing method and apparatus - Google Patents

Photographing method and apparatus

Info

Publication number
CN106231174A
Authority
CN
China
Prior art keywords
target object
state
image
relative position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610541335.3A
Other languages
Chinese (zh)
Inventor
刘晨晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tinno Wireless Technology Co Ltd
Original Assignee
Shenzhen Tinno Wireless Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tinno Wireless Technology Co Ltd
Priority to CN201610541335.3A
Publication of CN106231174A
Legal status: Pending


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/61 - Control of cameras or camera modules based on recognised objects
    • H04N23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a photographing method and apparatus. The method includes: determining a target object to be photographed; obtaining behavior parameters of the target object; judging, according to the behavior parameters, whether the state of the target object is a target state; and when the state of the target object is the target state, controlling a photograph to be taken. In this way, the present invention can accurately capture a picture of the target object at the desired moment.

Description

A photographing method and apparatus
Technical field
The present invention relates to a photographing method and apparatus.
Background art
With the popularization of intelligent terminals and the continuous progress of intelligent terminal technology, photographing applications on mobile devices such as mobile phones are becoming more and more widespread. For most users, however, capturing an instant within a motion (such as the "suspended" state at the top of a jump) is relatively difficult: an ordinary user struggles to find the optimal moment for the snapshot, pressing the shutter either too early or too late.
In the prior art, one method is to have the photographed subject jump, or to throw the photographed object, while the camera shoots a continuous high-speed burst of frames, from which the photo achieving the "suspended" effect is then selected. Another method is to first shoot one photo of the scene without the "suspended" pose, and then shoot a second photo in which the target object is held up by a support, such as a person or a stand, so that it appears "suspended". The two photos are then composited: the plain scene photo is used to replace the support in the second photo while the target object is retained, so as to achieve the "suspended" effect.
Of these existing methods, however, the first places high demands on the device's chip and degree of circuit integration, and is costly; the second is cumbersome, and it cannot be guaranteed that the angle and framing are consistent from shot to shot, so the resulting effect is unnatural.
Summary of the invention
The technical problem mainly solved by the present invention is how to accurately capture a picture of a target object at the desired moment.
To solve the above technical problem, the technical solution adopted by the present invention is to provide a photographing method, the method comprising: determining a target object to be photographed; obtaining behavior parameters of the target object; judging, according to the behavior parameters, whether the state of the target object is a target state, the target state being that the target object has dropped to a lowest point or jumped to a highest point; and when the state of the target object is the target state, controlling a photograph to be taken.
Wherein, the judging, according to the behavior parameters, whether the state of the target object is the target state includes: grabbing an image at predetermined time intervals; processing the grabbed images to obtain a plurality of moving speeds of the target object; when the plurality of moving speeds increase until reaching a maximum, determining that the state of the target object is that the target object has dropped to the lowest point; and when the plurality of moving speeds decrease until reaching a minimum, determining that the state of the target object is that the target object has jumped to the highest point.
Wherein, the judging, according to the behavior parameters, whether the state of the target object is the target state includes: grabbing an image at predetermined time intervals; analyzing the grabbed images to determine the relative position of the target object in each frame; when the relative position of the target object has been moving upward and the target object in the current frame has moved downward relative to the target object in the previous frame, determining that the state of the target object is that the target object has jumped to the highest point; and when the relative position of the target object has been moving downward and the position of the target object in the current frame remains unchanged relative to the previous frame, determining that the state of the target object is that it has dropped to the lowest point.
Wherein, the target object to be photographed is determined by recognition based on infrared thermal radiation technology.
Wherein, when the target object is a person, a face frame is recognized by face recognition technology or infrared thermal radiation technology, and the moving speed of the face frame determined from the grabbed images is taken as the moving speed of the target object; when the target object is an object, the object is recognized by infrared thermal radiation technology, and the moving speed of the object determined from the grabbed images is taken as the moving speed of the target object.
Wherein, when the target object is a person, a face frame is recognized by face recognition technology or infrared thermal radiation technology, and the relative position of the face frame determined from the grabbed images is taken as the relative position of the target object; when the target object is an object, the object is recognized by infrared thermal radiation technology, and the relative position of the object determined from the grabbed images is taken as the relative position of the target object.
To solve the above technical problem, another technical solution adopted by the present invention is to provide a photographing apparatus, the apparatus including a determination module, an acquisition module, a judgment module and a control module, wherein: the determination module is configured to determine a target object to be photographed; the acquisition module is configured to obtain behavior parameters of the target object; the judgment module is configured to judge, according to the behavior parameters, whether the state of the target object is a target state, the target state being that the target object has dropped to a lowest point or jumped to a highest point; and the control module is configured to control a photograph to be taken when the state of the target object is the target state.
Wherein, the judgment module includes a capture unit, a processing unit and a determination unit, wherein: the capture unit is configured to grab an image at predetermined time intervals; the processing unit is configured to process the grabbed images to obtain a plurality of moving speeds of the target object; and the determination unit is configured to determine, when the plurality of moving speeds increase until reaching a maximum, that the state of the target object is that the target object has dropped to the lowest point, and to determine, when the plurality of moving speeds decrease until reaching a minimum, that the state of the target object is that the target object has jumped to the highest point.
Wherein, the capture unit is further configured to grab an image at predetermined time intervals; the processing unit is configured to analyze the grabbed images to determine the relative position of the target object in each frame; and the determination unit is configured to determine, when the relative position of the target object has been moving upward and the target object in the current frame has moved downward relative to the previous frame, that the state of the target object is that it has jumped to the highest point, and to determine, when the relative position has been moving downward and the position of the target object in the current frame remains unchanged relative to the previous frame, that the state of the target object is that it has dropped to the lowest point.
Wherein, the determination module is configured to determine the target object to be photographed by recognition based on infrared thermal radiation technology.
The beneficial effects of the invention are as follows. Unlike the prior art, the present invention determines the target object to be photographed, obtains behavior parameters of the target object, judges according to the behavior parameters whether the state of the target object is the target state, and controls a photograph to be taken when the state of the target object is the target state. Because the state of the target object is determined from its behavior parameters, no human intervention is required during the whole process, and a state determined from behavior parameters is more accurate. When the state of the target object is determined to be the target state, the photograph is taken automatically, which avoids the inaccurate timing caused by manual snapping, so that a picture of the target object at the desired moment can be captured accurately.
Brief description of the drawings
Fig. 1 is a flowchart of an embodiment of the photographing method provided by the present invention;
Fig. 2 is a schematic structural diagram of an embodiment of the photographing apparatus provided by the present invention;
Fig. 3 is a schematic structural diagram of the judgment module in the photographing apparatus shown in Fig. 2.
Detailed description of the invention
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a photographing method provided by an embodiment of the present invention. As shown in the figure, the photographing method of this embodiment includes the following steps:
S101: determine the target object to be photographed.
In the embodiment of the present invention, the target object to be photographed is determined by infrared thermal radiation technology. Because the infrared thermal radiation of each object is different, the target object to be photographed can be recognized based on infrared thermal radiation technology.
Before the target object to be photographed is determined, the method of the embodiment of the present invention further includes the step of determining a framing area for the photograph according to an operation instruction of the user.
The user may designate the whole lens view as the framing area, or designate a predetermined region as the framing area. For example, the user may set the framing area by moving and stretching a given frame on the screen, or may mark out a predetermined region on the touchscreen with a finger to serve as the framing area. The present invention places no strict limitation on this.
S102: obtain the behavior parameters of the target object.
The behavior parameters characterize the behavior type and movement speed of the target object. The behavior type of the target object may be, for example, an object being tossed upward, or a person jumping down from a height or leaping up from the ground.
An image may be grabbed at predetermined time intervals, and the images analyzed and compared to determine behavior parameters such as the motion trajectory and movement speed of the target object.
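As an illustration only (the patent does not specify an implementation), the movement-speed parameter could be derived from the target's position in successive grabbed frames. The function name and the assumption that positions are one-dimensional pixel coordinates are hypothetical:

```python
def speeds_from_positions(positions, interval):
    """Moving speed of the target between consecutive frames.

    positions: the target's (hypothetical) pixel coordinate in each grabbed
    frame; interval: the predetermined grab interval in seconds.
    Returns one speed value per pair of consecutive frames.
    """
    if interval <= 0:
        raise ValueError("interval must be positive")
    # Displacement between adjacent frames divided by the fixed interval.
    return [abs(b - a) / interval for a, b in zip(positions, positions[1:])]
```

For frames grabbed every 0.5 s with the target at positions 0, 2 and 6, this yields speeds of 4.0 and 8.0 pixels per second.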
S103: judge, according to the behavior parameters, whether the state of the target object is the target state.
Whether the state of the target object is the target state is judged according to the behavior parameters. The target state is that the target object has dropped to its lowest point or jumped to its highest point.
Judging whether the state of the target object is the target state according to the behavior parameters can be realized by examining either the moving speed of the target object or its relative change in position. The two implementations are as follows:
Method one: grab an image at predetermined time intervals and process the grabbed images to obtain a plurality of moving speeds of the target object. When the moving speeds increase until reaching a maximum, the state of the target object is determined to be that it has dropped to the lowest point; when the moving speeds decrease until reaching a minimum, the state is determined to be that it has jumped to the highest point.
The time interval between grabbed images is constant. The displacement of the target object is determined by comparing its relative position across frames, and its moving speed can then be determined from the displacement and the predetermined time interval. Each grabbed image triggers one speed comparison: when the moving speed of the target object has been increasing and reaches a maximum, it is determined that the target object has been falling from above and has dropped to the lowest point; when the moving speed has been decreasing and reaches a minimum, it is determined that the target object has leapt from below and has jumped to the highest point.
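Method one can be sketched as follows. This is a minimal, hypothetical reading of the speed criterion (speeds that rise and then stop rising mark the lowest point of a fall; speeds that fall and then stop falling mark the apex of a jump), not the patent's actual code:

```python
def classify_by_speed(speeds, eps=1e-6):
    """Judge the target state from a sequence of per-frame moving speeds.

    A falling object accelerates, so a rising speed trend that stops rising
    indicates the lowest point has been reached; a leaping object
    decelerates, so a falling trend that stops falling indicates the highest
    point. Returns "lowest_point", "highest_point", or None.
    """
    if len(speeds) < 3:
        return None
    trend = list(zip(speeds[:-1], speeds[1:-1]))  # consecutive pairs before the latest sample
    prev, last = speeds[-2], speeds[-1]
    if all(b > a for a, b in trend) and last <= prev + eps:
        return "lowest_point"    # speed peaked: the drop has bottomed out
    if all(b < a for a, b in trend) and last >= prev - eps:
        return "highest_point"   # speed reached a minimum: apex of the jump
    return None
```

For instance, a speed history of 1.0, 2.5, 4.0, 4.0 (rising, then no longer rising) is classified as the lowest point, while 5.0, 3.0, 1.0, 1.0 is classified as the highest point.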
It should be noted that when the target object is a person, a face frame can be recognized by face recognition technology or infrared thermal radiation technology, and the moving speed of the face frame determined from the grabbed images is taken as the moving speed of the target object. When the target object is an object, the object is recognized by infrared thermal radiation technology, and its moving speed determined from the grabbed images is taken as the moving speed of the target object.
Of course, the above implementation is only an example. In a specific implementation, the reference is not limited to the face frame; any recognized point on the target object can serve as the reference for determining its moving speed.
Method two: grab an image at predetermined time intervals and analyze the grabbed images to determine the relative position of the target object in each frame. When the relative position of the target object has been moving upward and the target object in the current frame has moved downward relative to the previous frame, the state of the target object is determined to be that it has jumped to the highest point. When the relative position has been moving downward and the position of the target object in the current frame remains unchanged relative to the previous frame, the state is determined to be that it has dropped to the lowest point.
That is, the grabbed images are analyzed to determine the relative position of the target object in each frame. When the relative position has been moving upward and the target object in the current frame moves downward relative to the previous frame, the target object is in a state of falling back, which indicates that it has just jumped to the highest point. When the relative position has been moving downward, the target object is in a state of descending; when its position in the current frame then remains unchanged relative to the previous frame, this indicates that it has dropped to the lowest point.
It should be noted that when the target object is a person, a face frame is recognized by face recognition technology or infrared thermal radiation technology, and the relative position of the face frame determined from the grabbed images is taken as the relative position of the target object. When the target object is an object, the object is recognized by infrared thermal radiation technology, and its relative position determined from the grabbed images is taken as the relative position of the target object.
Of course, the reference is not limited to the face frame; any recognized point on the target object can serve as the reference for determining its relative change in position.
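Method two admits a similarly minimal sketch. The assumption that the target's position is tracked as a single per-frame height (larger meaning higher in the scene) is ours, not the patent's:

```python
def classify_by_position(heights, eps=1e-6):
    """Judge the target state from the target's height in successive frames.

    Upward movement followed by a drop means the apex has just been passed;
    downward movement followed by no change means the target has come to
    rest at the lowest point. Returns "highest_point", "lowest_point", or None.
    """
    if len(heights) < 3:
        return None
    trend = list(zip(heights[:-2], heights[1:-1]))  # motion before the latest frame
    prev, last = heights[-2], heights[-1]
    if all(b > a for a, b in trend) and last < prev - eps:
        return "highest_point"   # was rising, now moved down
    if all(b < a for a, b in trend) and abs(last - prev) <= eps:
        return "lowest_point"    # was falling, now stationary
    return None
```

With heights 1, 3, 5, 4 (rising, then a drop) the apex is reported; with 5, 3, 1, 1 (falling, then at rest) the lowest point is reported.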
S104: control a photograph to be taken.
When the state of the target object is the target state, the photograph is taken automatically, so that the picture of the desired moment can be captured.
That is to say, in the embodiment of the present invention, the face recognition system may track the face frame as it moves on the screen of the mobile terminal, judge from the face frame whether the highest position, the maximum speed or the like has been reached, and have the camera snap the shot when the target is in the target state.
In another implementation, the infrared thermal radiation of a person or object may be sensed (the heat radiation of every object is different, and the infrared radiation of each part of the human body also differs), the state of the moving person or object determined accordingly, and the shot snapped at the highest point, at the maximum speed, and so on.
For example, a person starts jumping up from the ground or jumps down from a platform, or a target object such as a book quickly enters the picture from the upper right. The motion trajectory is determined, and photographing starts automatically at the highest point, at the point where descent begins, or at the point of maximum speed, snapping a photo at the specific position.
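Putting the pieces together, a capture loop along the lines the embodiment describes might look like this. The callbacks `grab_frame`, `locate` and `take_photo` stand in for the device's camera, recognition and shutter APIs, which the patent does not name; the apex test is a simple three-frame rise-then-drop check:

```python
import time

def watch_and_snap(grab_frame, locate, take_photo, interval=0.05):
    """Grab a frame every `interval` seconds, track the target's height, and
    trigger the camera when a rise is followed by a drop (apex of the jump).

    Returns True if a photo was triggered, False if the stream ended first.
    """
    heights = []
    while True:
        frame = grab_frame()
        if frame is None:            # stream exhausted
            return False
        heights.append(locate(frame))
        if len(heights) >= 3 and heights[-3] < heights[-2] > heights[-1]:
            take_photo()             # previous frame was the highest point
            return True
        time.sleep(interval)
```

In a real device the equivalent of `take_photo` would fire the shutter; here it is only a placeholder so the control flow of "grab, track, trigger" can be seen in one place.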
In the photographing method of the present invention described above, the target object to be photographed is determined, the behavior parameters of the target object are obtained, whether the state of the target object is the target state is judged according to the behavior parameters, and a photograph is taken when the state of the target object is the target state. Because the state of the target object is determined from its behavior parameters, no human intervention is required during the whole process, and a state determined from behavior parameters is more accurate. When the state of the target object is determined to be the target state, the photograph is taken automatically, which avoids the inaccurate timing caused by manual snapping, so that a picture of the target object at the desired moment can be captured accurately.
With the method of the present invention, only a single photograph needs to be taken, avoiding both the high hardware requirements of continuous burst shooting and the inconvenience of subsequent processing.
Referring to Fig. 2, Fig. 2 is a schematic structural diagram of a photographing apparatus provided by an embodiment of the present invention. The photographing apparatus of this embodiment is used to perform the method of the embodiment shown in Fig. 1. As shown in the figure, the photographing apparatus 100 of this embodiment includes a determination module 11, an acquisition module 12, a judgment module 13 and a control module 14, wherein:
The determination module 11 is configured to determine the target object to be photographed.
In the embodiment of the present invention, the determination module 11 determines the target object to be photographed by infrared thermal radiation technology. Because the infrared thermal radiation of each object is different, the target object to be photographed can be recognized based on infrared thermal radiation technology.
The determination module 11 is further configured to determine, before the target object to be photographed is determined, a framing area for the photograph according to an operation instruction of the user.
The determination module 11 may, based on the operation instruction of the user, take the whole lens view as the framing area, or take a predetermined region as the framing area.
The acquisition module 12 is configured to obtain the behavior parameters of the target object.
The behavior parameters characterize the behavior type and movement speed of the target object. The behavior type of the target object may be, for example, an object being tossed upward, or a person jumping down from a height or leaping up from the ground.
The acquisition module 12 may grab an image at predetermined time intervals and analyze and compare the images to determine behavior parameters such as the motion trajectory and movement speed of the target object.
The judgment module 13 is configured to judge, according to the behavior parameters, whether the state of the target object is the target state. The target state is that the target object has dropped to its lowest point or jumped to its highest point.
Referring further to Fig. 3, the judgment module 13 of this embodiment includes a capture unit 131, a processing unit 132 and a determination unit 133, wherein:
The capture unit 131 is configured to grab an image at predetermined time intervals.
The processing unit 132 is configured to process the grabbed images to obtain a plurality of moving speeds of the target object.
The determination unit 133 is configured to determine, when the plurality of moving speeds increase until reaching a maximum, that the state of the target object is that the target object has dropped to the lowest point, and to determine, when the plurality of moving speeds decrease until reaching a minimum, that the state of the target object is that the target object has jumped to the highest point.
The time interval between grabbed images is constant. The displacement of the target object is determined by comparing its relative position across frames, and its moving speed can then be determined from the displacement and the predetermined time interval. Each grabbed image triggers one speed comparison: when the moving speed of the target object has been increasing and reaches a maximum, it is determined that the target object has been falling from above and has dropped to the lowest point; when the moving speed has been decreasing and reaches a minimum, it is determined that the target object has leapt from below and has jumped to the highest point.
It should be noted that when the target object is a person, the processing unit 132 can recognize a face frame by face recognition technology or infrared thermal radiation technology and take the moving speed of the face frame determined from the grabbed images as the moving speed of the target object. When the target object is an object, the processing unit 132 recognizes the object by infrared thermal radiation technology and takes its moving speed determined from the grabbed images as the moving speed of the target object.
Of course, the above implementation is only an example. In a specific implementation, the reference is not limited to the face frame; any recognized point on the target object can serve as the reference for determining its moving speed.
Alternatively, in another embodiment, the capture unit 131 is further configured to grab an image at predetermined time intervals.
The processing unit 132 is configured to analyze the grabbed images to determine the relative position of the target object in each frame.
The determination unit 133 is configured to determine, when the relative position of the target object has been moving upward and the target object in the current frame has moved downward relative to the previous frame, that the state of the target object is that it has jumped to the highest point, and to determine, when the relative position has been moving downward and the position of the target object in the current frame remains unchanged relative to the previous frame, that the state of the target object is that it has dropped to the lowest point.
The grabbed images are analyzed to determine the relative position of the target object in each frame. When the relative position has been moving upward and the target object in the current frame moves downward relative to the previous frame, the target object is in a state of falling back, which indicates that it has just jumped to the highest point. When the relative position has been moving downward, the target object is in a state of descending; when its position in the current frame then remains unchanged relative to the previous frame, this indicates that it has dropped to the lowest point.
It should be noted that when the target object is a person, the processing unit 132 recognizes a face frame by face recognition technology or infrared thermal radiation technology and takes the relative position of the face frame determined from the grabbed images as the relative position of the target object. When the target object is an object, the processing unit 132 recognizes the object by infrared thermal radiation technology and takes its relative position determined from the grabbed images as the relative position of the target object.
Of course, the reference is not limited to the face frame; any recognized point on the target object can serve as the reference for determining its relative change in position.
The control module 14 is configured to control a photograph to be taken when the state of the target object is the target state.
When the state of the target object is the target state, the control module 14 automatically controls the photograph to be taken, so that the picture of the desired moment can be captured.
For example, a person starts jumping up from the ground or jumps down from a platform, or a target object such as a book quickly enters the picture from the upper right. The motion trajectory is determined, and photographing starts automatically at the highest point, at the point where descent begins, or at the point of maximum speed, snapping a photo at the specific position.
From the above detailed description of the photographing method and apparatus of the embodiments of the present invention, it can be understood that the present invention determines the target object to be photographed, obtains the behavior parameters of the target object, judges according to the behavior parameters whether the state of the target object is the target state, and controls a photograph to be taken when the state of the target object is the target state. Because the state of the target object is determined from its behavior parameters, no human intervention is required during the whole process, and a state determined from behavior parameters is more accurate. When the state of the target object is determined to be the target state, the photograph is taken automatically, which avoids the inaccurate timing caused by manual snapping, so that a picture of the target object at the desired moment can be captured accurately.
In several embodiments provided by the present invention, it should be understood that disclosed system, apparatus and method are permissible Realize by another way.Such as, device embodiment described above is only schematically, such as, described module or The division of unit, is only a kind of logic function and divides, and actual can have other dividing mode, the most multiple unit when realizing Or assembly can in conjunction with or be desirably integrated into another system, or some features can be ignored, or does not performs.Another point, institute The coupling each other shown or discuss or direct-coupling or communication connection can be by some interfaces, device or unit INDIRECT COUPLING or communication connection, can be electrical, machinery or other form.
The described unit illustrated as separating component can be or may not be physically separate, shows as unit The parts shown can be or may not be physical location, i.e. may be located at a place, or can also be distributed to multiple On NE.Some or all of unit therein can be selected according to the actual needs to realize the mesh of the present embodiment scheme 's.
It addition, each functional unit in each embodiment of the present invention can be integrated in a processing unit, it is also possible to It is that unit is individually physically present, it is also possible to two or more unit are integrated in a unit.Above-mentioned integrated list Unit both can realize to use the form of hardware, it would however also be possible to employ the form of SFU software functional unit realizes.
If described integrated unit realizes and as independent production marketing or use using the form of SFU software functional unit Time, can be stored in a computer read/write memory medium.Based on such understanding, technical scheme is substantially The part that in other words prior art contributed or this technical scheme completely or partially can be with the form of software product Embodying, this computer software product is stored in a storage medium, including some instructions with so that a computer Equipment (can be personal computer, server, or the network equipment etc.) or processor (processor) perform the present invention each All or part of step of method described in embodiment.And aforesaid storage medium includes: USB flash disk, portable hard drive, read only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), magnetic disc or CD Etc. the various media that can store program code.
The foregoing are merely embodiments of the present invention and do not thereby limit the scope of the claims of the present invention. Any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, shall likewise fall within the scope of patent protection of the present invention.

Claims (10)

1. A method for taking a photograph, characterized in that the method comprises:
determining a target object to be photographed;
obtaining behavior parameters of the target object;
judging, according to the behavior parameters, whether the state of the target object is a target state, the target state being that the target object has dropped to a lowest point or has jumped to a highest point; and
when the state of the target object is the target state, controlling the taking of the photograph.
2. The method according to claim 1, characterized in that
judging, according to the behavior parameters, whether the state of the target object is the target state comprises:
capturing an image at predetermined time intervals;
processing the captured images to obtain a plurality of moving speeds of the target object; and
when the plurality of moving speeds increase until reaching a maximum, determining that the state of the target object is that the target object has dropped to the lowest point, and when the plurality of moving speeds decrease until reaching a minimum, determining that the state of the target object is that the target object has jumped to the highest point.
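The speed-based trigger condition in the claim above can be sketched as follows. This is a minimal illustration only, not the patent's implementation; the function name, the strict-monotonicity test, and the input representation are all assumptions:

```python
def classify_state(speeds):
    """Classify the target's state from its per-interval moving speeds.

    speeds: moving speeds measured between successive captured frames,
    oldest first. A monotonically increasing run ending at a maximum
    means the target has dropped to its lowest point; a monotonically
    decreasing run ending at a minimum means it has jumped to its
    highest point.
    """
    if len(speeds) < 2:
        return None  # not enough samples to see a trend
    if all(a < b for a, b in zip(speeds, speeds[1:])):
        return "lowest_point"   # speed grew to a maximum
    if all(a > b for a, b in zip(speeds, speeds[1:])):
        return "highest_point"  # speed shrank to a minimum
    return None  # no monotonic trend; keep capturing
```

A controller would call this after each capture and fire the shutter as soon as it returns a non-`None` state.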
3. The method according to claim 1, characterized in that
judging, according to the behavior parameters, whether the state of the target object is the target state comprises:
capturing an image at predetermined time intervals;
analyzing the captured images to determine the relative position of the target object in each frame;
when the relative position of the target object keeps moving upward, and the target object in the current frame has moved downward relative to the target object in the previous frame, determining that the state of the target object is that the target object has jumped to the highest point; and
when the relative position of the target object keeps moving downward, and the position of the target object in the current frame remains unchanged relative to the previous frame, determining that the state of the target object is that the target object has dropped to the lowest point.
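The position-based trigger in the claim above can be sketched similarly. Again a hypothetical illustration, not the patent's code; heights here are taken with "up" positive (in image coordinates the vertical axis would be inverted), and the tolerance is an assumption:

```python
def classify_state_from_heights(heights, tol=1e-6):
    """Classify the target's state from its height in successive frames.

    heights: vertical position of the target per frame, oldest first,
    larger values meaning higher. Rising frames followed by a downward
    step mean the highest point was just passed; falling frames followed
    by an unchanged position mean the target has landed at its lowest
    point.
    """
    if len(heights) < 3:
        return None  # need a trend plus a final step
    trend = heights[:-1]
    rose = all(a < b for a, b in zip(trend, trend[1:]))
    fell = all(a > b for a, b in zip(trend, trend[1:]))
    last_step = heights[-1] - heights[-2]
    if rose and last_step < -tol:
        return "highest_point"  # kept moving up, just started to descend
    if fell and abs(last_step) <= tol:
        return "lowest_point"   # kept moving down, now stationary
    return None
```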
4. The method according to claim 1, characterized in that the target object to be photographed is determined by infrared sensing recognition.
5. The method according to claim 2, characterized in that, when the target object is a person, a face frame is identified by face recognition technology or infrared sensing technology, and the moving speed of the face frame determined from the captured images is taken as the moving speed of the target object; and
when the target object is an object rather than a person, the object is identified by infrared sensing technology, and the moving speed of the object determined from the captured images is taken as the moving speed of the target object.
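Given a detected face frame per capture, the face-frame speed referred to in the claim above follows from simple geometry. The detection itself would come from whatever face-recognition or infrared-sensing component is used; the `(x, y, width, height)` box representation and the function names below are assumptions for illustration:

```python
def frame_center(box):
    """Center point of a face frame given as (x, y, width, height)."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)


def frame_speed(prev_box, cur_box, dt):
    """Moving speed of the face frame, in pixels per second, between
    two captures taken dt seconds apart."""
    (x0, y0), (x1, y1) = frame_center(prev_box), frame_center(cur_box)
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
```

The sequence of such speeds over successive captures is what the speed-based judgment of claim 2 operates on.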
6. The method according to claim 3, characterized in that, when the target object is a person, a face frame is identified by face recognition technology or infrared sensing technology, and the relative position of the face frame determined from the captured images is taken as the relative position of the target object; and
when the target object is an object rather than a person, the object is identified by infrared sensing technology, and the relative position of the object determined from the captured images is taken as the relative position of the target object.
7. A photographing apparatus, characterized in that the photographing apparatus comprises a determining module, an obtaining module, a judging module and a control module, wherein:
the determining module is configured to determine a target object to be photographed;
the obtaining module is configured to obtain behavior parameters of the target object;
the judging module is configured to judge, according to the behavior parameters, whether the state of the target object is a target state, the target state being that the target object has dropped to a lowest point or has jumped to a highest point; and
the control module is configured to control the taking of the photograph when the state of the target object is the target state.
8. The photographing apparatus according to claim 7, characterized in that the judging module comprises a capturing unit, a processing unit and a determining unit, wherein:
the capturing unit is configured to capture an image at predetermined time intervals;
the processing unit is configured to process the captured images to obtain a plurality of moving speeds of the target object; and
the determining unit is configured to determine, when the plurality of moving speeds increase until reaching a maximum, that the state of the target object is that the target object has dropped to the lowest point, and to determine, when the plurality of moving speeds decrease until reaching a minimum, that the state of the target object is that the target object has jumped to the highest point.
9. The photographing apparatus according to claim 8, characterized in that
the capturing unit is further configured to capture an image at predetermined time intervals;
the processing unit is configured to analyze the captured images to determine the relative position of the target object in each frame; and
the determining unit is configured to determine, when the relative position of the target object keeps moving upward and the target object in the current frame has moved downward relative to the target object in the previous frame, that the state of the target object is that the target object has jumped to the highest point, and to determine, when the relative position of the target object keeps moving downward and the position of the target object in the current frame remains unchanged relative to the previous frame, that the state of the target object is that the target object has dropped to the lowest point.
10. The photographing apparatus according to claim 7, characterized in that the determining module is configured to determine the target object to be photographed by infrared sensing recognition.
CN201610541335.3A 2016-07-11 2016-07-11 Method and apparatus for taking a photograph Pending CN106231174A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610541335.3A CN106231174A (en) 2016-07-11 2016-07-11 Method and apparatus for taking a photograph

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610541335.3A CN106231174A (en) 2016-07-11 2016-07-11 Method and apparatus for taking a photograph

Publications (1)

Publication Number Publication Date
CN106231174A true CN106231174A (en) 2016-12-14

Family

ID=57520563

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610541335.3A Pending CN106231174A (en) Method and apparatus for taking a photograph

Country Status (1)

Country Link
CN (1) CN106231174A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102478743A (en) * 2010-11-26 2012-05-30 英业达股份有限公司 Photographic device and control method thereof
US20130258167A1 (en) * 2012-03-28 2013-10-03 Qualcomm Incorporated Method and apparatus for autofocusing an imaging device
CN103971092A (en) * 2014-04-09 2014-08-06 中国船舶重工集团公司第七二六研究所 Facial trajectory tracking method
CN104065878A (en) * 2014-06-03 2014-09-24 小米科技有限责任公司 Method, device and terminal for controlling shooting
CN104270565A (en) * 2014-08-29 2015-01-07 小米科技有限责任公司 Image shooting method and device and equipment
CN105472231A (en) * 2014-09-03 2016-04-06 联想(北京)有限公司 Control method, image acquisition device and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Lin, Huang Yifan, Wang Yongtian: "Modern Optical Design Methods", Beijing Institute of Technology Press, 30 April 2015 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112422823A (en) * 2020-11-09 2021-02-26 广汽本田汽车有限公司 Automatic visual shooting triggering method and device
CN112422823B (en) * 2020-11-09 2022-08-09 广汽本田汽车有限公司 Automatic visual shooting triggering method and device
CN113099109A (en) * 2021-03-23 2021-07-09 南昌欧菲光电技术有限公司 Snapshot control device and method, image pickup apparatus, and computer-readable storage medium
CN113286084A (en) * 2021-05-21 2021-08-20 展讯通信(上海)有限公司 Terminal image acquisition method and device, storage medium and terminal
CN113286084B (en) * 2021-05-21 2022-10-25 展讯通信(上海)有限公司 Terminal image acquisition method and device, storage medium and terminal

Similar Documents

Publication Publication Date Title
CN104580878B Electronic device and method for automatically determining image effects
CN104205801A Method and apparatus for autofocusing an imaging device
CN103971547B Mobile-terminal-based photography teaching method and system
US9848106B2 Intelligent gameplay photo capture
CN105872363A Adjusting method and adjusting device for human face focusing definition
RU2016102791A Method and device for playback management, and electronic equipment
CN105247567B Image refocusing device, method, system and non-transient program storage device
CN106231174A Method and apparatus for taking a photograph
CN104185981A Method and terminal for selecting an image from continuously captured images
CN105700769B Dynamic material adding method, device and electronic equipment
CN104580901B Method and terminal for determining shooting parameters
CN103905709A Electronic device control method and electronic device
US9888176B2 Video apparatus and photography method thereof
CN104902160B Information processing method and electronic equipment
CN106851094A Information processing method and device
TW201737064A Geometric work scheduling with dynamic and probabilistic work trimming
US20170244938A1 Camera system with automatic photo taking
US20170214847A1 Method for setting shooting parameters of HDR mode and electronic device using the same
CN103685932A Moving picture processing device and method for controlling moving picture processing
CN105606622A AOI image acquisition method and device
CN111105434A Motion trajectory synthesis method and electronic equipment
CN103297695B Automatic photographing method and terminal
CN105657262A Image processing method and apparatus
CN104883495B Photographing method and device
CN104902179B Camera image previewing method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20161214