CN106791412A - Shooting control method and terminal - Google Patents

Shooting control method and terminal (Download PDF)

Info

Publication number
CN106791412A
CN106791412A (application CN201611245833.XA)
Authority
CN
China
Prior art keywords
acquisition parameters
camera
terminal
screen
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201611245833.XA
Other languages
Chinese (zh)
Inventor
余超
庞振
李小虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jinli Communication Equipment Co Ltd
Original Assignee
Shenzhen Jinli Communication Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jinli Communication Equipment Co Ltd filed Critical Shenzhen Jinli Communication Equipment Co Ltd
Priority to CN201611245833.XA priority Critical patent/CN106791412A/en
Publication of CN106791412A publication Critical patent/CN106791412A/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the invention provide a shooting control method. The method includes: opening a camera; receiving a user instruction and obtaining current shooting parameters; controlling the camera to shoot according to the received user instruction; and generating an image corresponding to the shooting parameters. Embodiments of the invention also provide a terminal. With the embodiments of the invention, the shooting parameters can be obtained while the camera is on, without entering the shooting preview interface, so that rapid shooting can be achieved.

Description

Shooting control method and terminal
Technical field
The present invention relates to the field of shooting control technology, and in particular to a shooting control method and a terminal.
Background art
With the rapid development of information technology, terminals (such as mobile phones and tablet computers) are used more and more widely. At the same time, users place higher demands on terminals: a terminal is expected not only to process faster, but also to offer a better shooting function. In the prior art, with the widespread use of social applications such as WeChat Moments and QQ Zone, shooting has become one of the most frequently used functions of a mobile phone. However, existing shooting functions require a series of operations each time before the shooting mode is entered, which results in a poor user experience. How to shoot conveniently is therefore a problem to be solved urgently.
Summary of the invention
Embodiments of the invention provide a shooting control method and a terminal that enable rapid shooting.
A first aspect of the embodiments of the invention provides a shooting control method, including:
opening a camera;
receiving a user instruction and obtaining current shooting parameters;
controlling the camera to shoot according to the received user instruction; and
generating an image corresponding to the shooting parameters.
A second aspect of the embodiments of the invention provides a terminal, including:
an opening unit, configured to open a camera;
a receiving unit, configured to receive a user instruction and obtain current shooting parameters;
a control unit, configured to control the camera to shoot according to the received user instruction; and
a first generation unit, configured to generate an image corresponding to the shooting parameters.
Implementing the embodiments of the invention provides the following beneficial effects:
It can be seen that, with the embodiments of the invention, a camera is opened, a user instruction is received and current shooting parameters are obtained, the camera is controlled to shoot according to the received user instruction, and an image corresponding to the shooting parameters is generated. In this way, the power consumption of the terminal can be reduced and shooting can be achieved.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Apparently, the drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a first embodiment of a shooting control method according to an embodiment of the present invention;
Fig. 1a is a schematic diagram of the plane mirror imaging principle according to an embodiment of the present invention;
Fig. 1b is a schematic diagram of the convex lens imaging principle according to an embodiment of the present invention;
Fig. 1c is a schematic diagram of a user adjusting posture with the screen according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a second embodiment of a shooting control method according to an embodiment of the present invention;
Fig. 2a is an illustration of a terminal according to an embodiment of the present invention;
Fig. 3a is a schematic structural diagram of a first embodiment of a terminal according to an embodiment of the present invention;
Fig. 3b is a schematic structural diagram of the first generation unit of the terminal shown in Fig. 3a according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a second embodiment of a terminal according to an embodiment of the present invention.
Detailed description of embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terminal described in the embodiments of the present invention may include a smartphone (such as an Android phone, an iOS phone, or a Windows Phone), a tablet computer, a palmtop computer, a notebook computer, a mobile Internet device (MID), or a wearable device. The above terminals are merely examples rather than an exhaustive list; the terminal includes but is not limited to the above.
It should be noted that the existing front-camera shooting process is complicated. The general flow is as follows: the terminal enters the desktop main menu; the user then needs to tap the camera (Camera) application and switch to the front camera (assuming the current shooting mode uses the rear camera); the shooting position is then determined from the Camera preview; and finally the shot is completed when a user instruction is received. Because the terminal must first enter the shooting mode and then adjust the preview image through the preview, the waiting time is long.
Specifically, for example, when the terminal is provided with a physical button, the application framework layer listens for a long press of the physical button, and then creates a custom camera and executes the shooting function.
The steps for creating the custom camera are as follows:
1. Check whether a front camera exists, and call the open() method of Camera to open the camera.
2. Call the getParameters() method of Camera to obtain the shooting parameters. This method returns a Camera.Parameters object.
3. Call the methods of the Camera.Parameters object to set the shooting parameters, such as the preview picture size, the captured picture size and format, the focus mode, and the picture quality.
4. Call setParameters() of Camera, passing the Camera.Parameters object as the argument, so as to control the shooting parameters of the camera.
5. Call the startPreview() method of Camera to start the viewfinder preview. The preview effect is determined by the camera position and angle and by the Camera preview resolution.
6. After a user instruction is received, call the takePicture() method of Camera to shoot.
7. When the program ends, call stopPreview() of Camera to end the viewfinder preview, and call the release() method to release the resources.
It can be seen that after step 1, that is, after the camera is opened, steps 2 to 5 still have to be gone through, which introduces additional delay. Even if the user is ready to shoot, preparing the preview image consumes extra time, and a fleeting scene may be missed as a result, which degrades the user experience. A condensed sketch of this conventional flow is given below.
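For concreteness, the following is a minimal sketch of the conventional flow using the (now deprecated) android.hardware.Camera API. It is illustrative only: error handling, threading, and the wait for the user instruction are omitted, and the parameter values are placeholders rather than values prescribed by this disclosure.

```java
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import java.io.IOException;

public class LegacyCaptureFlow {

    /** Steps 1 to 7 above, condensed. Camera is the deprecated android.hardware.Camera class. */
    public static void captureWithFrontCamera(SurfaceTexture previewTexture) throws IOException {
        // Step 1: check whether a front camera exists and open it.
        int frontId = -1;
        Camera.CameraInfo info = new Camera.CameraInfo();
        for (int i = 0; i < Camera.getNumberOfCameras(); i++) {
            Camera.getCameraInfo(i, info);
            if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
                frontId = i;
                break;
            }
        }
        if (frontId < 0) {
            return; // no front camera on this device
        }
        Camera camera = Camera.open(frontId);

        // Steps 2-4: read the shooting parameters, adjust them, and write them back.
        Camera.Parameters params = camera.getParameters();
        params.setPreviewSize(1280, 720);   // placeholder sizes; a real app queries supported sizes
        params.setPictureSize(1920, 1080);
        params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE); // assumes the mode is supported
        params.setJpegQuality(90);
        camera.setParameters(params);

        // Step 5: start the viewfinder preview.
        camera.setPreviewTexture(previewTexture);
        camera.startPreview();

        // Step 6: shoot (in the real flow this happens only after the user instruction arrives).
        camera.takePicture(null, null, (data, cam) -> {
            // data holds the JPEG bytes; save or process them here.
            // Step 7: end the preview and release the camera resources.
            cam.stopPreview();
            cam.release();
        });
    }
}
```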
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a first embodiment of a shooting control method according to an embodiment of the present invention. The shooting control method described in this embodiment includes the following steps:
101. Open a camera.
The camera of the terminal may be a front camera or a rear camera, and may of course also be a side camera.
102. Receive a user instruction and obtain current shooting parameters.
The user adjusts his or her posture facing the screen and inputs the user instruction once satisfied; the user instruction is generated by the user according to the shooting parameters.
Of course, the embodiment of the present invention does not require the terminal to be in any particular state: the screen may be off or on, and the terminal may be locked or unlocked. For example, if the terminal is in the screen-off state and the camera of the terminal is on, the shooting parameters can be collected. For another example, if the terminal is in the screen-on state and the camera of the terminal is on, the shooting parameters can also be collected. For still another example, if the terminal is unlocked but is not in a preview interface (for example, it is in the WeChat interface), the shooting parameters can also be collected.
Optionally, the user instruction is triggered by the user according to the reflection position of the shooting subject on the screen.
Optionally, the shooting parameters may include position information of the shooting subject.
For example, taking the front camera as an example, when the terminal is in the screen-off state, if the user holds the screen of the terminal towards himself or herself, the user's reflection appears on the screen; the user can see his or her own face on the screen, as in a mirror. During this process the terminal may remain in the screen-off state while the camera collects the user's face position; the current ambient brightness may of course also be collected. Further, the shooting parameters may also include, but are not limited to, the focus area, the background-blur region, the screen brightness, the screen color temperature, the face region, and so on. After the camera captures the face position, the focus area can be determined according to the ratio of the face to the screen and can be further determined according to the user position; the focus area is the region where the face is located. When background processing is needed, the background-blur region can be further obtained according to the face region and the user position (the region outside the face region may serve as the background-blur region), as in the sketch below.
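As a purely illustrative sketch (the 20% margin and the Rect-based representation are assumptions, not values from this disclosure), the focus area and background-blur region could be derived from a detected face rectangle roughly as follows:

```java
import android.graphics.Rect;

public class FocusRegionHelper {

    /** Expands the detected face rectangle by an assumed margin and clamps it to the frame;
     *  the returned rectangle serves as the focus area. */
    public static Rect focusAreaFromFace(Rect face, int frameWidth, int frameHeight) {
        int marginX = face.width() / 5;   // assumed 20% margin
        int marginY = face.height() / 5;
        return new Rect(
                Math.max(0, face.left - marginX),
                Math.max(0, face.top - marginY),
                Math.min(frameWidth, face.right + marginX),
                Math.min(frameHeight, face.bottom + marginY));
    }
    // The background-blur region is then simply the part of the frame outside the returned rectangle.
}
```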
Specifically, in practical applications the screen imaging of the terminal follows the plane mirror imaging principle, as shown in Fig. 1a. Taking a point S as an example, if its position is (x, y), its mirror image is denoted s'(x, y). In plane mirror imaging, the perpendicular distance from the object to the mirror equals the perpendicular distance from the image to the mirror; that is, the perpendicular distance from S to the mirror surface (here, the plane of the screen) equals the perpendicular distance from s' to the mirror surface. The plane of the screen can be regarded as a line whose direction can be understood as the z axis. The captured image is formed on the image sensor and achieves the same effect as the screen; therefore only the pixels on the sensor need to be considered, whose positions can likewise be denoted (x, y).
In general, the lens of the camera is equivalent to a convex lens. Taking the convex lens imaging principle as an example and referring to Fig. 1b, AB denotes the object, f is the focal length, and A1B1 denotes the image of AB. Because triangle ABO is similar to triangle A1B1O, the following relation is obtained:
AB / A1B1 = BO / B1O
Triangle COF is similar to triangle A1B1F, so
CO / A1B1 = OF / B1F
Since CO = AB, it follows that
AB / A1B1 = OF / B1F
In the above relations, BO = p, B1O = p', OF = f, and B1F = p' - f, which gives:
p / p' = f / (p' - f)
Rearranging yields the lens equation:
1/p + 1/p' = 1/f
In this way, using the above principles, the environment parameters can be obtained while the user is using the screen as a mirror, and these may include the position of the user in the image.
Assume that point S and point A are the same point. In order to make the preview and the screen reflection produce the same effect, the RGB value of s' needs to be replaced by the RGB value of A1 using digital signal processing (DSP). Through experiments, the correspondence between s' and A1 is found to be s'(x, y) = F · A1(x, y), where F is the camera magnification factor. Through a large number of experiments, a rule for calculating F pixel by pixel can be obtained. After that, the size of the picture can be adjusted by adjusting the camera magnification. A purely illustrative sketch of this coordinate correspondence is given below.
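The following is a minimal sketch of the coordinate correspondence; it assumes the linear relation s'(x, y) = F · A1(x, y) holds with a single calibrated factor F, whereas the disclosure above obtains the factor pixel by pixel through experiments.

```java
public final class MirrorToSensorMapper {

    private final double f; // calibrated "camera multiple" F (assumed single value)

    public MirrorToSensorMapper(double cameraMultiple) {
        this.f = cameraMultiple;
    }

    /** Given the position (x, y) of a point of the mirror reflection s' on the screen,
     *  returns the assumed corresponding sensor-image position A1 = s' / F. */
    public double[] toSensorPosition(double screenX, double screenY) {
        return new double[] { screenX / f, screenY / f };
    }
}
```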
Optionally, based on the above theory, obtaining the current shooting parameters may include the following steps:
21) determining current position information of the shooting subject;
22) determining the shooting parameters according to the position information.
The above position information may include the position of the shooting subject and may also include the angle of the shooting subject; the user may also serve as the shooting subject. Optionally, the position information is measured by a distance sensor. For example, when the user adjusts himself or herself using the screen of the terminal (treating the screen of the terminal as a mirror, as in Fig. 1c), the user's reflection appears on the screen of the terminal. In this way, the distance between the camera of the terminal and the user can be recorded, so that the position information of the shooting subject can be estimated.
Still further optionally, based on the above theory, obtaining the shooting parameters may include the following steps:
23) determining current position information of the shooting subject and current environment parameters;
24) determining the shooting parameters according to the current position information of the shooting subject and the current environment parameters.
Optionally, the terminal may have at least one distance sensor, so that the average of the distances between the at least one distance sensor and the user can be used as the current position information of the user.
Optionally, the current environment parameters include at least one of the following: ambient brightness, ambient color temperature, and the weather of the environment (for example, fog, air humidity, and the like). The ambient brightness can be detected by an ambient light sensor, the ambient color temperature can be detected by a color temperature sensor, and the weather of the environment can be obtained through a weather application.
Optionally, determining the shooting parameters according to the position information and the current environment parameters may be implemented as follows:
determining the shooting parameters corresponding to the position information and the current environment parameters according to a preset mapping between position information, environment parameters, and shooting parameters.
Specifically, the mapping between the position information of the shooting subject, the environment parameters, and the shooting parameters can be expressed as follows:
Y = F(a, b)
where Y denotes the shooting parameters, a denotes the position information of the shooting subject, b denotes the environment parameters, and F denotes the mapping between the position information of the shooting subject, the environment parameters, and the shooting parameters. This function needs to be obtained through many experiments, as follows:
A certain number of reference objects can be chosen, and the position of each reference object on the terminal screen is recorded (the position of the reference object is obtained using the distance sensor, and then, by the imaging principle, its position on the terminal screen can be obtained). The shooting parameters of the front camera in the preview state (the position and angle at which the reference object appears on the terminal screen, and the Camera preview resolution) are then adjusted according to the position at which the reference object appears on the terminal screen. The criterion for determining the camera position, angle, and preview resolution may be that the reference object in the Camera preview interface is as close as possible to its reflection position on the terminal screen. The camera position, angle, Camera preview resolution, and related parameters then need to be determined through many experiments, yielding a calibrated mapping such as the sketch below.
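The fitted function F is not specified in this disclosure; the following sketch simply stands in for it with a nearest-neighbour lookup over calibration samples. The field names, the use of distance and ambient brightness as inputs, and the scoring are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class ShootingParameterMap {

    /** One calibration record: inputs a (subject position) and b (environment), output Y. */
    public static class Sample {
        double subjectDistanceCm;    // a: subject position, measured by the distance sensor
        double ambientLux;           // b: environment parameter (brightness)
        int focusAreaX, focusAreaY;  // Y: resulting shooting parameters (examples)
        int previewWidth, previewHeight;
    }

    private final List<Sample> calibration = new ArrayList<>();

    public void addCalibrationSample(Sample s) {
        calibration.add(s);
    }

    /** Nearest-neighbour lookup standing in for the fitted mapping Y = F(a, b). */
    public Sample lookup(double subjectDistanceCm, double ambientLux) {
        Sample best = null;
        double bestScore = Double.MAX_VALUE;
        for (Sample s : calibration) {
            double score = Math.abs(s.subjectDistanceCm - subjectDistanceCm)
                         + Math.abs(s.ambientLux - ambientLux) / 100.0; // crude relative scaling
            if (score < bestScore) {
                bestScore = score;
                best = s;
            }
        }
        return best; // null if no calibration data has been recorded
    }
}
```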
Optionally, before the embodiment of the present invention is performed, the user needs to adjust his or her own posture (treating the screen of the terminal as a mirror and adjusting his or her own position and angle). The screen preview of the terminal is implemented without adding extra hardware, and existing screens all work well. At present there are many kinds of terminal screens, for example, thin film transistor (TFT) screens, in-plane switching (IPS) screens, and active-matrix organic light-emitting diode (AMOLED) screens. Any of these screens can achieve the effect of Fig. 1a and obtain the corresponding positions of the person and the background.
103. Control the camera to shoot according to the received user instruction.
Optionally, after the shooting parameters are determined, the preview interface need not be entered; instead the camera is controlled to shoot directly according to the shooting parameters. The resolution of the preview image, the position of the user in the preview image, the brightness of the preview image, the color temperature of the preview image, and so on can be adjusted according to the shooting parameters. Of course, after shooting, an image or a segment of video can be obtained.
Specifically, after the shooting parameters are determined, the preview interface need not be entered and shooting can be performed directly. Since the shooting parameters have been determined and the camera is already on, the camera knows what the preview image would look like even without entering the preview interface; the preview image is simply not presented on the display screen of the terminal. In this way, shooting can be performed directly, and rapid shooting can thus be achieved.
Optionally, the above user instruction can be implemented by a button, or by voice control. Therefore, the terminal can conveniently be controlled to shoot.
104. Generate an image corresponding to the shooting parameters.
Optionally, in step 104, generating the image corresponding to the shooting parameters includes:
generating, according to a preset association between the reflection position of the shooting subject on the screen and the shooting parameters, an image corresponding to the reflection position of the shooting subject on the screen.
Before step 101, the association between the reflection position of the shooting subject on the screen and the shooting parameters can be established in advance. Specifically, each time a photograph is taken, the reflection position of the shooting subject on the screen and the relation between that reflection position and the shooting parameters can be recorded. After a large number of such relations are obtained, fitting can be performed to obtain a fitted curve between the reflection position of the shooting subject on the screen and the shooting parameters; this fitted curve is the association between the reflection position of the shooting subject on the screen and the shooting parameters. In this way, an image corresponding to the reflection position of the shooting subject on the screen can be generated.
Optionally, in step 104, generating the image corresponding to the shooting parameters may include the following steps:
41) generating a plurality of images corresponding to the shooting parameters;
42) performing image quality evaluation on the plurality of images to obtain a plurality of image quality evaluation values;
43) selecting the image corresponding to the maximum of the plurality of image quality evaluation values.
In step 41), a continuous shooting (burst) mode can be used to obtain the plurality of images corresponding to the shooting parameters. The question is then how to select the image with the best quality among the plurality of images. Image quality evaluation can be performed on each image using at least one image quality evaluation index to obtain an image quality evaluation value, where the image quality evaluation index may include but is not limited to: mean gray level, mean square deviation, entropy, edge preservation, signal-to-noise ratio, and so on. It can be defined that the larger the obtained image quality evaluation value, the better the image quality.
It should be noted that evaluating image quality with a single evaluation index has certain limitations; therefore, multiple image quality evaluation indexes can be used to evaluate image quality. Of course, when evaluating image quality, more indexes are not necessarily better: the more indexes are used, the higher the computational complexity of the evaluation process, and the evaluation result is not necessarily better. Therefore, when high evaluation accuracy is required, 2 to 10 image quality evaluation indexes can be used. The number of indexes and which indexes to use depend on the specific implementation. Of course, the indexes must also be chosen for the specific scene: the indexes chosen for image quality evaluation in a dark environment and in a bright environment may differ.
Optionally, when the required evaluation accuracy is not high, a single image quality evaluation index can be used. For example, the image quality evaluation value of the image to be processed can be its entropy: the larger the entropy, the better the image quality; conversely, the smaller the entropy, the worse the image quality.
Optionally, when high evaluation accuracy is required, multiple image quality evaluation indexes can be used to evaluate the image to be processed. When multiple indexes are used, a weight can be set for each index; multiple image quality evaluation values are then obtained, and the final image quality evaluation value is obtained from these values and their corresponding weights. For example, suppose three indexes A, B, and C have weights a1, a2, and a3, and evaluating a certain image with A, B, and C yields evaluation values b1, b2, and b3 respectively; the final image quality evaluation value is then a1·b1 + a2·b2 + a3·b3. In general, the larger the image quality evaluation value, the better the image quality. A short sketch of this weighted selection is given below.
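A minimal sketch of the weighted selection is shown below; the metric values are assumed to have been computed already (for example, mean gray level or entropy), and only the weighted combination and the choice of the maximum are illustrated.

```java
public class ImageQualitySelector {

    /**
     * scores[i][k] is the value of quality index k for candidate image i,
     * and weights[k] is the weight of index k (a_k in the text).
     * Returns the index of the image with the largest weighted score a1*b1 + a2*b2 + ... .
     */
    public static int selectBest(double[][] scores, double[] weights) {
        int bestImage = 0;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < scores.length; i++) {
            double total = 0.0;
            for (int k = 0; k < weights.length; k++) {
                total += weights[k] * scores[i][k];
            }
            if (total > bestScore) {
                bestScore = total;
                bestImage = i;
            }
        }
        return bestImage;
    }
}
```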
It can be seen that, with this embodiment of the invention, a camera is opened, a user instruction is received and current shooting parameters are obtained, the camera is controlled to shoot according to the received user instruction, and an image corresponding to the shooting parameters is generated. In this way, the shooting preview interface need not be entered and rapid shooting can be achieved; to some extent, since no preview interface is displayed, the power consumption of the terminal can also be reduced.
Consistent with the above, referring to Fig. 2, Fig. 2 is a schematic flowchart of a second embodiment of a shooting control method according to an embodiment of the present invention. The shooting control method described in this embodiment includes the following steps:
201. Open a camera.
The camera of the terminal may be a front camera or a rear camera, and may of course also be a side camera. Of course, the embodiment of the present invention does not require the terminal to be in any particular state: the screen may be off or on, and the terminal may be locked or unlocked. For example, if the terminal is in the screen-off state and the camera of the terminal is on, the shooting parameters can be collected. For another example, if the terminal is in the screen-on state and the camera of the terminal is on, the environment parameters can be collected. For still another example, if the terminal is unlocked but is not in a preview interface (for example, it is in the WeChat interface), the shooting parameters can also be collected.
202. Receive a user instruction and obtain current shooting parameters.
Optionally, based on the above theory, obtaining the current shooting parameters may include the following steps:
21) determining current position information of the shooting subject;
22) determining the shooting parameters according to the position information.
The above position information may include the position of the shooting subject and may also include the angle of the shooting subject; the user may also serve as the shooting subject. Optionally, the position information is measured by a distance sensor. For example, when the user adjusts himself or herself using the screen of the terminal (treating the screen of the terminal as a mirror, as in Fig. 1c), the user's reflection appears on the screen of the terminal. In this way, the distance between the camera of the terminal and the user can be recorded, so that the position information of the shooting subject can be estimated.
Still further optionally, based on the above theory, obtaining the shooting parameters may include the following steps:
23) determining current position information of the shooting subject and current environment parameters;
24) determining the shooting parameters according to the current position information of the shooting subject and the current environment parameters.
Optionally, the terminal may have at least one distance sensor, so that the average of the distances between the at least one distance sensor and the user can be used as the current position information of the user.
Optionally, the current environment parameters include at least one of the following: ambient brightness, ambient color temperature, and the weather of the environment (for example, fog, air humidity, and the like). The ambient brightness can be detected by an ambient light sensor, the ambient color temperature can be detected by a color temperature sensor, and the weather of the environment can be obtained through a weather application.
Optionally, determining the shooting parameters according to the position information and the current environment parameters may be implemented as follows:
determining the shooting parameters corresponding to the position information and the current environment parameters according to a preset mapping between position information, environment parameters, and shooting parameters.
Specifically, the mapping between the position information of the shooting subject, the environment parameters, and the shooting parameters can be expressed as follows:
Y = F(a, b)
where Y denotes the shooting parameters, a denotes the position information of the shooting subject, b denotes the environment parameters, and F denotes the mapping between the position information of the shooting subject, the environment parameters, and the shooting parameters. This function needs to be obtained through many experiments, as follows:
A certain number of reference objects can be chosen, and the position of each reference object on the terminal screen is recorded (the position of the reference object is obtained using the distance sensor, and then, by the imaging principle, its position on the terminal screen can be obtained). The shooting parameters of the front camera in the preview state (the position and angle at which the reference object appears on the terminal screen, and the Camera preview resolution) are then adjusted according to the position at which the reference object appears on the terminal screen. The criterion for determining the camera position, angle, and preview resolution may be that the reference object in the Camera preview interface is as close as possible to its reflection position on the terminal screen. The camera position, angle, Camera preview resolution, and related parameters then need to be determined through many experiments.
Optionally, before the embodiment of the present invention is performed, the user needs to adjust his or her own posture (treating the screen of the terminal as a mirror and adjusting his or her own position and angle). The screen preview of the terminal is implemented without adding extra hardware, and existing screens all work well. At present there are many kinds of terminal screens, for example, thin film transistor (TFT) screens, in-plane switching (IPS) screens, and active-matrix organic light-emitting diode (AMOLED) screens. Any of these screens can achieve the effect of Fig. 1a and obtain the corresponding positions of the person and the background.
Optionally, the above user instruction can be implemented by a button. For example, a button on the touch screen of the terminal can be operated by touch to trigger shooting. For another example, as shown in Fig. 2a, a button can be provided on the side of the terminal, and shooting starts after the user presses the button. After the user has adjusted his or her posture, the user can long-press the chosen physical button. As for the physical button, as shown in Fig. 2a, a new dedicated Camera physical button can be added to the hardware, or the existing volume-up or volume-down key can be used. Of course, the user instruction can also be given by voice control. Therefore, the terminal can conveniently be controlled to shoot.
203. Determine a beautification parameter corresponding to the shooting parameters.
The beautification parameter corresponding to the shooting parameters can be determined through a mapping between environment parameters and beautification parameters.
For example, the mapping between environment parameters and beautification parameters can be expressed as follows:
y = f(x)
where y denotes the beautification parameter, x denotes the environment parameter, and f denotes the mapping between environment parameters and beautification parameters. The above y = f(x) may be a linear function or a nonlinear function.
Of course, when the shooting parameters include multiple parameters, the beautification parameter corresponding to the shooting parameters can be determined according to at least one of the multiple parameters. A small illustrative sketch of such a mapping is given below.
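The following is a small illustration of such a mapping y = f(x), here a linear function of ambient brightness; the coefficients and the clamping range are assumptions, not values from this disclosure.

```java
public class BeautyParameterMapper {

    /** Maps ambient brightness (lux) to an assumed 0-100 skin-smoothing level:
     *  darker scenes get stronger smoothing. */
    public static int smoothingLevel(double ambientLux) {
        double y = 80.0 - 0.05 * ambientLux; // assumed linear mapping y = f(x)
        return (int) Math.max(0, Math.min(100, y));
    }
}
```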
204. Control the camera to shoot according to the received user instruction.
Optionally, when step 204 is performed, the preview interface is not entered; instead the camera is used to shoot directly, so that a captured image is obtained. In this way, not only can the power consumption of the terminal be reduced, but shooting is also faster.
205. Generate an image corresponding to the shooting parameters.
206. Perform beautification processing on the image according to the beautification parameter to obtain a beautified image.
The terminal can perform beautification processing on the captured image according to the beautification parameter determined in step 203, so that a beautified image is obtained. A beautification effect in the shooting mode can thus be achieved quickly.
It can be seen that, with this embodiment of the invention, a camera is opened, a user instruction is received and current shooting parameters are obtained, a beautification parameter corresponding to the shooting parameters is determined, the camera is controlled to shoot according to the received user instruction, an image corresponding to the shooting parameters is generated, and beautification processing is performed on the image according to the beautification parameter to obtain a beautified image. In this way, while the camera is on and even without entering the preview interface, the environment parameters can be obtained and an image with a beautification effect can be captured quickly.
The following embodiments of the present invention provide an apparatus for implementing the above shooting control method, specifically as follows:
Consistent with the above, referring to Fig. 3a, Fig. 3a is a schematic structural diagram of a first embodiment of a terminal according to an embodiment of the present invention. The terminal described in this embodiment includes an opening unit 301, a receiving unit 302, a control unit 303, and a first generation unit 304, specifically as follows:
the opening unit 301 is configured to open a camera;
the receiving unit 302 is configured to receive a user instruction and obtain current shooting parameters;
the control unit 303 is configured to control the camera to shoot according to the received user instruction; and
the first generation unit 304 is configured to generate an image corresponding to the shooting parameters.
Optionally, the user instruction is triggered by the user according to the reflection position of the shooting subject on the screen.
Optionally, the first generation unit 304 is specifically configured to:
generate, according to a preset association between the reflection position of the shooting subject on the screen and the shooting parameters, an image corresponding to the reflection position of the shooting subject on the screen.
Optionally, the shooting parameters include position information of the shooting subject.
Optionally, Fig. 3b shows a refined structure of the first generation unit 304 of the terminal described in Fig. 3a. The first generation unit 304 may include a second generation unit 3041, an evaluation unit 3042, and a selection unit 3043, specifically as follows:
the second generation unit 3041 is configured to generate a plurality of images corresponding to the shooting parameters;
the evaluation unit 3042 is configured to perform image quality evaluation on the plurality of images to obtain a plurality of image quality evaluation values; and
the selection unit 3043 is configured to select the image corresponding to the maximum of the plurality of image quality evaluation values.
It can be seen that, with this embodiment of the invention, a camera is opened, a user instruction is received and current shooting parameters are obtained, the camera is controlled to shoot according to the received user instruction, and an image corresponding to the shooting parameters is generated. The shooting parameters can therefore be obtained without entering the shooting preview interface, so that rapid shooting can be achieved; to some extent, since no preview interface is displayed, the power consumption of the terminal can also be reduced.
Referring to Fig. 4, Fig. 4 is a schematic structural diagram of a second embodiment of a terminal according to an embodiment of the present invention. The terminal described in this embodiment includes: at least one input device 1000; at least one output device 2000; at least one processor 3000, for example a CPU; and a memory 4000. The input device 1000, the output device 2000, the processor 3000, and the memory 4000 are connected through a bus 5000.
The input device 1000 may specifically be a touch panel, a physical button, or a mouse.
The output device 2000 may specifically be a display screen.
The memory 4000 may be a high-speed RAM memory or a non-volatile memory, such as a magnetic disk memory. The memory 4000 is configured to store a set of program code, and the input device 1000, the output device 2000, and the processor 3000 are configured to call the program code stored in the memory 4000 to perform the following operations:
The processor 3000 is configured to:
open a camera;
receive a user instruction and obtain current shooting parameters;
control the camera to shoot according to the received user instruction; and
generate an image corresponding to the shooting parameters.
Optionally, the user instruction is triggered by the user according to the reflection position of the shooting subject on the screen.
Optionally, the processor 3000 generating the image corresponding to the shooting parameters includes:
generating, according to a preset association between the reflection position of the shooting subject on the screen and the shooting parameters, an image corresponding to the reflection position of the shooting subject on the screen.
Optionally, the shooting parameters include position information of the shooting subject.
Optionally, the processor 3000 generating the image corresponding to the shooting parameters includes:
generating a plurality of images corresponding to the shooting parameters;
performing image quality evaluation on the plurality of images to obtain a plurality of image quality evaluation values; and
selecting the image corresponding to the maximum of the plurality of image quality evaluation values.
In a specific implementation, the input device 1000, the output device 2000, and the processor 3000 described in the embodiments of the present invention can carry out the implementations described in the first or second embodiment of the shooting control method provided by the embodiments of the present invention, and can also carry out the implementation of the terminal described in the first embodiment of the terminal provided by the embodiments of the present invention. Details are not repeated here.
The units in all embodiments of the present invention may be implemented by a general-purpose integrated circuit, such as a CPU (central processing unit), or by an ASIC (application-specific integrated circuit).
The steps in the methods of the embodiments of the present invention may be reordered, combined, or deleted according to actual needs.
The units in the terminal of the embodiments of the present invention may be combined, divided, or deleted according to actual needs.
A person of ordinary skill in the art will understand that all or part of the processes of the methods in the above embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and when executed, the program may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
The shooting control method and terminal provided by the embodiments of the present invention are described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, a person of ordinary skill in the art may make changes to the specific implementations and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A shooting control method, characterized by comprising:
opening a camera;
receiving a user instruction and obtaining current shooting parameters;
controlling the camera to shoot according to the received user instruction; and
generating an image corresponding to the shooting parameters.
2. The method according to claim 1, characterized in that the user instruction is triggered by the user according to the reflection position of the shooting subject on the screen.
3. The method according to claim 2, characterized in that the generating an image corresponding to the shooting parameters comprises:
generating, according to a preset association between the reflection position of the shooting subject on the screen and the shooting parameters, an image corresponding to the reflection position of the shooting subject on the screen.
4. The method according to any one of claims 1 to 3, characterized in that the shooting parameters comprise position information of the shooting subject.
5. The method according to claim 1, characterized in that the generating an image corresponding to the shooting parameters comprises:
generating a plurality of images corresponding to the shooting parameters;
performing image quality evaluation on the plurality of images to obtain a plurality of image quality evaluation values; and
selecting the image corresponding to the maximum of the plurality of image quality evaluation values.
6. A terminal, characterized by comprising:
an opening unit, configured to open a camera;
a receiving unit, configured to receive a user instruction and obtain current shooting parameters;
a control unit, configured to control the camera to shoot according to the received user instruction; and
a first generation unit, configured to generate an image corresponding to the shooting parameters.
7. The terminal according to claim 6, characterized in that the user instruction is triggered by the user according to the reflection position of the shooting subject on the screen.
8. The terminal according to claim 7, characterized in that the first generation unit is specifically configured to:
generate, according to a preset association between the reflection position of the shooting subject on the screen and the shooting parameters, an image corresponding to the reflection position of the shooting subject on the screen.
9. The terminal according to any one of claims 6 to 8, characterized in that the shooting parameters comprise position information of the shooting subject.
10. The terminal according to claim 6, characterized in that the first generation unit comprises:
a second generation unit, configured to generate a plurality of images corresponding to the shooting parameters;
an evaluation unit, configured to perform image quality evaluation on the plurality of images to obtain a plurality of image quality evaluation values; and
a selection unit, configured to select the image corresponding to the maximum of the plurality of image quality evaluation values.
CN201611245833.XA 2016-12-29 2016-12-29 Shooting control method and terminal Withdrawn CN106791412A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611245833.XA CN106791412A (en) 2016-12-29 2016-12-29 Shooting control method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611245833.XA CN106791412A (en) 2016-12-29 2016-12-29 Shooting control method and terminal

Publications (1)

Publication Number Publication Date
CN106791412A true CN106791412A (en) 2017-05-31

Family

ID=58928859

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611245833.XA Withdrawn CN106791412A (en) 2016-12-29 2016-12-29 A kind of filming control method and terminal

Country Status (1)

Country Link
CN (1) CN106791412A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107360375A (en) * 2017-08-29 2017-11-17 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN107832675A (en) * 2017-10-16 2018-03-23 广东欧珀移动通信有限公司 Processing method of taking pictures and Related product
CN108965727A (en) * 2018-06-28 2018-12-07 努比亚技术有限公司 A kind of image-pickup method, terminal and computer readable storage medium
CN110086995A (en) * 2019-05-15 2019-08-02 深圳市道通智能航空技术有限公司 A kind of brightness of image adjusting method, device and unmanned plane
WO2021098070A1 (en) * 2019-11-18 2021-05-27 深圳传音控股股份有限公司 Terminal mirror surface photographing method, terminal, and computer-readable storage medium
CN111327824A (en) * 2020-03-02 2020-06-23 Oppo广东移动通信有限公司 Shooting parameter selection method and device, storage medium and electronic equipment
CN111327824B (en) * 2020-03-02 2022-04-22 Oppo广东移动通信有限公司 Shooting parameter selection method and device, storage medium and electronic equipment
CN114189628A (en) * 2021-11-30 2022-03-15 歌尔光学科技有限公司 Control method and device of shooting function, AR device and storage medium
WO2023097724A1 (en) * 2021-11-30 2023-06-08 歌尔股份有限公司 Photographing function control method and apparatus, ar device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20170531)