CN113727021B - Shooting method and device and electronic equipment - Google Patents


Info

Publication number
CN113727021B
CN113727021B (application CN202110999017.2A)
Authority
CN
China
Prior art keywords
voice signal
sound
voice
preview interface
preset
Prior art date
Legal status
Active
Application number
CN202110999017.2A
Other languages
Chinese (zh)
Other versions
CN113727021A (en)
Inventor
Chen Mingyang (陈明杨)
Current Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202110999017.2A
Publication of CN113727021A
Application granted
Publication of CN113727021B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632 Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting method, a shooting apparatus, and an electronic device, belonging to the technical field of photography. The method comprises: while a shooting preview interface is displayed, acquiring voice information of a first voice signal, wherein the voice information comprises a first sound intensity and sound source information; and performing zoom processing on a preview image in the shooting preview interface at a first zoom magnification when the sound source information indicates that the sound-emitting object of the first voice signal is an object displayed in the shooting preview interface, wherein the first zoom magnification is associated with the first sound intensity.

Description

Shooting method and device and electronic equipment
Technical Field
The application belongs to the technical field of shooting, and particularly relates to a shooting method, a shooting device and electronic equipment.
Background
Currently, zooming on a mobile phone camera is performed with a two-finger gesture: spreading the fingers apart zooms in, and pinching them together zooms out. However, this requires the user to use both hands at the same time, one hand holding the phone while the fingers of the other perform the gesture, which makes the operation inconvenient.
Disclosure of Invention
The embodiments of the application aim to provide a shooting method, a shooting apparatus, and an electronic device, which can solve the problem that the existing zoom operation is inconvenient to perform.
In a first aspect, an embodiment of the present application provides a photographing method, including:
while a shooting preview interface is displayed, acquiring voice information of a first voice signal, wherein the voice information comprises a first sound intensity and sound source information;
performing zoom processing on a preview image in the shooting preview interface at a first zoom magnification when the sound source information indicates that the sound-emitting object of the first voice signal is an object displayed in the shooting preview interface;
wherein the first zoom magnification is associated with the first sound intensity.
In a second aspect, an embodiment of the present application provides a photographing apparatus, including:
an acquisition module, configured to acquire voice information of a first voice signal while a shooting preview interface is displayed, wherein the voice information comprises a first sound intensity and sound source information;
a processing module, configured to perform zoom processing on a preview image in the shooting preview interface at a first zoom magnification when the sound source information indicates that the sound-emitting object of the first voice signal is an object displayed in the shooting preview interface;
wherein the first zoom magnification is associated with the first sound intensity.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, the program or instruction implementing the steps of the method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In the embodiments of the application, while the shooting preview interface is displayed, the first sound intensity and sound source information of a first voice signal are acquired, and when the sound source information indicates that the sound-emitting object of the voice signal is an object displayed in the shooting preview interface, zoom processing is performed on the preview image in the shooting preview interface at a first zoom magnification associated with the first sound intensity. In this way, the preview image is zoomed automatically at a magnification related to the sound intensity of the voice signal of the object displayed in the shooting preview interface; the user does not need to zoom manually, which simplifies the zoom operation.
Drawings
Fig. 1 is a flowchart of a photographing method provided in an embodiment of the present application;
fig. 2 is one of the interface display schematic diagrams of an electronic device provided in an embodiment of the present application;
fig. 3 is a second interface display schematic diagram of an electronic device provided in an embodiment of the present application;
fig. 4 is a third interface display schematic diagram of an electronic device provided in an embodiment of the present application;
fig. 5 is a fourth interface display schematic diagram of an electronic device provided in an embodiment of the present application;
fig. 6 is a fifth interface display schematic diagram of an electronic device provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present application;
fig. 8 is one of schematic structural diagrams of an electronic device according to an embodiment of the present application;
fig. 9 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application are within the scope of the protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish similar objects and do not necessarily describe a particular order or sequence. It should be understood that the data so used are interchangeable where appropriate, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. Furthermore, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The shooting method provided by the embodiments of the application is described in detail below through specific embodiments and application scenarios, with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present application provides a shooting method, which may be applied to an electronic device, where the electronic device may be a mobile phone, a tablet computer, a notebook computer, or the like. As shown in FIG. 1, the method may include steps 1100-1400, which are described in detail below.
In step 1100, in the case of displaying the photographing preview interface, the voice information of the first voice signal is acquired.
The shooting preview interface is the interface displayed after the shooting application is opened; it displays the object being shot.
The subject may interact with the electronic device through a set voice, which may be user-defined. For example, the set voice may be "see here", "I am here", or the like.
In this embodiment, before step 1100 of acquiring the voice information of the first voice signal while the shooting preview interface is displayed, the shooting method of the present disclosure may further include: while the shooting preview interface is displayed, providing a configuration entry for voice configuration, and acquiring a voice input through the configuration entry as the set voice.
As shown in fig. 2, while the shooting preview interface is displayed, the photographer may tap "Settings" to enter the voice-entry page, tap "Start recording voice", and then record the set voice, for example "see here", "I am here", and so on.
The first voice signal may be a sound emitted by an object displayed in the shooting preview interface, or a sound emitted by an object outside the shooting preview interface.
The voice information includes first sound intensity and sound source information.
In this embodiment, the first sound intensity of the first voice signal may be determined by detecting the amplitude of the first voice signal.
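The patent does not specify how amplitude is converted to intensity; one common approach is an RMS level in decibels. The sketch below assumes normalized PCM samples and a dB-relative-to-full-scale convention, neither of which comes from the patent:

```python
import math

def sound_intensity_db(samples):
    """Estimate a speech frame's sound intensity from its amplitude.

    `samples` are normalized PCM samples in [-1.0, 1.0]; the result is the
    RMS level in dB relative to full scale (0 dB = maximum amplitude).
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12))  # clamp to avoid log10(0)
```

A full-scale square wave yields 0 dB, and quieter frames yield increasingly negative values, so "smaller first sound intensity" corresponds to a lower dB figure.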
In this embodiment, the phase difference between the first voice signals picked up by two microphones in the electronic device may be obtained first, and then combined with the distance and angle between the two microphones to determine the sound source information of the first voice signal.
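Two-microphone localization from a phase difference can be sketched as follows. The far-field model, the single dominant frequency, and the microphone spacing are illustrative assumptions, not details taken from the patent:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def doa_angle(phase_diff_rad, mic_spacing_m, freq_hz):
    """Estimate the direction-of-arrival angle (degrees from the array
    broadside) of a far-field sound source from the phase difference
    between two microphones.

    The inter-microphone time delay is tau = phase_diff / (2*pi*f);
    for a far-field source, sin(theta) = tau * c / d.
    """
    tau = phase_diff_rad / (2.0 * math.pi * freq_hz)
    sin_theta = max(-1.0, min(1.0, tau * SPEED_OF_SOUND / mic_spacing_m))
    return math.degrees(math.asin(sin_theta))
```

For example, zero phase difference places the source directly in front of the array (0 degrees), while larger delays map to larger off-axis angles.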
Example 1: when the electronic device starts running the shooting application, the shooting preview interface may be displayed on the display screen of the electronic device. While the shooting preview interface is displayed, an object displayed in the interface may interact with the electronic device through the set voice. As shown in fig. 3 and fig. 4, two objects are displayed in the shooting preview interface, and one of them shouts "I am here".
After the object in the shooting preview interface shouts "I am here", the electronic device takes "I am here" as the first voice signal. The sound intensity of the first voice signal is determined from its amplitude and taken as the first sound intensity. The phase difference between the copies of "I am here" picked up by the two microphones, combined with the distance and angle between the microphones, determines the sound source information of the first voice signal. In fig. 3 and fig. 4, the sound source information indicates that the sound-emitting object of the first voice signal is an object displayed in the shooting preview interface.
While the shooting preview interface is displayed, after the voice information of the first voice signal is acquired, the method proceeds to:
In step 1200, when the sound source information indicates that the sound-emitting object of the first voice signal is an object displayed in the shooting preview interface, zoom processing is performed on the preview image in the shooting preview interface at the first zoom magnification.
The first zoom magnification is associated with the first sound intensity. That is, when the first voice signal originates from an object displayed in the shooting preview interface, the electronic device performs automatic zoom processing on the preview image at a first zoom magnification associated with the intensity of the first voice signal.
In an embodiment, before zoom processing is performed on the preview image at the first zoom magnification, the first zoom magnification may be determined according to the following steps 2100 to 2200, after which automatic zoom processing is performed on the preview image at that magnification. That is, the shooting method may further include the following steps 2100 to 2200:
In step 2100, when the first sound intensity is less than or equal to a preset intensity threshold, a first preset magnification corresponding to the first sound intensity is determined as the first zoom magnification.
The preset intensity threshold may be a value set according to the actual application scenario and actual requirements.
In step 2100, first mapping data reflecting the mapping between different first sound intensities and different first preset magnifications are stored in the electronic device in advance; after the first sound intensity is obtained, the first preset magnification corresponding to it can be matched from the first mapping data.
It will be appreciated that when the first sound intensity is less than or equal to the preset intensity threshold, the object indicated by the sound source information of the first voice signal is far from the image capturing apparatus, and zooming in is required, that is, the zoom magnification needs to be increased. The smaller the first sound intensity, the higher the zoom magnification, which matches the actual scene requirement.
Continuing with example 1 of step 1100: in fig. 3, "I am here" is a sound made by an object displayed in the shooting preview interface, so that object is taken as the area to be zoomed. Because the intensity of "I am here" is low, the object is far from the image capturing apparatus and needs to be zoomed in on; the first preset magnification 5x corresponding to the first sound intensity of "I am here", obtained from the first mapping data, is taken as the first zoom magnification, that is, the current zoom magnification 1x is increased to 5x.
Step 2200: when the first sound intensity is greater than the preset intensity threshold, a second preset magnification corresponding to the first sound intensity is determined as the first zoom magnification.
The first preset magnification is larger than the second preset magnification.
In step 2200, second mapping data reflecting the mapping between different first sound intensities and different second preset magnifications are stored in the electronic device in advance; after the first sound intensity is obtained, the second preset magnification corresponding to it can be matched from the second mapping data.
It can be understood that when the first sound intensity is greater than the preset intensity threshold, the object indicated by the sound source information of the first voice signal is close to the image capturing device, and zooming out is required, that is, the zoom magnification needs to be reduced. The larger the first sound intensity, the lower the zoom magnification, which matches the actual scene requirement.
Continuing with example 1 of step 1100: in fig. 4, "I am here" is a sound made by an object displayed in the shooting preview interface, so that object is taken as the area to be zoomed. Because the intensity of "I am here" is high, the object is close to the image capturing apparatus and the view needs to be zoomed out; the second preset magnification 1x corresponding to the first sound intensity of "I am here", obtained from the second mapping data, is taken as the first zoom magnification, that is, the current zoom magnification 5x is reduced to 1x.
In this embodiment, performing zoom processing on the preview image at the first zoom magnification may further include: performing the zoom processing on the preview image, centered on the object displayed in the shooting preview interface, at the first zoom magnification.
As shown in fig. 3, the preview image in the shooting preview interface may be automatically zoomed to 5x, centered on the object that shouts "I am here".
As shown in fig. 4, the preview image in the shooting preview interface may be automatically zoomed to 1x, centered on the object that shouts "I am here".
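The two-branch selection of steps 2100 and 2200 can be sketched with hypothetical mapping tables. The threshold and magnification values below are invented for illustration; the patent only requires that the first preset magnifications exceed the second preset magnifications:

```python
INTENSITY_THRESHOLD_DB = -20.0  # hypothetical preset intensity threshold

# First mapping data: quieter voice -> subject farther away -> higher magnification.
# Entries are (upper intensity bound in dB, preset magnification).
FIRST_MAPPING = [(-40.0, 8.0), (-30.0, 5.0), (-20.0, 3.0)]
# Second mapping data: louder voice -> subject closer -> lower magnification.
SECOND_MAPPING = [(-10.0, 2.0), (0.0, 1.0)]

def first_zoom_magnification(intensity_db):
    """Pick the first zoom magnification for a given first sound intensity."""
    table = FIRST_MAPPING if intensity_db <= INTENSITY_THRESHOLD_DB else SECOND_MAPPING
    for upper_bound, magnification in table:
        if intensity_db <= upper_bound:
            return magnification
    return table[-1][1]  # louder than every bound: use the lowest magnification
```

With these placeholder tables, a faint shout maps to 5x (the fig. 3 case) while a loud, nearby shout maps to 1x (the fig. 4 case).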
According to the method of this embodiment, while a shooting preview interface is displayed, the first sound intensity and sound source information of a first voice signal are acquired, and when the sound source information indicates that the sound-emitting object of the voice signal is an object displayed in the shooting preview interface, zoom processing is performed on the preview image at a first zoom magnification associated with the first sound intensity. The preview image is thus zoomed automatically at a magnification related to the sound intensity of the voice signal of the displayed object; the user does not need to zoom manually, which simplifies the zoom operation.
In an embodiment, before zoom processing is performed on the preview image at the first zoom magnification, the first zoom magnification may be determined according to the following steps 3100 to 3500, after which automatic zoom processing is performed on the preview image at that magnification. That is, the shooting method may further include the following steps 3100 to 3500:
step 3100, determining a first intermediate magnification according to a first sound intensity of the first speech signal.
In step 3100, first mapping data and second mapping data are stored in the electronic device in advance: the first mapping data reflect the mapping between different first sound intensities and different first preset magnifications, and the second mapping data reflect the mapping between different first sound intensities and different second preset magnifications. When the first sound intensity is less than or equal to the preset intensity threshold, the first preset magnification corresponding to it is matched from the first mapping data as the first intermediate magnification; when the first sound intensity is greater than the threshold, the second preset magnification corresponding to it is matched from the second mapping data as the first intermediate magnification. The first and second mapping data may be stored as a mapping table or in another form of mapping relationship, which is not specifically limited here.
In step 3100, the first sound intensity is thus compared with the preset intensity threshold, and the first or second preset magnification corresponding to the first sound intensity is matched from the first or second mapping data as the first intermediate magnification.
Continuing with example 1 of step 2100: when the first sound intensity is less than or equal to the preset intensity threshold, the first preset magnification 5x may be determined as the first intermediate magnification from the first sound intensity of the first voice signal and the first mapping data.
Continuing with example 1 of step 2200: when the first sound intensity is greater than the preset intensity threshold, the second preset magnification 1x may be determined as the first intermediate magnification from the first sound intensity of the first voice signal and the second mapping data.
Step 3200: a second sound intensity of an interference signal in the first voice signal is obtained.
It will be appreciated that in an actual shooting scene, the sound source is often disturbed by the environment. The interference signal may be any of various noise signals, such as the whistle of an automobile or a train.
In step 3200, third mapping data reflecting the mapping between different interference signals and different second sound intensities are stored in the electronic device in advance. The amplitude of the interference signal in the first voice signal may first be obtained, and the second sound intensity corresponding to the interference signal then determined from that amplitude and the third mapping data.
In step 3300, a second intermediate magnification is determined based on the second sound intensity and the first intermediate magnification, and the method then proceeds to step 3400 or step 3500.
It will be appreciated that in the presence of an interference signal, the first intermediate magnification is typically reduced. The larger the second sound intensity, the more the first intermediate magnification is reduced, and the smaller the second intermediate magnification becomes.
In step 3300, when an interference signal exists in the first voice signal of the object displayed in the shooting preview interface, the first intermediate magnification obtained in step 3100 is adjusted according to the second sound intensity of the interference signal to obtain the second intermediate magnification. At the same time, prompt information (for example, asking whether to perform the zoom operation) is displayed on the shooting preview interface, and the electronic device waits for the photographer's voice feedback.
In step 3400, if a second voice signal from the photographer is not received within a preset time, the second intermediate magnification is determined as the first zoom magnification.
In step 3400, if the photographer's voice signal is not received within the preset time, the second intermediate magnification is used as the first zoom magnification, and zoom processing is then performed on the preview image in the shooting preview interface at the first zoom magnification.
Continuing with example 1 of step 3100: when 5x has been determined as the first intermediate magnification but interference is present, the first intermediate magnification may be reduced, according to the intensity of the interference signal, to 3x as the second intermediate magnification. If the photographer's voice signal is not received within the preset time, the second intermediate magnification 3x is used as the first zoom magnification, and zoom processing is performed on the preview image at that magnification.
Continuing with example 1 of step 3100: when 1x has been determined as the first intermediate magnification but interference is present, the first intermediate magnification may be reduced, according to the intensity of the interference signal, to 0.8x as the second intermediate magnification. If the photographer's voice signal is not received within the preset time, 0.8x is used as the first zoom magnification, and zoom processing is performed on the preview image at that magnification.
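Step 3300's reduction of the first intermediate magnification in proportion to interference intensity might look like the sketch below. The attenuation curve is a made-up placeholder chosen so that the 5x-to-3x and 1x-to-0.8x examples hold; it is not a formula from the patent:

```python
def second_intermediate_magnification(first_intermediate, interference_db, floor=0.5):
    """Attenuate the intermediate magnification by the interference intensity.

    `interference_db` is the interference level in dB relative to full scale
    (higher = stronger); stronger interference yields a smaller second
    intermediate magnification, clamped at `floor`.
    """
    excess = max(0.0, interference_db + 60.0)  # interference above an assumed -60 dB floor
    attenuation = 1.0 / (1.0 + excess / 60.0)  # placeholder monotone-decreasing curve
    return max(floor, first_intermediate * attenuation)
```

Any monotone-decreasing attenuation would satisfy the "larger second sound intensity, smaller second intermediate magnification" rule; the specific curve here is only for illustration.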
Step 3500, obtaining target information when the second voice signal of the photographer is received within the preset time.
The target information includes at least one of: third sound intensity of the second voice signal, keywords in the second voice signal.
In step 3500, if the photographer's second voice signal is received within the preset time, it is recognized to obtain the third sound intensity of the second voice signal and the keywords in the second voice signal; the second intermediate magnification is then adjusted according to the third sound intensity and/or the keywords to obtain the first zoom magnification, and zoom processing is performed on the preview image in the shooting preview interface based on the first zoom magnification.
Continuing with example 1 of step 3400: if the electronic device receives the photographer's second voice signal, for example "a bit bigger", within the preset time, it increases the second intermediate magnification according to the second voice signal and/or the keyword "bigger", for example raising the second intermediate magnification from 3x to 5x as the first zoom magnification, and then performs zoom processing on the preview image at that magnification.
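Step 3500's keyword-driven adjustment could be sketched as below; the keyword set, step size, and magnification bounds are invented for illustration, since the patent leaves the adjustment rule unspecified:

```python
KEYWORD_STEPS = {"bigger": +2.0, "smaller": -2.0}  # hypothetical recognized keywords

def apply_voice_feedback(magnification, keyword, min_mag=0.5, max_mag=10.0):
    """Adjust the second intermediate magnification from a keyword recognized
    in the photographer's second voice signal; unknown keywords leave it as is.
    The result is clamped to the device's assumed zoom range."""
    step = KEYWORD_STEPS.get(keyword, 0.0)
    return min(max_mag, max(min_mag, magnification + step))
```

For instance, feedback containing "bigger" raises a 3x intermediate magnification to 5x, matching the example above.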
According to this embodiment, when the sound emitted by the shot object is subject to interference, an expected zoom magnification is derived in combination with the intensity of the interference signal and used to zoom the preview image in the shooting preview interface; the user does not need to zoom manually, which simplifies the zoom operation. At the same time, voice feedback from the user is supported for adjusting the expected zoom magnification, so that the resulting magnification meets the user's needs.
In one embodiment, after the above step 1100 is performed to obtain the voice information of the first voice signal, the photographing method of the embodiment of the disclosure may further include the following steps 4100 to 4200:
In step 4100, if the sound source information indicates that the sound-emitting object of the first voice signal is an object outside the shooting preview interface, azimuth information of the sound-emitting object is determined based on the sound source information.
In this embodiment, when the sound source information of the first voice signal indicates that the sound-emitting object is outside the shooting preview interface, the azimuth information of the sound-emitting object is first determined based on the sound source information.
Example 2: after the electronic device begins running the shooting application, the shooting preview interface may be displayed on the display screen. While the shooting preview interface is displayed, the person being shot may interact through the set voice. As shown in fig. 5, although no object is displayed in the shooting preview interface, a voice source "I am here" exists outside it; the electronic device then determines the azimuth information of the sound-emitting object based on the sound source information of "I am here".
Step 4200: a prompt is output based on the azimuth information.
The prompt information instructs the photographer to rotate the photographing device so that the sound-emitting object is displayed in the shooting preview interface.
In this embodiment, once the azimuth information of the sound-emitting object is determined, prompt information indicating the direction in which the photographer should rotate the photographing device is displayed on the display interface of the electronic device, so that the photographer rotates the device accordingly and the sound-emitting object appears in the shooting preview interface.
Continuing with example 2 of step 4100 above, as shown in fig. 5, the display interface of the electronic device outputs a prompt message that includes not only the text message "please turn the mobile phone" but also the pointing message to the voice source "i am there", which is an arrow in fig. 5.
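The direction prompt of step 4200 reduces to mapping the estimated azimuth to a rotation hint. The field-of-view threshold and prompt wording below are illustrative assumptions:

```python
def rotation_prompt(azimuth_deg, half_fov_deg=30.0):
    """Return a prompt telling the photographer which way to turn the phone.

    `azimuth_deg` is the sound source azimuth relative to the camera's optical
    axis (negative = left of the frame, positive = right). Returns None when
    the source is already within the assumed horizontal field of view.
    """
    if abs(azimuth_deg) <= half_fov_deg:
        return None  # sound-emitting object already in the preview interface
    direction = "left" if azimuth_deg < 0 else "right"
    return f"Please turn the mobile phone to the {direction}"
```

In a real implementation the returned direction would also drive the on-screen arrow shown in fig. 5.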
According to this embodiment, prompting and interaction for a voice source outside the shooting preview screen are realized, helping the photographer find the object to be shot more quickly when the subject is out of frame.
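The turn-direction prompt of steps 4100 to 4200 can be sketched as follows; the angle convention, the field-of-view value, and the function name are illustrative assumptions rather than details from the patent:

```python
def direction_prompt(source_azimuth_deg, camera_azimuth_deg, fov_deg=78.0):
    """Suggest which way to turn the phone toward an off-screen speaker.

    source_azimuth_deg: estimated bearing of the speaker, e.g. from
    microphone-array localisation (assumed input).
    camera_azimuth_deg: direction the camera currently faces.
    fov_deg: horizontal field of view of the preview (hypothetical value).
    """
    # Signed offset folded into (-180, 180]; positive = source on the right.
    offset = (source_azimuth_deg - camera_azimuth_deg + 180.0) % 360.0 - 180.0
    if abs(offset) <= fov_deg / 2:
        return None  # source already inside the preview frame, no prompt
    side = "right" if offset > 0 else "left"
    return f"Please turn the phone to the {side} ({abs(offset):.0f}°)"
```

In a real implementation, the arrow shown in fig. 5 would be drawn according to the sign of this offset.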
In one embodiment, before performing the above step 1100 to obtain the voice information of the first voice signal, the photographing method of the embodiment of the disclosure may further include the following steps 5100 to 5200:
In step 5100, a fourth speech signal is acquired.
The fourth speech signal comprises sub-speech signals of at least one sound object.
In example 3, as shown in fig. 6, in the case of displaying the photographing preview interface, if three objects displayed in the photographing preview interface are simultaneously uttered, the electronic device may acquire three sub-voice signals, namely, sub-voice signal 1 of object 1, sub-voice signal 2 of object 2, and sub-voice signal 3 of object 3.
In step 5200, a sub-speech signal of the target sound object in the fourth speech signal is obtained.
The target sound object may be an object satisfying a preset condition. The preset condition includes: the target sound object is located in a preset area in the shooting preview interface, or the object features of the target sound object match preset object features. That is, when the fourth voice signal contains the sound of a sounding object satisfying the preset condition, that sound may be determined as the sub-voice signal of the target sound object.
In one example, the preset condition includes that the object features of the target sound object match the preset object features.
The preset object features may be pre-stored face information, together with attribute information annotated for each piece of face information; the attribute information may include a name and a relationship with the photographer. Accordingly, the photographing method of the present disclosure may further include: receiving a first input, and acquiring the preset object features in response to the first input.
In this example, in the case of displaying the shooting preview interface, the electronic device may acquire the sub-voice signal of at least one sound object, and at the same time, the electronic device further identifies whether the sub-voice information of the target sound object exists in the sub-voice signal of the at least one sound object. For example, whether the object features of the sound generating object are matched with the preset object features is firstly identified, and when the object features are matched, the corresponding sound generating object is used as a target sound generating object, and a sub-voice signal of the target sound generating object is acquired.
Continuing with example 3 of step 5100, after the electronic device obtains the three sub-voice signals, the object features of the three sound generating objects are matched with the preset object features, and if the object features of the object 1 in the three sound generating objects are successfully matched with the preset object features, the object 1 is taken as the target sound generating object, and the sub-voice signals of the object 1 are obtained.
In one example, the preset condition is that the target sound object is located in a preset area of the shooting preview interface. The preset area may be the center area of the shooting preview interface.
In this example, while the shooting preview interface is displayed, the electronic device may acquire the sub-voice signal of at least one sounding object, and at the same time determine whether any of the sounding objects is located in the center area of the shooting preview interface. If such an object exists, it is taken as the main subject, that is, the target sound object.
It can be understood that the closer a face is to the center area of the shooting preview interface, the more likely it is to be the main subject.
Continuing with example 3 of step 5100, in the case of displaying the shooting preview interface, the electronic device may acquire a sub-speech signal of at least one sound object, and at the same time, the electronic device further identifies whether there is an object located in a central region of the shooting preview interface among the three sound objects, and in the case that object 1 of the three sound objects is located in a central region of the shooting preview interface, takes object 1 as a target sound object, and acquires a sub-speech signal of the target sound object.
In step 5300, a sub-speech signal of the target speech object is determined as a first speech signal.
Continuing with example 3 of step 5200, after determining the sub-speech signal of the target sound object, i.e., object 1, the sub-speech signal of object 1 may be determined as the first speech signal, and further, the zooming process may be performed on the preview image in the photographing preview interface with respect to the object 1 in the photographing preview interface as a center according to the first zoom magnification associated with the first sound intensity of the first speech signal.
According to the embodiment, when a plurality of objects in the shooting preview screen emit sound at the same time, the sound emitted by the main object, namely the target sound emitting object, in the shooting preview screen is determined, so that the preview image in the shooting preview interface can be zoomed by taking the target sound emitting object as the center.
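A minimal sketch of the target-selection rule in steps 5100 to 5300 — preset-feature match first, then the center-region test — is given below; the data layout, the `center_frac` parameter, and all names are assumptions for illustration:

```python
def pick_target_object(objects, frame_w, frame_h, known_faces, center_frac=0.5):
    """Pick the target sound object among sounding candidates.

    objects: dicts with 'id', 'bbox' = (x, y, w, h), and 'face' keys.
    known_faces: pre-registered face identifiers (stands in for the
    preset object features).
    """
    # Bounds of a central region covering center_frac of each dimension.
    cx_lo, cx_hi = frame_w * (1 - center_frac) / 2, frame_w * (1 + center_frac) / 2
    cy_lo, cy_hi = frame_h * (1 - center_frac) / 2, frame_h * (1 + center_frac) / 2
    for obj in objects:                      # rule 1: preset-feature match
        if obj["face"] in known_faces:
            return obj["id"]
    for obj in objects:                      # rule 2: center-region test
        x, y, w, h = obj["bbox"]
        cx, cy = x + w / 2, y + h / 2        # bounding-box centre
        if cx_lo <= cx <= cx_hi and cy_lo <= cy <= cy_hi:
            return obj["id"]
    return None                              # no target sound object found
```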
In one embodiment, after the above step 1100 is performed to obtain the voice information of the first voice signal, the photographing method of the embodiment of the disclosure may further include the following steps 6100 to 6200:
in step 6100, a third voice signal is acquired.
The third speech signal includes M sub-speech signals of M sound objects. M is a positive integer greater than or equal to 2.
In example 4, while the shooting preview interface is displayed, suppose two objects displayed in the shooting preview interface and one object outside the shooting preview interface sound at the same time, where object 1 and object 2 are displayed in the shooting preview interface and object 3 is located outside the shooting preview screen. The electronic device will acquire three sub-voice signals, namely sub-voice signal 1 of object 1, sub-voice signal 2 of object 2, and sub-voice signal 3 of object 3.
In step 6200, at least one of the M sub-voice signals is determined as the first voice signal according to the priority of each sub-voice signal.
In step 6200, at least one sub-speech signal may be obtained from the M sub-speech signals according to the descending order of priority of each sub-speech signal and determined as the first speech signal.
In step 6200, when the M sub-voice signals are acquired, the M sub-voice signals may first be sorted by priority from high to low to obtain a descending order of the sub-voice signals. The sorting principles are as follows: the priority of a sub-voice signal from an object inside the shooting preview interface is higher than that of a sub-voice signal from outside the shooting preview interface; a sub-voice signal with higher sound intensity has higher priority than one with lower sound intensity; and a sub-voice signal closer to a preset area of the shooting preview interface has higher priority than one farther from that area. The preset area may be the center area of the shooting preview interface.
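The three sorting principles can be expressed as one composite sort key; the dictionary layout is an assumption, and Python's tuple comparison applies the rules in order:

```python
def rank_sub_signals(signals):
    """Sort sub-voice signals by descending priority.

    Each signal dict carries 'in_frame' (bool), 'intensity' (louder is
    higher priority), and 'dist_to_center' (distance to the preset area;
    smaller is higher priority).
    """
    return sorted(
        signals,
        key=lambda s: (
            not s["in_frame"],      # in-frame sources first
            -s["intensity"],        # then louder sources first
            s["dist_to_center"],    # then closer to the preset area first
        ),
    )
```

The first element of the result is the natural candidate for the first voice signal.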
In one example, the highest priority sub-speech signal may be selected directly to be determined as the first speech signal based on the descending order of priority of each sub-speech signal.
Continuing with example 4 of step 6100, the electronic device obtains three sub-voice signals, where sub-voice signal 1 and sub-voice signal 2 are voice signals sent by an object displayed in the shooting preview interface, and sub-voice signal 3 is a voice signal sent by an object outside the shooting preview interface. The priority of sub-speech signal 1 and the priority of sub-speech signal 2 are both higher than the priority of sub-speech signal 3. Meanwhile, since the sound intensity of the sub-voice signal 1 is greater than that of the sub-voice signal 2, the priority of the sub-voice signal 1 is greater than that of the sub-voice signal 2. Here, the sub-speech signal 1 may be directly selected as the first speech signal.
In one example, the sub-speech signals prioritized first and second may be selected as the first speech signal based on a descending order of priority of each sub-speech signal.
Continuing with example 4 of step 6100, the electronic device obtains three sub-voice signals, where sub-voice signal 1 and sub-voice signal 2 are voice signals sent by an object displayed in the shooting preview interface, and sub-voice signal 3 is a voice signal sent by an object outside the shooting preview interface. Then the priority of sub-speech signal 1 and the priority of sub-speech signal 2 are both higher than the priority of sub-speech signal 3, and sub-speech signal 1 and sub-speech signal 2 are selected as the first speech signal.
It will be appreciated that when sub-voice signal 1 and sub-voice signal 2 are both selected as the first voice signal, the zoom magnification 1 associated with the sound intensity of sub-voice signal 1 and the zoom magnification 2 associated with the sound intensity of sub-voice signal 2 may be acquired. The two magnifications are then compared: if zoom magnification 1 and zoom magnification 2 are close, they are fused to obtain a fused magnification, which is taken as the first zoom magnification, and a zoom operation is performed on the preview image in the shooting preview interface according to the first zoom magnification, for example centered on object 1 and object 2.
If zoom magnification 1 and zoom magnification 2 are not close, then, as analyzed above, the priority of sub-voice signal 1 of object 1 is higher than that of sub-voice signal 2 of object 2, so zoom magnification 1 associated with sub-voice signal 1 is taken as the first zoom magnification, and the zoom operation is performed on the preview image in the shooting preview interface at the first zoom magnification, for example centered on object 1.
According to this embodiment, when a plurality of objects in the shooting preview screen sound at the same time, the sounds are ranked by priority, and zooming is performed with respect to the object corresponding to the high-priority sound.
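The close-or-not fusion rule for two selected signals might look like the sketch below, where `close_tol` is an assumed relative threshold — the patent does not specify how "close" is judged:

```python
def fuse_zoom(mag_hi, mag_lo, close_tol=0.2):
    """Combine zoom magnifications of two simultaneously selected speakers.

    mag_hi: magnification tied to the higher-priority sub-voice signal.
    mag_lo: magnification tied to the lower-priority one.
    """
    if abs(mag_hi - mag_lo) <= close_tol * max(mag_hi, mag_lo):
        return (mag_hi + mag_lo) / 2.0   # close: fuse by averaging
    return mag_hi                        # not close: higher priority wins
```

Averaging is only one plausible fusion; a mean weighted by priority would also satisfy the description.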
Corresponding to the above embodiment, as shown in fig. 7, the embodiment of the present application further provides a photographing device 700, including:
the acquiring module 710 is configured to acquire, when the shooting preview interface is displayed, voice information of a first voice signal, where the voice information includes first sound intensity and sound source information.
And a processing module 720, configured to perform zoom processing on the preview image in the shooting preview interface according to a first zoom magnification when the sound source information indicates that the sound object of the first voice signal is an object displayed in the shooting preview interface.
Wherein the first zoom magnification is associated with the first sound intensity.
In one embodiment, the processing module 720 is further configured to: determining a first preset magnification corresponding to the first sound intensity as a first zoom magnification when the first sound intensity is less than or equal to a preset intensity threshold; and determining a second preset multiplying factor corresponding to the first sound intensity as a first zooming multiplying factor under the condition that the first sound intensity is larger than the preset intensity threshold.
Wherein the first preset multiplying power is larger than the second preset multiplying power.
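The threshold rule restated by this module — a quieter, presumably more distant speaker gets the larger preset magnification — can be sketched as follows; the threshold and the two preset values are illustrative, not taken from the patent:

```python
def first_zoom(intensity, threshold=60.0, preset1=3.0, preset2=1.5):
    """Map the first sound intensity to the first zoom magnification.

    preset1 > preset2, as the embodiment requires: intensity at or below
    the threshold selects the larger first preset magnification.
    """
    return preset1 if intensity <= threshold else preset2
```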
In one embodiment, the processing module 720 is further configured to: determining a first intermediate multiplying factor according to a first sound intensity of the first voice signal; acquiring a second sound intensity of an interference signal in the first voice signal; determining a second intermediate multiplying power according to the second sound intensity and the first intermediate multiplying power; and determining the second intermediate magnification as the first zoom magnification when the second voice signal of the photographer is not received within a preset time period.
In one embodiment, the processing module 720 is further configured to: and acquiring target information under the condition that a second voice signal of a photographer is received within a preset time period, wherein the target information comprises at least one of the following items: a third sound intensity of the second voice signal, a keyword in the second voice signal; and determining the first zoom magnification according to the target information and the second intermediate magnification.
In one embodiment, the processing module 720 is further configured to: determining azimuth information of the sound generating object according to the sound source information under the condition that the sound source information indicates that the sound generating object of the first voice signal is an object outside the shooting preview interface; and outputting prompt information based on the azimuth information, wherein the prompt information is used for indicating a photographer to rotate the direction of the shooting device so as to enable the sounding object to be displayed in the shooting preview interface.
In one embodiment, the obtaining module 710 is further configured to obtain a third speech signal, where the third speech signal includes M sub-speech signals of M sound objects.
The processing module 720 is further configured to determine at least one sub-voice signal of the M sub-voice signals as a first voice signal according to the priority of each sub-voice signal.
Wherein M is an integer greater than or equal to 2.
In one embodiment, the obtaining module 710 is further configured to obtain a fourth voice signal, where the fourth voice signal includes a sub-voice signal of at least one sound object; and acquiring a sub-voice signal of the target sound object in the fourth voice signal.
The processing module 720 is further configured to determine a sub-speech signal of the target sound object as the first speech signal.
Wherein, the target sound production object satisfies a preset condition, the preset condition includes: and the target sound object is positioned in a preset area in the shooting preview interface, or the object characteristics of the target sound object are matched with the preset object characteristics.
The photographing device in the embodiment of the application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The photographing device in the embodiment of the application may be a device having an operating system. The operating system may be an Android operating system, an ios operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The photographing device provided in the embodiment of the present application can implement each process implemented by the foregoing method embodiment, and in order to avoid repetition, details are not repeated here.
In correspondence to the above embodiment, optionally, as shown in fig. 8, the embodiment of the present application further provides an electronic device 800, including a processor 801, a memory 802, and a program or an instruction stored in the memory 802 and capable of running on the processor 801, where the program or the instruction implements each process of the above shooting method embodiment when executed by the processor 801, and the process can achieve the same technical effect, and for avoiding repetition, a description is omitted herein.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 9 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 900 includes, but is not limited to: radio frequency unit 901, network module 902, audio output unit 903, input unit 904, sensor 905, display unit 906, user input unit 907, interface unit 908, memory 909, and processor 910.
Those skilled in the art will appreciate that the electronic device 900 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 910 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The processor 910 is configured to obtain, when the display unit 906 displays the shooting preview interface, voice information of a first voice signal, where the voice information includes first sound intensity and sound source information; performing zoom processing on a preview image in the shooting preview interface according to a first zoom magnification when the sound source information indicates that a sound object of the first voice signal is an object displayed in the shooting preview interface; wherein the first zoom magnification is associated with the first sound intensity.
In one embodiment, the processor 910 is further configured to determine a first preset magnification corresponding to the first sound intensity as a first zoom magnification when the first sound intensity is less than or equal to a preset intensity threshold; determining a second preset multiplying factor corresponding to the first sound intensity as a first zooming multiplying factor under the condition that the first sound intensity is larger than the preset intensity threshold; wherein the first preset multiplying power is larger than the second preset multiplying power.
In one embodiment, the processor 910 is further configured to determine a first intermediate multiplying factor according to a first sound intensity of the first speech signal; acquiring a second sound intensity of an interference signal in the first voice signal; determining a second intermediate multiplying power according to the second sound intensity and the first intermediate multiplying power; in the case where the second voice signal of the photographer is not received through the user input unit 907 within a preset period of time, the second intermediate magnification is determined as the first zoom magnification.
In one embodiment, the processor 910 is further configured to obtain target information if the second voice signal of the photographer is received through the user input unit 907 within a preset period of time, where the target information includes at least one of the following: a third sound intensity of the second voice signal, a keyword in the second voice signal; and to determine the first zoom magnification according to the target information and the second intermediate magnification.
In one embodiment, the processor 910 is further configured to determine, if the sound source information indicates that the sound object of the first speech signal is an object outside the shooting preview interface, azimuth information of the sound object according to the sound source information; based on the azimuth information, a prompt message is output through the display unit 906, where the prompt message is used to instruct a photographer to rotate the direction of the photographing device, so that the sound object is displayed in the photographing preview interface.
In one embodiment, the processor 910 is further configured to obtain a third speech signal, where the third speech signal includes M sub-speech signals of M sound objects; determining at least one sub-voice signal in the M sub-voice signals as a first voice signal according to the priority of each sub-voice signal; wherein M is an integer greater than or equal to 2.
In one embodiment, the processor 910 is further configured to obtain a fourth speech signal, where the fourth speech signal includes a sub-speech signal of at least one sound object; acquiring a sub-voice signal of a target sound object in the fourth voice signal; determining a sub-voice signal of a target sound object as a first voice signal; wherein, the target sound production object satisfies a preset condition, the preset condition includes: and the target sound object is positioned in a preset area in the shooting preview interface, or the object characteristics of the target sound object are matched with the preset object characteristics.
It should be appreciated that in embodiments of the present application, the input unit 904 may include a graphics processor (Graphics Processing Unit, GPU) 9041 and a microphone 9042, with the graphics processor 9041 processing image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 906 may include a display panel 9061, and the display panel 9061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 907 includes a touch panel 9071 and other input devices 9072. Touch panel 9071, also referred to as a touch screen. The touch panel 9071 may include two parts, a touch detection device and a touch controller. Other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. Memory 909 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 910 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 910.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the program or the instruction realizes each process of the above-mentioned shooting method embodiment, and the same technical effect can be achieved, so that repetition is avoided, and no redundant description is provided herein.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, the processor is used for running a program or instructions, implementing each process of the shooting method embodiment, and achieving the same technical effect, so as to avoid repetition, and no redundant description is provided herein.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (8)

1. A photographing method, comprising:
under the condition of displaying a shooting preview interface, acquiring voice information of a first voice signal, wherein the voice information comprises first sound intensity and sound source information;
performing zoom processing on a preview image in the shooting preview interface according to a first zoom magnification when the sound source information indicates that a sound object of the first voice signal is an object displayed in the shooting preview interface;
wherein the first zoom magnification is associated with the first sound intensity;
wherein before the zooming processing is performed on the preview image in the shooting preview interface according to the first zooming magnification, the method further comprises:
determining a first intermediate multiplying factor according to a first sound intensity of the first voice signal;
acquiring a second sound intensity of an interference signal in the first voice signal;
determining a second intermediate multiplying power according to the second sound intensity and the first intermediate multiplying power;
and determining the second intermediate magnification as the first zoom magnification when the second voice signal of the photographer is not received within a preset time period.
2. The method of claim 1, wherein before performing the zooming process on the preview image in the photographing preview interface at the first zoom magnification, further comprising:
Determining a first preset magnification corresponding to the first sound intensity as a first zoom magnification when the first sound intensity is less than or equal to a preset intensity threshold;
determining a second preset multiplying factor corresponding to the first sound intensity as a first zooming multiplying factor under the condition that the first sound intensity is larger than the preset intensity threshold;
wherein the first preset multiplying power is larger than the second preset multiplying power.
3. The method of claim 1, wherein after determining a second intermediate magnification from the second sound intensity and the first intermediate magnification, further comprising:
and acquiring target information under the condition that a second voice signal of a photographer is received within a preset time period, wherein the target information comprises at least one of the following items: a third sound intensity of the second voice signal, a keyword in the second voice signal;
and determining the first zoom magnification according to the target information and the second intermediate magnification.
4. The method of claim 1, wherein after the obtaining the voice information of the first voice signal, further comprising:
determining azimuth information of the sound generating object according to the sound source information under the condition that the sound source information indicates that the sound generating object of the first voice signal is an object outside the shooting preview interface;
And outputting prompt information based on the azimuth information, wherein the prompt information is used for indicating a photographer to rotate the direction of the shooting device so as to enable the sounding object to be displayed in the shooting preview interface.
5. The method of claim 1, wherein prior to the obtaining the voice information of the first voice signal, further comprising:
acquiring a third voice signal, wherein the third voice signal comprises M sub-voice signals of M sounding objects;
determining at least one sub-voice signal in the M sub-voice signals as a first voice signal according to the priority of each sub-voice signal;
wherein M is an integer greater than or equal to 2.
6. The method of claim 1, wherein before the acquiring of the voice information of the first voice signal, the method further comprises:
acquiring a fourth voice signal, wherein the fourth voice signal comprises sub-voice signals of at least one sounding object;
acquiring a sub-voice signal of a target sounding object from the fourth voice signal; and
determining the sub-voice signal of the target sounding object as the first voice signal;
wherein the target sounding object satisfies a preset condition, and the preset condition comprises: the target sounding object is located in a preset area of the shooting preview interface, or object features of the target sounding object match preset object features.
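Claim 6's preset condition is a disjunction: the object is inside a preset region of the preview, or its features match preset features. A sketch, assuming normalized preview coordinates and exact feature comparison (both are assumptions; the claim does not define the region shape or the matching rule):

```python
def is_target(obj, preset_region, preset_features):
    """Check the claim 6 preset condition for one sounding object.
    obj: dict with "position" (x, y) in the preview and "features";
    preset_region: ((x0, y0), (x1, y1)) axis-aligned rectangle."""
    x, y = obj["position"]
    (x0, y0), (x1, y1) = preset_region
    in_region = x0 <= x <= x1 and y0 <= y <= y1
    feature_match = obj.get("features") == preset_features
    # Either branch of the disjunction makes the object the target.
    return in_region or feature_match
```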
7. A photographing apparatus, comprising:
an acquisition module configured to acquire voice information of a first voice signal in a case where a shooting preview interface is displayed, wherein the voice information comprises a first sound intensity and sound source information; and
a processing module configured to perform zoom processing on a preview image in the shooting preview interface according to a first zoom magnification in a case where the sound source information indicates that a sounding object of the first voice signal is an object displayed in the shooting preview interface;
wherein the first zoom magnification is associated with the first sound intensity; and
wherein the processing module is further configured to: determine a first intermediate magnification according to the first sound intensity of the first voice signal; acquire a second sound intensity of an interference signal in the first voice signal; determine a second intermediate magnification according to the second sound intensity and the first intermediate magnification; and determine the second intermediate magnification as the first zoom magnification in a case where no second voice signal of the photographer is received within a preset time period.
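The processing-module pipeline of claim 7 can be sketched end to end: intensity maps to a first intermediate magnification, interference attenuates it into a second intermediate magnification, and absent a photographer command that value becomes the final zoom. The claims only say the magnification is "associated with" the intensity, so the linear map, the clamping, and every constant below are assumptions:

```python
def determine_first_zoom(first_intensity, interference_intensity,
                         photographer_signal=None,
                         base=1.0, gain=0.05, penalty=0.02):
    """Sketch of the claim 7 pipeline; all constants are hypothetical."""
    # Step 1: first intermediate magnification from the subject's loudness.
    first_mid = base + gain * first_intensity
    # Step 2: a stronger interference signal lowers confidence in the
    # measurement, so scale back, clamped so we never go below 1x.
    second_mid = max(1.0, first_mid - penalty * interference_intensity)
    # Step 3: with no second voice signal from the photographer within
    # the preset time period, the second intermediate magnification is
    # used directly as the first zoom magnification.
    if photographer_signal is None:
        return second_mid
    # Otherwise claim 3 applies; a single assumed keyword rule here.
    if photographer_signal.get("keyword") == "zoom in":
        return second_mid * 1.5
    return second_mid
```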
8. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the shooting method of any one of claims 1 to 6.
CN202110999017.2A 2021-08-27 2021-08-27 Shooting method and device and electronic equipment Active CN113727021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110999017.2A CN113727021B (en) 2021-08-27 2021-08-27 Shooting method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113727021A CN113727021A (en) 2021-11-30
CN113727021B true CN113727021B (en) 2023-07-11

Family

ID=78678765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110999017.2A Active CN113727021B (en) 2021-08-27 2021-08-27 Shooting method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113727021B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114629869B (en) * 2022-03-18 2024-04-16 维沃移动通信有限公司 Information generation method, device, electronic equipment and storage medium
CN115550559B (en) * 2022-04-13 2023-07-25 荣耀终端有限公司 Video picture display method, device, equipment and storage medium
CN116055869B (en) * 2022-05-30 2023-10-20 荣耀终端有限公司 Video processing method and terminal

Citations (6)

Publication number Priority date Publication date Assignee Title
JP2007306250A (en) * 2006-05-10 2007-11-22 Ricoh Co Ltd Imaging device, photography-time warning method, and computer-readable recording medium
CN103780843A (en) * 2014-03-03 2014-05-07 联想(北京)有限公司 Image processing method and electronic device
CN105100635A (en) * 2015-07-23 2015-11-25 深圳乐行天下科技有限公司 Camera apparatus and camera control method
CN105227849A (en) * 2015-10-29 2016-01-06 维沃移动通信有限公司 A kind of method of front-facing camera auto-focusing and electronic equipment
CN107847800A (en) * 2015-09-15 2018-03-27 喀普康有限公司 Games system, the control method of games system and non-volatile memory medium
CN108668099A (en) * 2017-03-31 2018-10-16 鸿富锦精密工业(深圳)有限公司 video conference control method and device

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
JP2011120165A (en) * 2009-12-07 2011-06-16 Sanyo Electric Co Ltd Imaging apparatus
US10158808B2 (en) * 2014-07-02 2018-12-18 Sony Corporation Zoom control device, zoom control method, and program
CN109640032B (en) * 2018-04-13 2021-07-13 河北德冠隆电子科技有限公司 Five-dimensional early warning system based on artificial intelligence multi-element panoramic monitoring detection
CN111464752A (en) * 2020-05-18 2020-07-28 Oppo广东移动通信有限公司 Zoom control method of electronic device and electronic device
CN111641794B (en) * 2020-05-25 2023-03-28 维沃移动通信有限公司 Sound signal acquisition method and electronic equipment

Also Published As

Publication number Publication date
CN113727021A (en) 2021-11-30

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant