CN105227837A - Image synthesis method and device - Google Patents

Image synthesis method and device

Info

Publication number
CN105227837A
CN105227837A (application CN201510618878.6A)
Authority
CN
China
Prior art keywords
image
brightness value
pixel region
predetermined
carried out
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510618878.6A
Other languages
Chinese (zh)
Inventor
戴向东
魏宇星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201510618878.6A priority Critical patent/CN105227837A/en
Publication of CN105227837A publication Critical patent/CN105227837A/en
Priority to PCT/CN2016/097937 priority patent/WO2017050115A1/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the invention discloses an image synthesis method. The method comprises: obtaining images captured by i cameras, where i is an integer greater than 1; obtaining features of the i images; and performing special-effect synthesis on the i images according to the features to generate a composite image. The invention also discloses an image synthesis device.

Description

Image synthesis method and device
Technical field
The present invention relates to image processing technology, and in particular to an image synthesis method and device.
Background technology
Binocular cameras on current mobile terminals all use the two lenses cooperatively to achieve effects such as an improved depth of field or 3D capture. As shown in Fig. 1, 11 and 12 are two visible-light cameras of a mobile terminal, and 13 is the link connecting them. Cameras 11 and 12 are fixed on link 13 with their imaging planes kept as parallel as possible. The mobile terminal can obtain the images captured by 11 and 12 at the same moment and then synthesize the two images. For example, 11 mainly captures a moving person while 12 captures the background behind the person, and the mobile terminal finally synthesizes the images captured by 11 and 12. However, the composite images produced this way are monotonous and lack interest, resulting in a poor user experience.
Summary of the invention
To solve the above technical problem, embodiments of the present invention are expected to provide an image synthesis method and device that can make composite images more interesting and improve the user experience.
The technical solution of the present invention is achieved as follows:
In a first aspect, an image synthesis method is provided. The method comprises:
obtaining the images captured by i cameras, where i is an integer greater than 1;
obtaining features of the i images;
performing special-effect synthesis on the i images according to the features, generating a composite image.
Optionally, the feature is an overexposure region, and for a first image, obtaining the feature of the first image comprises:
determining the center brightness value of each pixel region of the first image;
determining the average brightness value of the first image from the center brightness values of its pixel regions;
judging whether the difference between the center brightness value of each pixel region and the average brightness value exceeds a predetermined threshold;
when the differences between the center brightness values of N pixel regions and the average brightness value exceed the predetermined threshold, taking those N pixel regions as N overexposure regions, where N is a positive integer.
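The detection steps above can be sketched as follows. This is a minimal illustration under assumed conventions: the patent does not specify the size of a pixel region, how the "center brightness value" is sampled, or the threshold, so the block size, the use of each region's central pixel as its center brightness, and the threshold value here are all hypothetical.

```python
import numpy as np

def find_overexposure_regions(gray, block=16, threshold=60):
    """Detect overexposure regions in a grayscale image (H x W, 0-255).

    Each block x block pixel region is represented by the brightness of
    its central pixel; regions whose center brightness exceeds the image
    average by more than `threshold` are flagged as overexposed.
    """
    h, w = gray.shape
    regions, centers = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            # center brightness value of this pixel region
            center = float(gray[y + block // 2, x + block // 2])
            regions.append((y, x))
            centers.append(center)
    avg = sum(centers) / len(centers)  # average brightness value of the image
    # the N overexposure regions: center brightness exceeds the average by > threshold
    return [r for r, c in zip(regions, centers) if c - avg > threshold]
```

For a 64x64 image that is black except for one bright 16x16 block, only that block's region is returned.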
Optionally, performing special-effect synthesis on the i images according to the feature to generate the composite image comprises:
marking the overexposure regions in the i images;
attenuating the brightness of the overexposure regions of i-1 of the i images;
using a linear taper, synthesizing each overexposure region in the i images with the corresponding pixel regions of the other images, generating the composite image.
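A linear-taper blend of this kind can be illustrated as below. The sketch assumes a one-dimensional taper across the region's width, with the weight falling linearly from the brightness-attenuated overexposure content to the corresponding region of the other image; the patent does not fix the taper direction or the attenuation factor, so both are hypothetical.

```python
import numpy as np

def taper_blend(overexposed_region, other_region, attenuation=0.6):
    """Blend an overexposure region with the corresponding pixel region of
    another image using a linear taper (gradient) of blending weights."""
    dimmed = overexposed_region * attenuation  # brightness attenuation step
    h, w = dimmed.shape
    # weight ramps linearly from 1 (keep dimmed content) to 0 (other image)
    weight = np.linspace(1.0, 0.0, w)[np.newaxis, :]
    return weight * dimmed + (1.0 - weight) * other_region
```

With an overexposed region of constant value 200 (attenuated to 120) and a corresponding region of value 100, the output transitions smoothly from 120 to 100 across the region.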
Optionally, the i images are images of the same scene captured by the i cameras from different angles and positions, and the feature is a predetermined scene image. Obtaining the features of the i images comprises:
segmenting predetermined scene images at different depths from the i images, respectively, where the predetermined scene image differs between images;
performing special-effect synthesis on the i images according to the feature to generate the composite image comprises:
applying a special effect to one or more of the predetermined scene images of the i images, and synthesizing the special-effect-processed predetermined scene images with the unprocessed ones, generating the composite image.
Optionally, i is 2, and segmenting predetermined scene images at different depths from the i images respectively comprises:
segmenting a person image out of one image;
segmenting a background image out of the other image;
applying a special effect to one or more of the predetermined scene images of the i images and synthesizing the processed and unprocessed predetermined scene images to obtain the composite image comprises:
applying the special effect to the background image, and synthesizing the processed background image with the person image, generating the composite image.
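For the two-camera case just described, the effect-then-composite step might look like the following sketch. The segmentation itself is taken as given in the form of a binary person mask; the box blur standing in for the "special effect" and the mask-based paste are illustrative choices, not operations prescribed by the patent.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple box blur used here as the 'special effect' on the background."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def composite(person_img, background_img, person_mask):
    """Apply the effect to the background image, then paste the person image
    on top. `person_mask` is 1 where the person image should be kept."""
    effected = box_blur(background_img)
    return np.where(person_mask == 1, person_img, effected)
```

In practice the mask would come from the depth-based segmentation step; here any binary array of the right shape works.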
In a second aspect, an image synthesis device is provided. The device comprises:
an acquiring unit for obtaining the images captured by i cameras, where i is an integer greater than 1, and for obtaining the features of the i images;
a synthesis unit for performing special-effect synthesis on the i images according to the features, generating a composite image.
Optionally, the feature is an overexposure region, and for a first image the acquiring unit is specifically configured to:
determine the center brightness value of each pixel region of the first image;
determine the average brightness value of the first image from the center brightness values of its pixel regions;
judge whether the difference between the center brightness value of each pixel region and the average brightness value exceeds a predetermined threshold;
when the differences between the center brightness values of N pixel regions and the average brightness value exceed the predetermined threshold, take those N pixel regions as N overexposure regions, where N is a positive integer.
Optionally, the synthesis unit is specifically configured to:
mark the overexposure regions in the i images;
attenuate the brightness of the overexposure regions of i-1 of the i images;
using a linear taper, synthesize each overexposure region in the i images with the corresponding pixel regions of the other images, generating the composite image.
Optionally, the i images are images of the same scene captured by the i cameras from different angles and positions, the feature is a predetermined scene image, and the acquiring unit is further configured to:
segment predetermined scene images at different depths from the i images, respectively, where the predetermined scene image differs between images;
the synthesis unit is further configured to:
apply a special effect to one or more of the predetermined scene images of the i images, and synthesize the special-effect-processed predetermined scene images with the unprocessed ones, generating the composite image.
Optionally, i is 2, and the acquiring unit is further configured to:
segment a person image out of one image;
segment a background image out of the other image;
the synthesis unit is further configured to:
apply the special effect to the background image, and synthesize the processed background image with the person image, generating the composite image.
Embodiments of the present invention provide an image synthesis method and device. The images captured by i cameras and the features of the i images are first obtained; the i images are then synthesized with special effects according to those features, generating a composite image. In this way, after obtaining the features of the captured images, the image synthesis device can combine different features into special-effect composites. This greatly enriches the kinds of photographs a multi-camera setup can produce, so that a single shot can yield many variations, which adds interest to the composite photo and improves the user experience.
Brief description of the drawings
Fig. 1 is a structural diagram of an existing binocular camera;
Fig. 2 is a schematic diagram of the hardware configuration of a mobile terminal implementing embodiments of the present invention;
Fig. 3 is a schematic diagram of the wireless communication system of the mobile terminal shown in Fig. 2;
Fig. 4 is a flowchart of an image synthesis method provided by an embodiment of the present invention;
Fig. 5 is a flowchart of another image synthesis method provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of the overexposure regions marked in an embodiment of the present invention;
Fig. 7 is a flowchart of yet another image synthesis method provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram of the images captured by two cameras and the resulting composite image, provided by an embodiment of the present invention;
Fig. 9 is a structural diagram of an image synthesis device provided by an embodiment of the present invention.
Detailed description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings.
It should be understood that the specific embodiments described herein are intended only to explain the present invention, not to limit it.
A mobile terminal implementing the embodiments of the present invention is now described with reference to the drawings. In the following description, suffixes such as "module", "part" or "unit" are used for elements only to aid the description and have no specific meaning of their own; "module" and "part" may therefore be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, personal digital assistants (PDA), tablet computers (PAD), portable media players (PMP) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal; however, those skilled in the art will appreciate that, apart from elements used specifically for mobile purposes, the structures according to the embodiments of the present invention can also be applied to fixed terminals.
Fig. 2 is a schematic diagram of the hardware configuration of a mobile terminal implementing embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an audio/video (A/V) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and so on. Fig. 2 shows a mobile terminal with various components, but it should be understood that not all illustrated components are required; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components allowing radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114 and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and sends broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and sends them to the terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, etc., and may further include a broadcast signal combined with a TV or radio broadcast signal. Broadcast-related information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. The broadcast signal may exist in various forms; for example, it may exist in the form of an electronic program guide (EPG) of digital multimedia broadcasting (DMB) or an electronic service guide (ESG) of digital video broadcast-handheld (DVB-H). The broadcast receiving module 111 can receive signal broadcasts from various types of broadcast systems. In particular, it can receive digital broadcasts from digital broadcasting systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the MediaFLO forward link media data broadcasting system, and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be constructed to suit the various broadcast systems providing broadcast signals as well as the above digital broadcasting systems. Broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 sends radio signals to and/or receives radio signals from at least one of a base station (e.g., access point, Node B, etc.), an external terminal and a server. Such radio signals may include voice call signals, video call signals, or various types of data sent and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access for the mobile terminal. This module can be internally or externally coupled to the terminal. The wireless Internet access technologies involved may include wireless local area network (WLAN, Wi-Fi), wireless broadband (WiBro), worldwide interoperability for microwave access (WiMAX), high-speed downlink packet access (HSDPA), etc.
The short-range communication module 114 supports short-range communication. Examples of short-range communication technology include Bluetooth™, radio-frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee™, etc.
The location information module 115 checks or obtains the location information of the mobile terminal. A typical example of the location information module is the GPS (global positioning system). With current technology, the GPS module 115 calculates distance information from three or more satellites together with accurate time information and applies triangulation to the calculated information, thereby accurately computing three-dimensional current location information in terms of longitude, latitude and altitude. Currently, the method for calculating position and time information uses three satellites and corrects the error of the calculated position and time using one additional satellite. In addition, the GPS module 115 can compute speed information by continuously calculating the current location in real time.
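The satellite position calculation just described can be sketched as a least-squares trilateration, shown below under simplifying assumptions: satellite positions and distances are treated as exactly known, and the receiver clock-bias term that a real GPS solution must also estimate (the reason a fourth satellite is needed) is ignored.

```python
import numpy as np

def trilaterate(sat_positions, distances):
    """Estimate a 3D position from distances to known satellite positions.

    Linearizes the sphere equations |x - p_i|^2 = d_i^2 against the first
    satellite, giving 2*(p_i - p_0) . x = (|p_i|^2 - d_i^2) - (|p_0|^2 - d_0^2),
    then solves the resulting linear system in the least-squares sense.
    """
    p = np.asarray(sat_positions, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (np.sum(p[1:] ** 2, axis=1) - d[1:] ** 2) - (np.sum(p[0] ** 2) - d[0] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With four satellites at known coordinates and exact distances, the sketch recovers the receiver position.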
The A/V input unit 120 is for receiving audio or video signals and may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture device in a video capture mode or image capture mode. The processed image frames may be displayed on the display unit 151, stored in the memory 160 (or another storage medium), or transmitted via the wireless communication unit 110; two or more cameras 121 may be provided depending on the structure of the mobile terminal. The microphone 122 can receive sound (audio data) in operating modes such as a phone call mode, recording mode or voice recognition mode and process it into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the mobile communication module 112. The microphone 122 may implement various types of noise elimination (or suppression) algorithms to eliminate (or suppress) noise or interference generated in the process of receiving and sending audio signals.
The user input unit 130 may generate key input data according to commands input by the user to control various operations of the mobile terminal. It allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. caused by being touched), a jog wheel, a jog switch, etc. In particular, when the touch pad is superimposed on the display unit 151 as a layer, a touch screen may be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., its open or closed state), its position, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), its orientation, its acceleration or deceleration and direction of movement, etc., and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type phone, the sensing unit 140 can sense whether the phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled with an external device. The sensing unit 140 may include a proximity sensor 141, which will be described below in connection with the touch screen.
The interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, etc. The identification module may store various information for authenticating the user of the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), etc. In addition, a device with an identification module (hereinafter an "identification device") may take the form of a smart card; the identification device can therefore be connected to the mobile terminal 100 via a port or other connector. The interface unit 170 may receive input (e.g., data, power, etc.) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or may be used to transfer data between the mobile terminal and an external device.
In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 may serve as a path through which power from the cradle is supplied to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. The various command signals or power input from the cradle may serve as a signal for recognizing whether the mobile terminal is correctly mounted on the cradle. The output unit 150 is constructed to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audible and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, etc.
The display unit 151 may display the information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or image capture mode, the display unit 151 may display captured and/or received images or video, and a UI or GUI showing the related functions.
Meanwhile, when the display unit 151 and the touch pad are superimposed on one another as layers to form a touch screen, the display unit 151 can serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin-film-transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display and a three-dimensional (3D) display. Some of these displays may be constructed to be transparent to allow the user to see through them from the outside; these may be called transparent displays, a typical example being a transparent OLED (TOLED) display. Depending on the intended implementation, the mobile terminal 100 may include two or more display units (or other display means); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect touch input pressure as well as touch input position and touch input area.
The audio output module 152 may, when the mobile terminal is in a mode such as a call signal receiving mode, call mode, recording mode, voice recognition mode or broadcast receiving mode, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. The audio output module 152 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal receiving sound, a message receiving sound, etc.). The audio output module 152 may include a speaker, a buzzer, etc.
The alarm unit 153 may provide output to notify the mobile terminal 100 of the occurrence of an event. Typical events include call reception, message reception, key signal input, touch input, etc. Besides audio or video output, the alarm unit 153 may provide output in different ways to signal the occurrence of an event. For example, the alarm unit 153 may provide output in the form of vibration: when a call, message or some other incoming communication is received, the alarm unit 153 may provide a tactile output (i.e., vibration) to notify the user. With such tactile output, the user can recognize the occurrence of various events even when the user's phone is in the user's pocket. The alarm unit 153 may also provide event-notification output via the display unit 151 or the audio output module 152.
The memory 160 may store software programs for the processing and control operations performed by the controller 180, or temporarily store data that has been or is to be output (e.g., phone book, messages, still images, video, etc.). The memory 160 may also store data on the various patterns of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disc, etc. The mobile terminal 100 may also cooperate, over a network connection, with a network storage device that performs the storage function of the memory 160.
The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing related to voice calls, data communication, video calls, etc. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be configured within the controller 180 or separately from it. The controller 180 may perform pattern recognition processing so as to recognize handwriting input or drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external or internal power under the control of the controller 180 and provides the appropriate power required to operate each element and component.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), processors, controllers, microcontrollers, microprocessors and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180. For a software implementation, embodiments such as processes or functions may be implemented with separate software modules that allow at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, and may be stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal has been described in terms of its functions. Below, for brevity, a slide-type mobile terminal is taken as the example among mobile terminals of various types such as folder-type, bar-type, swing-type and slide-type. The present invention can therefore be applied to any type of mobile terminal and is not limited to slide-type mobile terminals.
The mobile terminal 100 shown in Fig. 2 may be constructed to operate with wired and wireless communication systems that send data via frames or packets, as well as with satellite-based communication systems.
A communication system in which the mobile terminal according to the present invention can operate is now described with reference to Fig. 3.
Such communication systems may use different air interfaces and/or physical layers. For example, air interfaces used by communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA) and universal mobile telecommunications system (UMTS) (in particular, long term evolution (LTE)), global system for mobile communications (GSM), etc. As a non-limiting example, the description below relates to a CDMA communication system, but such teaching applies equally to systems of other types.
Referring to Fig. 3, a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, base station controllers (BSC) 275 and a mobile switching center (MSC) 280. The MSC 280 is constructed to interface with a public switched telephone network (PSTN) 290. The MSC 280 is also constructed to interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul links. The backhaul links may be constructed according to any of several known interfaces, including, for example, E1/T1, ATM, IP, PPP, frame relay, HDSL, ADSL or xDSL. It will be appreciated that the system shown in Fig. 3 may include a plurality of BSCs 275.
Each BS 270 may serve one or more sectors (or regions), each sector being covered by an omnidirectional antenna or an antenna pointing radially away from the BS 270 in a specific direction. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be constructed to support multiple frequency assignments, each frequency assignment having a specific spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be called a CDMA channel. A BS 270 may also be called a base transceiver subsystem (BTS) or another equivalent term. In such cases, the term "base station" may be used to broadly denote a single BSC 275 and at least one BS 270. A base station may also be called a "cell site"; alternatively, each sector of a particular BS 270 may be called a cell site.
As shown in Fig. 3, a broadcast transmitter (BT) 295 sends broadcast signals to the mobile terminals 100 operating within the system. The broadcast receiving module 111 shown in Fig. 2 is arranged at the mobile terminal 100 to receive the broadcast signals sent by the BT 295. Fig. 3 also shows several global positioning system (GPS) satellites 500, which help locate at least one of the plurality of mobile terminals 100.
Fig. 3 depicts several satellites 500, but it is understood that any number of satellites can be used to obtain useful positioning information. The GPS module 115 shown in Fig. 2 is typically constructed to cooperate with the satellites 500 to obtain the desired positioning information. Instead of or in addition to GPS tracking technology, other technologies that can track the position of the mobile terminal may be used. In addition, at least one GPS satellite 500 may optionally or additionally handle satellite DMB transmission.
In a typical operation of the wireless communication system, a BS 270 receives reverse-link signals from various mobile terminals 100, which are usually engaged in calls, messaging and other types of communication. Each reverse-link signal received by a given BS 270 is processed within that BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management, including coordination of soft handoffs between BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC interfaces with the BSCs 275, and the BSCs 275 in turn control the BSs 270 to send forward-link signals to the mobile terminals 100.
Based on the above-described mobile terminal hardware structure and communication system, the various embodiments of the method of the present invention are now proposed.
Embodiment one
An embodiment of the present invention provides an image synthesis method, applied to an image synthesis device; the image synthesis device may be an independent apparatus or may be a part of a mobile terminal. As shown in Figure 4, the method comprises:
Step 301: acquiring images captured by i cameras.
Here, i is an integer greater than 1. The type of camera is not restricted in this embodiment; preferably, the cameras are visible-light cameras. The i cameras are fixed on a connecting member, and the photos captured by the i cameras can be displayed on the image synthesis device. The image captured by each camera may be an original image or an image that has undergone simple processing; this embodiment does not limit this.
Preferably, the number of cameras is 2.
Step 302: acquiring features of the i images.
A feature in this embodiment may be an overexposure region, a predetermined scene image, or the like; any parameter value of a pixel region that can be used for a special effect or for the scene image to be composited can serve as a feature.
Specifically, where the feature is an overexposure region, step 302 specifically comprises: determining the center brightness value of each pixel region of the first image; determining the average brightness value of the first image according to the center brightness values of the pixel regions of the first image; judging whether the difference between the center brightness value of each pixel region and the average brightness value is greater than a predetermined threshold; and, when the differences between the center brightness values of N pixel regions and the average brightness value are greater than the predetermined threshold, taking the N pixel regions as N overexposure regions, where N is a positive integer. Here, the pixel regions of an image can be divided into clear regions and overexposure regions. A clear region is one in which the pixels are neither overexposed nor underexposed and are in sharp focus; correspondingly, an overexposure region is one in which the pixels are overexposed, underexposed, or out of focus. The center brightness value is calculated from the mean square deviation, saturation, sharpness (whether the region is in focus), and blending weight of the corresponding pixel regions of the left and right images.
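The thresholding in step 302 can be sketched as follows. This is a minimal Python illustration with hypothetical names; it assumes the per-region center brightness values have already been computed from the mean square deviation, saturation, sharpness, and blending weights described above:

```python
def find_overexposure_regions(center_brightness, threshold):
    """Return the (row, col) indices of the N pixel regions whose center
    brightness exceeds the image's average brightness by more than the
    predetermined threshold (a sketch of step 302)."""
    values = [v for row in center_brightness for v in row]
    average = sum(values) / len(values)          # average brightness of the image
    return [(r, c)
            for r, row in enumerate(center_brightness)
            for c, v in enumerate(row)
            if v - average > threshold]          # difference exceeds the threshold
```

For example, with a 2x2 grid of center brightness values [[100, 100], [100, 200]] and a threshold of 50, the average brightness is 125 and only the region at (1, 1) is flagged as an overexposure region.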
Specifically, where the i images are images of the same scene captured by the i cameras from different angles and positions, and the feature is a predetermined scene image, step 302 may comprise: segmenting predetermined scene images of different depths from the i images respectively, where the predetermined scene images differ between images. Many segmentation methods exist. For example, depth information may be combined with the original color and brightness information, and segmentation may be performed on the joint features; in this way, the segmentation is more accurate. Preferably, with i equal to 2, step 302 may comprise: segmenting a person image from one image, and segmenting a background image from the other image.
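One simple way to realise the depth-assisted segmentation described above is to threshold a per-pixel depth map into a near layer (e.g. the person) and a far layer (the background). The sketch below uses a hypothetical cut-off parameter and plain nested lists standing in for real images; a practical implementation would also fold in the color and brightness cues mentioned above:

```python
def split_by_depth(image, depth, cutoff):
    """Split `image` into a near layer and a far layer by comparing each
    pixel's depth against `cutoff`; pixels that belong to the other
    layer are set to None (i.e. transparent)."""
    near = [[px if d < cutoff else None for px, d in zip(irow, drow)]
            for irow, drow in zip(image, depth)]
    far = [[px if d >= cutoff else None for px, d in zip(irow, drow)]
           for irow, drow in zip(image, depth)]
    return near, far
```

With i equal to 2, the near layer of one image would be kept as the person image and the far layer of the other image as the background image.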
Step 303: performing special-effect synthesis on the i images according to the features, to generate a composite image.
When the feature is an overexposure region, step 303 corresponding to step 302 comprises: marking the overexposure regions in the i images; performing brightness attenuation on the overexposure regions of i-1 of the i images; and, using linear gradient (taper) blending, compositing the overexposure regions of the i images with the corresponding pixel regions of the images other than the image itself, to generate the composite image.
Here, since brightness attenuation is applied to the overexposure regions, when the i images are composited the attenuated overexposure regions are covered by the corresponding regions of the other images; in this way, the overexposure regions of the composite image are greatly reduced.
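The attenuate-then-blend step can be illustrated for a single row of an overexposure region as follows. The attenuation factor and the left-to-right ramp direction are hypothetical choices; the embodiment fixes neither:

```python
def blend_row(overexposed_row, replacement_row, attenuation=0.5):
    """Attenuate the brightness of the overexposed pixels, then mix them
    with the corresponding pixels from the other image using a linear
    taper: the replacement weight ramps from 0 at one edge of the region
    to 1 at the other, so the transition has no hard seam."""
    n = len(overexposed_row)
    blended = []
    for k, (a, b) in enumerate(zip(overexposed_row, replacement_row)):
        a *= attenuation                      # brightness attenuation
        w = k / (n - 1) if n > 1 else 1.0     # linear taper weight
        blended.append((1 - w) * a + w * b)
    return blended
```

For a row of overexposed values [200, 200, 200] and replacement values [80, 80, 80], the output ramps smoothly from the attenuated value 100 down to 80.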
It should be noted that when no pixel region differs from the average brightness value by more than the predetermined threshold, the i images can undergo other special-effect synthesis; this embodiment does not limit this.
When the i images are images of the same scene captured by the i cameras from different angles and positions, and the feature is a predetermined scene image, step 303 corresponding to step 302 may comprise: performing special-effect processing on one or more of the predetermined scene images of the i images, and compositing the processed predetermined scene images with the unprocessed predetermined scene images to generate the composite image. Here, because the depth of each image differs, the scene images to be segmented from each image also differ somewhat; the scene images that can be segmented are the predetermined scene images. For example, after a person image is segmented from one image and a background image is segmented from the other, the background image can be given special-effect processing, and the processed background image composited with the unprocessed person image to generate the composite image.
In this way, after acquiring the features of the captured images, the image synthesis device can perform special-effect synthesis on different features to generate a composite image. This greatly enriches the variety of multi-camera photography: the same photo can have many variations, which adds interest to composite photos and improves the user experience.
Further, after the special-effect processing of the background image, the method may also comprise: making the person images of the two images and the composite image into an animated form, such as a dynamic GIF. The person images of both images can likewise be segmented out, and the background images and person images of the two images made into a GIF, thereby making the composite image more interesting and improving the user experience.
Further, when the feature is a background image, the background image can be blurred. For any image independently captured by the dual cameras, the person image is first segmented out, and the part other than the person, i.e. the background image, is given Gaussian blur processing. For example, suppose the dual cameras are divided into a left camera and a right camera, the left camera capturing the first image and the right camera capturing the second image; of the two images, the person image of the first image is retained, the background image of the second image is Gaussian blurred, and the background image of the first image is entirely replaced by the background image of the second image. Since the dual cameras can themselves shoot depth-of-field pictures simulating a single-lens reflex camera, in this method the degree and pattern of blur can be customized; compared with the originally captured depth map, the blur pattern can be chosen more freely and the effect is better.
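The replace-the-background step can be sketched as a per-pixel composite driven by the person mask. Grey-value nested lists stand in for real images, and the Gaussian blur itself is left out (in practice it could be applied to the background beforehand with, e.g., OpenCV's GaussianBlur):

```python
def composite_with_mask(person_img, background_img, person_mask):
    """Keep a pixel from the first (person) image wherever the mask is
    truthy, and take the (blurred) background from the second image
    everywhere else."""
    return [[p if m else b
             for p, b, m in zip(prow, brow, mrow)]
            for prow, brow, mrow in zip(person_img, background_img, person_mask)]
```

The same routine serves both the overexposure-covering composite (with the mask derived from the marked regions) and the person-over-background composite of this example.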
It should be noted that capturing and then compositing the images from the dual cameras may comprise: capturing the video streams. Specifically, two filters are set up, one corresponding to each video-stream capture. Then, while the preview video streams are being captured, the required image data is extracted and placed in memory regions in real time, image processing is performed on the image data of the two cameras stored in their respective memory regions to find the target, and the target's image coordinates in the images obtained by the two cameras are determined. Finally, the theory of binocular vision is used to compute the position of the target point in the world coordinate system. In this way, the position information of the composite image can be determined.
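For rectified cameras, the "theory of binocular vision" computation reduces to triangulation from the disparity between the target's coordinates in the two images. The sketch below assumes an ideal pinhole pair with a known focal length (in pixels) and baseline; lens distortion and rectification are ignored:

```python
def target_depth(focal_px, baseline, x_left, x_right):
    """Depth of the target point along the optical axis, from the
    classic relation Z = f * B / d, where d = x_left - x_right is the
    horizontal disparity of the target in the two camera images."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline / disparity
```

With a 700-pixel focal length, a 100 mm baseline, and a 10-pixel disparity, the target lies 7000 mm (7 m) from the cameras.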
Embodiment two
An embodiment of the present invention provides an image synthesis method, applied to a mobile terminal, such as a smartphone, notebook computer, or desktop computer. This embodiment is described using 2 cameras as an example; as shown in Figure 5, the method comprises:
Step 401: acquiring pictures captured by the 2 cameras simultaneously.
Here, the objects captured by the two cameras should be objects of the same area.
Step 402: determining the center brightness value of each pixel region of the 2 images.
Here, the center brightness value is calculated from the mean square deviation, saturation, sharpness (whether the region is in focus), and blending weight of the corresponding pixel regions of the left and right images.
Step 403: determining the average brightness value of each image according to the center brightness values of its pixel regions.
Generally, the average brightness value of an image is the sum of the center brightness values of all its pixel regions divided by the number of pixel regions.
Step 404: judging whether the difference between the center brightness value of each pixel region in each image and the average brightness value of the corresponding image is greater than a predetermined threshold. If so, performing step 405; if not, performing step 407.
Step 405: taking the pixel regions whose center brightness value differs from the average brightness value of the corresponding image by more than the predetermined threshold as overexposure regions.
Step 406: performing special-effect synthesis on the 2 images according to the overexposure regions, to generate a composite image.
Specifically: marking the overexposure regions in the 2 images (the solid black circles in Figure 6 mark the overexposure regions); performing brightness attenuation on the overexposure regions of either one of the images; and, using linear gradient (taper) blending, compositing the overexposure regions of the 2 images with the corresponding pixel regions of the other image, to generate the composite image.
Here, special-effect synthesis can comprise special-effect processing such as segmentation, bokeh, blurring, and tone adjustment, together with the compositing process.
Step 407: performing special-effect synthesis on the 2 images to generate a composite image.
Embodiment three
An embodiment of the present invention provides an image synthesis method, applied to a mobile terminal, such as a smartphone, notebook computer, or desktop computer. This embodiment is described using 2 cameras as an example; as shown in Figure 7, the method comprises:
Step 501: acquiring pictures captured by the 2 cameras simultaneously.
Here, the objects captured by the 2 cameras should be objects of the same area, and the 2 images are images of the same scene captured by the 2 cameras from different angles and positions.
Step 502: obtaining a person image from the first image.
Here, the person image is the predetermined scene image of the first image.
Step 503: extracting a background image from the second image.
Here, the background image is the predetermined scene image of the second image.
Step 504: performing special-effect processing on the background image of the second image.
Here, special-effect processing can comprise a series of image-processing operations such as bokeh, blurring, and tone adjustment.
In Figure 8, the first picture is the one captured by the first camera, the second picture is the one captured by the second camera, and the third picture is the composite of the person image of the first picture and the special-effect background image of the second picture.
Step 505: compositing the image after special-effect processing with the person image, to obtain a composite image.
Embodiment four
An embodiment of the present invention provides an image synthesis device 60; as shown in Figure 9, the image synthesis device 60 may comprise:
an acquiring unit 601, configured to acquire images captured by i cameras, where i is an integer greater than 1, and to acquire features of the i images; and
a synthesis unit 602, configured to perform special-effect synthesis on the i images according to the features, to generate a composite image.
In this way, after acquiring the features of the captured images, the image synthesis device can perform special-effect synthesis on different features to generate a composite image. This greatly enriches the variety of multi-camera photography: the same photo can have many variations, which adds interest to composite photos and improves the user experience.
Further, where the feature is an overexposure region, for the first image, the acquiring unit 601 is specifically configured to:
determine the center brightness value of each pixel region of the first image;
determine the average brightness value of the first image according to the center brightness values of the pixel regions of the first image;
judge whether the difference between the center brightness value of each pixel region and the average brightness value is greater than a predetermined threshold; and
when the differences between the center brightness values of N pixel regions and the average brightness value are greater than the predetermined threshold, take the N pixel regions as N overexposure regions, where N is a positive integer.
Further, the synthesis unit 602 is specifically configured to:
mark the overexposure regions in the i images;
perform brightness attenuation on the overexposure regions of i-1 of the i images; and
using linear gradient (taper) blending, composite the overexposure regions of the i images with the corresponding pixel regions of the images other than the image itself, to generate the composite image.
Further, where the i images are images of the same scene captured by the i cameras from different angles and positions, and the feature is a predetermined scene image, the acquiring unit 601 is also configured to:
segment predetermined scene images of different depths from the i images respectively, where the predetermined scene images differ between images;
and the synthesis unit 602 is also configured to:
perform special-effect processing on one or more of the predetermined scene images of the i images, and composite the processed predetermined scene images with the unprocessed predetermined scene images to generate the composite image.
Further, the acquiring unit 601 is also configured to:
segment a person image from one image; and
segment a background image from the other image;
and the synthesis unit 602 is also configured to:
perform special-effect processing on the background image, and composite the processed background image with the person image to generate the composite image.
In practical applications, the acquiring unit 601 and the synthesis unit 602 can both be implemented by a central processing unit (CPU), microprocessor (MPU), digital signal processor (DSP), field-programmable gate array (FPGA), or the like located in the terminal.
Those skilled in the art should understand that embodiments of the invention can be provided as a method, a system, or a computer program product. Therefore, the present invention can take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be realized by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data-processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data-processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data-processing device to work in a specific way, so that the instructions stored in the computer-readable memory produce a manufactured article comprising an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data-processing device, so that a sequence of operational steps is performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention.

Claims (10)

1. An image synthesis method, characterized in that the method comprises:
acquiring images captured by i cameras, where i is an integer greater than 1;
acquiring features of the i images; and
performing special-effect synthesis on the i images according to the features, to generate a composite image.
2. The method according to claim 1, characterized in that the feature is an overexposure region, and, for the first image, acquiring the feature of the first image comprises:
determining the center brightness value of each pixel region of the first image;
determining the average brightness value of the first image according to the center brightness values of the pixel regions of the first image;
judging whether the difference between the center brightness value of each pixel region and the average brightness value is greater than a predetermined threshold; and
when the differences between the center brightness values of N pixel regions and the average brightness value are greater than the predetermined threshold, taking the N pixel regions as N overexposure regions, where N is a positive integer.
3. The method according to claim 2, characterized in that performing special-effect synthesis on the i images according to the features, to generate a composite image, comprises:
marking the overexposure regions in the i images;
performing brightness attenuation on the overexposure regions of i-1 of the i images; and
using linear gradient (taper) blending, compositing the overexposure regions of the i images with the corresponding pixel regions of the images other than the image itself, to generate the composite image.
4. The method according to claim 1, characterized in that the i images are images of the same scene captured by the i cameras from different angles and positions, the feature is a predetermined scene image, and acquiring the features of the i images comprises:
segmenting predetermined scene images of different depths from the i images respectively, where the predetermined scene images differ between images;
and performing special-effect synthesis on the i images according to the features, to generate a composite image, comprises:
performing special-effect processing on one or more of the predetermined scene images of the i images, and compositing the processed predetermined scene images with the unprocessed predetermined scene images to generate the composite image.
5. The method according to claim 4, characterized in that i is 2, and segmenting predetermined scene images of different depths from the i images respectively comprises:
segmenting a person image from one image; and
segmenting a background image from the other image;
and performing special-effect processing on one or more of the predetermined scene images of the i images, and compositing the processed predetermined scene images with the unprocessed predetermined scene images to obtain the composite image, comprises:
performing special-effect processing on the background image, and compositing the processed background image with the person image to generate the composite image.
6. An image synthesis device, characterized in that the device comprises:
an acquiring unit, configured to acquire images captured by i cameras, where i is an integer greater than 1, and to acquire features of the i images; and
a synthesis unit, configured to perform special-effect synthesis on the i images according to the features, to generate a composite image.
7. The device according to claim 6, characterized in that the feature is an overexposure region and, for the first image, the acquiring unit is specifically configured to:
determine the center brightness value of each pixel region of the first image;
determine the average brightness value of the first image according to the center brightness values of the pixel regions of the first image;
judge whether the difference between the center brightness value of each pixel region and the average brightness value is greater than a predetermined threshold; and
when the differences between the center brightness values of N pixel regions and the average brightness value are greater than the predetermined threshold, take the N pixel regions as N overexposure regions, where N is a positive integer.
8. The device according to claim 7, characterized in that the synthesis unit is specifically configured to:
mark the overexposure regions in the i images;
perform brightness attenuation on the overexposure regions of i-1 of the i images; and
using linear gradient (taper) blending, composite the overexposure regions of the i images with the corresponding pixel regions of the images other than the image itself, to generate the composite image.
9. The device according to claim 6, characterized in that the i images are images of the same scene captured by the i cameras from different angles and positions, the feature is a predetermined scene image, and the acquiring unit is also configured to:
segment predetermined scene images of different depths from the i images respectively, where the predetermined scene images differ between images;
and the synthesis unit is also configured to:
perform special-effect processing on one or more of the predetermined scene images of the i images, and composite the processed predetermined scene images with the unprocessed predetermined scene images to generate the composite image.
10. The device according to claim 9, characterized in that i is 2 and the acquiring unit is also configured to:
segment a person image from one image; and
segment a background image from the other image;
and the synthesis unit is also configured to:
perform special-effect processing on the background image, and composite the processed background image with the person image to generate the composite image.
CN201510618878.6A 2015-09-24 2015-09-24 A kind of image combining method and device Pending CN105227837A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510618878.6A CN105227837A (en) 2015-09-24 2015-09-24 A kind of image combining method and device
PCT/CN2016/097937 WO2017050115A1 (en) 2015-09-24 2016-09-02 Image synthesis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510618878.6A CN105227837A (en) 2015-09-24 2015-09-24 A kind of image combining method and device

Publications (1)

Publication Number Publication Date
CN105227837A true CN105227837A (en) 2016-01-06

Family

ID=54996488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510618878.6A Pending CN105227837A (en) 2015-09-24 2015-09-24 A kind of image combining method and device

Country Status (2)

Country Link
CN (1) CN105227837A (en)
WO (1) WO2017050115A1 (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106454123A (en) * 2016-11-25 2017-02-22 滁州昭阳电信通讯设备科技有限公司 Shooting focusing method and mobile terminal
WO2017050115A1 (en) * 2015-09-24 2017-03-30 努比亚技术有限公司 Image synthesis method
CN106973280A (en) * 2016-01-13 2017-07-21 深圳超多维光电子有限公司 A kind for the treatment of method and apparatus of 3D rendering
CN107018325A (en) * 2017-03-29 2017-08-04 努比亚技术有限公司 A kind of image combining method and device
CN107240072A (en) * 2017-04-27 2017-10-10 努比亚技术有限公司 A kind of screen luminance adjustment method, terminal and computer-readable recording medium
CN107610078A (en) * 2017-09-11 2018-01-19 广东欧珀移动通信有限公司 Image processing method and device
WO2018018771A1 (en) * 2016-07-29 2018-02-01 宇龙计算机通信科技(深圳)有限公司 Dual camera-based photography method and system
WO2018019130A1 (en) * 2016-07-29 2018-02-01 努比亚技术有限公司 Image noise reduction method, apparatus, terminal, and computer storage medium
CN107707839A (en) * 2017-09-11 2018-02-16 广东欧珀移动通信有限公司 Image processing method and device
CN108111748A (en) * 2017-11-30 2018-06-01 维沃移动通信有限公司 A kind of method and apparatus for generating dynamic image
CN108154514A (en) * 2017-12-06 2018-06-12 广东欧珀移动通信有限公司 Image processing method, device and equipment
CN108924435A (en) * 2018-07-12 2018-11-30 Oppo广东移动通信有限公司 Image processing method, device and electronic equipment
CN108924530A (en) * 2017-03-31 2018-11-30 深圳市易快来科技股份有限公司 A kind of 3D shoots method, apparatus and the mobile terminal of abnormal image correction
WO2019014842A1 (en) * 2017-07-18 2019-01-24 辛特科技有限公司 Light field acquisition method and acquisition device
CN110139033A (en) * 2019-05-13 2019-08-16 Oppo广东移动通信有限公司 Camera control method and Related product
CN110166759A (en) * 2018-05-28 2019-08-23 腾讯科技(深圳)有限公司 The treating method and apparatus of image, storage medium, electronic device
CN110248094A (en) * 2019-06-25 2019-09-17 珠海格力电器股份有限公司 Image pickup method and camera terminal
CN110929615A (en) * 2019-11-14 2020-03-27 RealMe重庆移动通信有限公司 Image processing method, image processing apparatus, storage medium, and terminal device
WO2022206168A1 (en) * 2021-03-31 2022-10-06 华为技术有限公司 Video production method and system
US11503228B2 (en) 2017-09-11 2022-11-15 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, image processing apparatus and computer readable storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820404A (en) * 2021-01-29 2022-07-29 北京字节跳动网络技术有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN114173059B (en) * 2021-12-09 2023-04-07 广州阿凡提电子科技有限公司 Video editing system, method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101582993A (en) * 2008-05-14 2009-11-18 富士胶片株式会社 Image processing device and method, computer-readable recording medium containing program
CN102377943A (en) * 2010-08-18 2012-03-14 卡西欧计算机株式会社 Image pickup apparatus, image pickup method, and storage medium storing program
CN104050651A (en) * 2014-06-19 2014-09-17 青岛海信电器股份有限公司 Scene image processing method and device
CN104580910A (en) * 2015-01-09 2015-04-29 宇龙计算机通信科技(深圳)有限公司 Image synthesis method and system based on front camera and rear camera
CN104796625A (en) * 2015-04-21 2015-07-22 努比亚技术有限公司 Picture synthesizing method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8072470B2 (en) * 2003-05-29 2011-12-06 Sony Computer Entertainment Inc. System and method for providing a real-time three-dimensional interactive environment
CN102780855B (en) * 2011-05-13 2016-03-16 晨星软件研发(深圳)有限公司 The method of image processing and relevant apparatus
CN107454343B (en) * 2014-11-28 2019-08-02 Oppo广东移动通信有限公司 Photographic method, camera arrangement and terminal
CN105227837A (en) * 2015-09-24 2016-01-06 努比亚技术有限公司 A kind of image combining method and device

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017050115A1 (en) * 2015-09-24 2017-03-30 努比亚技术有限公司 Image synthesis method
CN106973280A (en) * 2016-01-13 2017-07-21 深圳超多维光电子有限公司 A kind of processing method and apparatus of 3D image
CN106973280B (en) * 2016-01-13 2019-04-16 深圳超多维科技有限公司 A kind of processing method and apparatus of 3D image
WO2018018771A1 (en) * 2016-07-29 2018-02-01 宇龙计算机通信科技(深圳)有限公司 Dual camera-based photography method and system
WO2018019130A1 (en) * 2016-07-29 2018-02-01 努比亚技术有限公司 Image noise reduction method, apparatus, terminal, and computer storage medium
CN106454123B (en) * 2016-11-25 2019-02-22 盐城丝凯文化传播有限公司 A kind of photographing focusing method and mobile terminal
CN106454123A (en) * 2016-11-25 2017-02-22 滁州昭阳电信通讯设备科技有限公司 A kind of photographing focusing method and mobile terminal
CN107018325A (en) * 2017-03-29 2017-08-04 努比亚技术有限公司 A kind of image combining method and device
CN108924530A (en) * 2017-03-31 2018-11-30 深圳市易快来科技股份有限公司 A kind of method, apparatus and mobile terminal for correcting abnormal images in 3D shooting
CN107240072A (en) * 2017-04-27 2017-10-10 努比亚技术有限公司 A kind of screen luminance adjustment method, terminal and computer-readable recording medium
CN107240072B (en) * 2017-04-27 2020-06-05 南京秦淮紫云创益企业服务有限公司 Screen brightness adjusting method, terminal and computer readable storage medium
WO2019014842A1 (en) * 2017-07-18 2019-01-24 辛特科技有限公司 Light field acquisition method and acquisition device
US11503228B2 (en) 2017-09-11 2022-11-15 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, image processing apparatus and computer readable storage medium
CN107610078A (en) * 2017-09-11 2018-01-19 广东欧珀移动通信有限公司 Image processing method and device
US11516412B2 (en) 2017-09-11 2022-11-29 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, image processing apparatus and electronic device
CN107707839A (en) * 2017-09-11 2018-02-16 广东欧珀移动通信有限公司 Image processing method and device
CN108111748A (en) * 2017-11-30 2018-06-01 维沃移动通信有限公司 A kind of method and apparatus for generating dynamic image
CN108111748B (en) * 2017-11-30 2021-01-08 维沃移动通信有限公司 Method and device for generating dynamic image
CN108154514A (en) * 2017-12-06 2018-06-12 广东欧珀移动通信有限公司 Image processing method, device and equipment
CN108154514B (en) * 2017-12-06 2021-08-13 Oppo广东移动通信有限公司 Image processing method, device and equipment
CN110166759A (en) * 2018-05-28 2019-08-23 腾讯科技(深圳)有限公司 Image processing method and apparatus, storage medium, and electronic device
CN110166759B (en) * 2018-05-28 2021-10-15 腾讯科技(深圳)有限公司 Image processing method and device, storage medium and electronic device
CN108924435A (en) * 2018-07-12 2018-11-30 Oppo广东移动通信有限公司 Image processing method, device and electronic equipment
CN110139033A (en) * 2019-05-13 2019-08-16 Oppo广东移动通信有限公司 Camera control method and Related product
CN110248094B (en) * 2019-06-25 2020-05-05 珠海格力电器股份有限公司 Shooting method and shooting terminal
CN110248094A (en) * 2019-06-25 2019-09-17 珠海格力电器股份有限公司 Image pickup method and camera terminal
CN110929615A (en) * 2019-11-14 2020-03-27 RealMe重庆移动通信有限公司 Image processing method, image processing apparatus, storage medium, and terminal device
WO2022206168A1 (en) * 2021-03-31 2022-10-06 华为技术有限公司 Video production method and system

Also Published As

Publication number Publication date
WO2017050115A1 (en) 2017-03-30

Similar Documents

Publication Publication Date Title
CN105227837A (en) A kind of image combining method and device
CN104954689B (en) A kind of method and photographing apparatus for obtaining photos using dual cameras
CN105245774B (en) A kind of image processing method and terminal
CN105163042B (en) A kind of apparatus and method for depth image blurring processing
CN105303543A (en) Image enhancement method and mobile terminal
CN105100775A (en) Image processing method and apparatus, and terminal
CN105338242A (en) Image synthesis method and device
CN105100491A (en) Device and method for processing photo
CN105956999A (en) Thumbnail generating device and method
CN105100642B (en) Image processing method and device
CN106097284B (en) A kind of processing method and mobile terminal of night scene image
CN104917965A (en) Shooting method and device
CN105681582A (en) Control color adjusting method and terminal
CN106851113A (en) A kind of photographic method and mobile terminal based on dual camera
CN106303229A (en) A kind of photographic method and device
CN105245792A (en) Mobile terminal and image shooting method
CN105160628A (en) Method and device for acquiring RGB data
CN106909681A (en) A kind of information processing method and its device
CN105488756A (en) Picture synthesizing method and device
CN105187709A (en) Remote photography implementing method and terminal
CN105100619A (en) Apparatus and method for adjusting shooting parameters
CN105162978A (en) Method and device for photographic processing
CN105306787A (en) Image processing method and device
CN106612393A (en) Image processing method, image processing device and mobile terminal
CN106534590A (en) Photo processing method and apparatus, and terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 2016-01-06