CN107395965A - Image processing method and mobile terminal

Image processing method and mobile terminal

Info

Publication number
CN107395965A
Authority
CN
China
Prior art keywords
image
region
face
blurring
thermal imaging
Prior art date
Legal status
Granted
Application number
CN201710576627.5A
Other languages
Chinese (zh)
Other versions
CN107395965B (en)
Inventor
耿筝
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201710576627.5A
Publication of CN107395965A
Application granted
Publication of CN107395965B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the present invention provides an image processing method and a mobile terminal. The mobile terminal includes a colour camera and an infrared thermal imaging camera. The method includes: respectively obtaining a preview image and an infrared thermal imaging image captured of the same target subject by the colour camera and the infrared thermal imaging camera; obtaining face position information of the target subject in the preview image; extracting human body contour information of the target subject from the infrared thermal imaging image; determining, according to the face position information and the human body contour information, the target image region where the target subject is located in the preview image; and blurring the preview image according to the target image region. Because the infrared thermal imaging camera forms its image from the temperature radiated by objects, it is not affected by the environment, so the target person image and the background image can be distinguished accurately and a better blur (bokeh) shooting effect can be obtained.

Description

Image processing method and mobile terminal
Technical field
The present invention relates to the field of communication technology, and in particular to an image processing method and a mobile terminal.
Background art
With the increasing popularity of mobile terminals and the continuous progress of camera technology, users often take photos with their mobile terminals and share them with friends through social network platforms, which makes the shooting effect of photos a focus of users' attention. Among the available shooting styles, bokeh (background blur) shooting produces a photo in which the subject is sharp and the background is blurred. At present, mobile terminals implement photo blurring based on dual cameras. In practice, however, shooting scenes vary widely; in complex scenes or environments with weak light, existing mobile terminals distinguish the target person from the background with low accuracy, which results in a poor blur effect in the captured image.
Summary of the invention
Embodiments of the present invention provide an image processing method and a mobile terminal, to solve the problem that, in complex shooting scenes or weak-light environments, existing mobile terminals distinguish the target person from the background with low accuracy, resulting in a poor blur effect in the captured image.
In a first aspect, an embodiment of the present invention provides an image processing method applied to a mobile terminal, where the mobile terminal includes a colour camera and an infrared thermal imaging camera, and the method includes:
respectively obtaining a preview image and an infrared thermal imaging image captured of the same target subject by the colour camera and the infrared thermal imaging camera;
obtaining face position information of the target subject in the preview image;
extracting human body contour information of the target subject from the infrared thermal imaging image;
determining, according to the face position information and the human body contour information, the target image region where the target subject is located in the preview image;
and blurring the preview image according to the target image region.
In a second aspect, an embodiment of the present invention further provides a mobile terminal. The mobile terminal includes a colour camera and an infrared thermal imaging camera, and further includes:
a first obtaining module, configured to respectively obtain a preview image and an infrared thermal imaging image captured of the same target subject by the colour camera and the infrared thermal imaging camera;
a second obtaining module, configured to obtain face position information of the target subject in the preview image;
an extraction module, configured to extract human body contour information of the target subject from the infrared thermal imaging image;
a determining module, configured to determine, according to the face position information and the human body contour information, the target image region where the target subject is located in the preview image;
and a first blur processing module, configured to blur the preview image according to the target image region.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the image processing method in the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the image processing method in the embodiments of the present invention.
In this way, in the embodiments of the present invention, the preview image and the infrared thermal imaging image captured of the same target subject by the colour camera and the infrared thermal imaging camera are obtained respectively; the face position information of the target subject in the preview image is obtained; the human body contour information of the target subject is extracted from the infrared thermal imaging image; the target image region where the target subject is located in the preview image is determined according to the face position information and the human body contour information; and the preview image is blurred according to the target image region. Because the infrared thermal imaging camera can detect the temperature radiated by objects, the mobile terminal can still determine the human body contour of the target subject through the infrared thermal imaging camera even in a complex shooting scene or a weak-light environment. The mobile terminal is therefore not disturbed by the environment, distinguishes the target person image from the background image accurately, and blurs the preview image according to the target image region where the target subject is located, so that a better blur effect can be obtained.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is the first flowchart of an image processing method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the human body contour of the target subject in an infrared thermal imaging image according to an embodiment of the present invention;
Fig. 3 is the second flowchart of an image processing method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the infrared person image displayed in an infrared thermal imaging image according to an embodiment of the present invention;
Fig. 5 is a dual-camera coordinate conversion reference model diagram according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of the positions of the face images and the converted human body contour in a preview image according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the positions of the face image regions and non-face image regions of two target image regions in a preview image according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of the region division of two non-face image regions according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of a depth-of-field calculation model according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of the positions of the target image region and the non-target image region in a preview image according to an embodiment of the present invention;
Fig. 11 is the first structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 12 is the second structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 13 is the third structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 14 is the fourth structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 15 is the fifth structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 16 is the sixth structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 17 is the seventh structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 18 is the eighth structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 19 is the ninth structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 20 is the tenth structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 21 is the eleventh structural diagram of a mobile terminal according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention. The method is applied to a mobile terminal, and the mobile terminal includes a colour camera and an infrared thermal imaging camera. As shown in Fig. 1, the method includes the following steps:
Step 101: respectively obtain a preview image and an infrared thermal imaging image captured of the same target subject by the colour camera and the infrared thermal imaging camera.
The preview image is the preview picture collected by the colour camera. Because a relatively clear picture can be previewed through the colour camera, a photo taken by the colour camera can be guaranteed to have good image quality.
The infrared thermal imaging camera uses infrared thermal imaging technology: an infrared detector and an optical imaging lens receive the infrared energy distribution pattern of the measured target and project it onto the photosensitive element of the infrared detector, so as to obtain an infrared thermal image. This thermal image corresponds to the heat distribution field on the surface of the object.
In the embodiments of the present invention, the temperature radiated by objects is mapped onto an image, each temperature is displayed with a false colour, and the picture formed by stitching these false colours together constitutes the infrared thermal imaging image. Compared with visible light, infrared thermal imaging is not affected by the strength of ambient light, so a clear infrared thermal imaging image can still be formed in a complex scene or a dark environment.
It should be noted that the relative position between the colour camera and the infrared thermal imaging camera is fixed, and the distance between them should be as small as possible, so that the colour camera and the infrared thermal imaging camera can shoot the same target subject. In this way, better image quality can be obtained when subsequent image processing or fusion is performed with the preview image and the infrared thermal imaging image.
Step 102: obtain face position information of the target subject in the preview image.
To obtain the face position information of the target subject in the preview image, the face image in the preview image may first be recognised, so as to determine the position of the face of the target subject, and then the face position information of the target subject is obtained, where the face position information includes the x-direction and y-direction coordinate values of each pixel in the region where the face image is located.
In this way, by obtaining the face position information of the target subject in the preview image in this step, the person image and other images in the preview image can be preliminarily distinguished, and to some extent the mobile terminal can be prevented from mistakenly identifying other images in the infrared thermal imaging image as the infrared person image corresponding to the target subject.
Step 103: extract human body contour information of the target subject from the infrared thermal imaging image.
In the embodiments of the present invention, because the temperature radiated by the human body differs from the temperature radiated by other objects, the temperature range of human body radiation can be displayed with a specific false colour or false-colour interval, so that the human body contour information of the target subject can be determined by identifying the false colour or false-colour interval corresponding to the human body temperature range in the infrared thermal imaging image.
Specifically, because human body temperature is generally around 36.5 °C and the temperatures of different body parts differ slightly, a human body temperature range can be predefined, for example 36 °C to 38 °C. A false colour can also be predefined for each temperature, or a false-colour interval for the human body temperature range; then, according to the predefined human body temperature range and the correspondence between temperature and false colour, the false colour or false-colour interval corresponding to the human body temperature range is determined.
Extracting the human body contour information of the target subject from the infrared thermal imaging image may be done by extracting, according to the false colour or false-colour interval corresponding to the human body temperature range, the image of the corresponding false colour or false-colour interval from the infrared thermal imaging image as the infrared person image of the target subject, so that the human body contour information of the target subject can be determined, where the human body contour information can be obtained from the coordinate values of the edge pixels extracted from the position information of the infrared person image. For example, referring to Fig. 2, which is a schematic diagram of the human body contour of the target subject in an infrared thermal imaging image according to an embodiment of the present invention, this step may extract the coordinate values of the edge pixels from region A to determine the human body contour information of the target subject.
In this way, in this step the human body contour information of the target subject in the infrared thermal imaging image can be determined through the false colour or false-colour interval corresponding to the human body temperature, so that the mobile terminal is not disturbed by the environment and accurately distinguishes the infrared person image from other images.
Step 104: determine, according to the face position information and the human body contour information, the target image region where the target subject is located in the preview image.
In the embodiments of the present invention, the coordinate conversion amount between the preview image and the infrared thermal imaging image can be calculated according to the distance between the colour camera and the infrared thermal imaging camera and the focal length. Therefore, determining the target image region where the target subject is located in the preview image according to the face position information and the human body contour information may be done as follows: convert the human body contour information of the target subject in the infrared thermal imaging image into the contour information corresponding to the infrared person image in the preview image, determine the corresponding person contour according to that contour information, then mutually calibrate the person contour and the face position of the target subject in the preview image according to the face position information of the target subject in the preview image, and finally determine the target image region where the target subject is located according to the calibrated contour.
For example, if the preview is performed for only one target subject through the colour camera but more than one face image is recognised in the preview picture, for instance a model's face on a poster in the background, then the face image of the mistakenly recognised non-target subject in the preview image can be corrected according to the human body contour information determined from the infrared thermal imaging image. Conversely, when the infrared thermal imaging camera mistakenly images another object whose temperature is close to that of the human body, such as an animal, the extracted human body contour information may include the contour of the animal; in this case the extracted human body contour information can be calibrated with reference to the face image position determined in the preview image. In this way, in this step, by determining the face position information of the target subject in the preview image in combination with the infrared thermal imaging camera, the mobile terminal can be prevented from also identifying a face image in the background area of the preview image as the face image of the target subject, which would otherwise make the separation of the background and the target person inaccurate. In addition, in a low-light environment the colour camera may fail to recognise any face image; in the embodiments of the present invention, the human body contour of the target subject in the preview image is determined by detecting, with the infrared thermal imaging camera, the temperature radiated by the human body, which solves the problem that the person image cannot be identified accurately through the colour camera because of the influence of light.
Step 105: blur the preview image according to the target image region.
In the embodiments of the present invention, after the target image region where the target subject is located in the preview image is determined, the preview image can be blurred. Specifically, the target image region may be removed from the preview image, and the remaining preview image may then be blurred at a certain blur level. For example, the blur level of each pixel in the non-target image region may be set according to the distance of that pixel from a certain central point of the target image region, and each pixel is then blurred according to its blur level.
After the remaining preview image has been blurred, the image of the removed target image region is composited with the blurred preview image, which can be understood as putting the image of the removed target image region back into the position it occupied before removal in the blurred preview image.
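The removal, blur and composite described in step 105 can be pictured with a short Python sketch. This is a minimal illustration only, assuming OpenCV and NumPy and a binary mask for the target image region; the patent does not specify any particular library, blur algorithm or blur level.

    import cv2
    import numpy as np

    def blur_background(preview_bgr, target_mask, blur_strength=25):
        """Blur a preview image outside the target region and composite the
        sharp target region back in. `target_mask` is a uint8 array with the
        same height/width as the image, non-zero inside the target image region."""
        # Blur the whole preview once; the kernel size stands in for the "blur level".
        blurred = cv2.GaussianBlur(preview_bgr, (blur_strength, blur_strength), 0)
        # Keep the original pixels where the mask is set, blurred pixels elsewhere.
        mask3 = cv2.merge([target_mask, target_mask, target_mask]) > 0
        return np.where(mask3, preview_bgr, blurred)

    # Example usage with a synthetic image and a rectangular "target" mask.
    if __name__ == "__main__":
        preview = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
        mask = np.zeros((480, 640), dtype=np.uint8)
        mask[100:400, 200:450] = 255   # assumed target image region
        result = blur_background(preview, mask)
        print(result.shape)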
In this way, in the embodiments of the present invention, when the mobile terminal shoots a person through the colour camera and the infrared thermal imaging camera, different levels of blur can be applied to the background area outside the target image region in the preview image collected by the colour camera, so that a shooting effect in which the target person image is sharp and the background image is blurred can be obtained.
In this embodiment, the mobile terminal may be any device with a storage medium, such as a computer, a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), or a wearable device.
In the image processing method of this embodiment of the present invention, applied to a mobile terminal including a colour camera and an infrared thermal imaging camera, the preview image and the infrared thermal imaging image captured of the same target subject by the colour camera and the infrared thermal imaging camera are obtained respectively; the face position information of the target subject in the preview image is obtained; the human body contour information of the target subject is extracted from the infrared thermal imaging image; the target image region where the target subject is located in the preview image is determined according to the face position information and the human body contour information; and the preview image is blurred according to the target image region. Because the infrared thermal imaging camera can detect the temperature radiated by objects, the mobile terminal can still determine the human body contour information of the target subject through the infrared thermal imaging camera in a complex shooting scene or a weak-light environment. The mobile terminal is therefore not disturbed by the environment, distinguishes the target person image from the background image accurately, and blurs the preview image according to the target image region where the target subject is located, so that a better blur effect can be obtained.
Referring to Fig. 3, Fig. 3 is a flowchart of an image processing method according to an embodiment of the present invention, applied to a mobile terminal that includes a colour camera and an infrared thermal imaging camera. On the basis of the embodiment shown in Fig. 1, this embodiment refines the step of extracting the human body contour information of the target subject from the infrared thermal imaging image. As shown in Fig. 3, the method includes the following steps:
Step 301: respectively obtain a preview image and an infrared thermal imaging image captured of the same target subject by the colour camera and the infrared thermal imaging camera.
The preview image is the preview picture collected by the colour camera. Because a relatively clear picture can be previewed through the colour camera, a photo taken by the colour camera can be guaranteed to have good image quality.
The infrared thermal imaging camera uses infrared thermal imaging technology: an infrared detector and an optical imaging lens receive the infrared energy distribution pattern of the measured target and project it onto the photosensitive element of the infrared detector, so as to obtain an infrared thermal image. This thermal image corresponds to the heat distribution field on the surface of the object.
In the embodiments of the present invention, the temperature radiated by objects is mapped onto an image, each temperature is displayed with a false colour, and the picture formed by stitching these false colours together constitutes the infrared thermal imaging image. Compared with visible light, infrared thermal imaging is not affected by the strength of ambient light, so a clear infrared thermal imaging image can still be formed in a complex scene or a dark environment.
It should be noted that the relative position between the colour camera and the infrared thermal imaging camera is fixed, and the distance between them should be as small as possible, so that the colour camera and the infrared thermal imaging camera can shoot the same target subject. In this way, better image quality can be obtained when subsequent image processing or fusion is performed with the preview image and the infrared thermal imaging image.
Step 302: obtain face position information of the target subject in the preview image.
Optionally, the step of obtaining the face position information of the target subject in the preview image includes: performing face image recognition on the preview image; and obtaining the face position information of the recognised face image.
Performing face image recognition on the preview image may mean using face recognition technology to identify whether the preview image contains face image features; if face image features are recognised, it is confirmed that the preview image includes a face image.
On the basis that face image features are recognised, the face position information of the recognised face image is obtained, where the face position information includes the x-direction and y-direction coordinate values of each pixel in the region where the recognised face image is located.
In the embodiments of the present invention, by performing face recognition on the preview image, the face position of the target subject collected by the colour camera can be identified, so that the face position information of the target subject is obtained. In this way, the person image and other images in the preview image can be preliminarily distinguished, and to some extent the mobile terminal can be prevented from mistakenly identifying other images in the infrared thermal imaging image as the infrared person image corresponding to the target subject.
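As a hedged illustration of this face recognition step (the patent does not prescribe a specific technique), a stock OpenCV Haar-cascade detector can return face rectangles, from which the per-pixel face position information follows; the cascade file and parameters below are conventional defaults, not values taken from the patent.

    import cv2

    def detect_face_positions(preview_bgr):
        """Return a list of (x, y, w, h) rectangles for faces found in the
        preview image. A stock OpenCV Haar cascade is used purely for
        illustration; the patent only requires some face recognition step."""
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        # Each rectangle implicitly gives the x/y coordinates of every pixel
        # in the face region, i.e. the "face position information".
        return [tuple(map(int, f)) for f in faces]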
Step 303: extract human body contour information of the target subject from the infrared thermal imaging image.
Optionally, the step of extracting the human body contour information of the target subject from the infrared thermal imaging image includes: obtaining a temperature range of the human body; determining, according to a preset correspondence between temperature ranges and false-colour intervals, the false-colour interval corresponding to the temperature range of the human body; determining the image whose false colour falls within the false-colour interval corresponding to the temperature range of the human body in the infrared thermal imaging image as the infrared person image of the target subject; and extracting the human body contour information of the infrared person image.
Obtaining the temperature range of the human body may mean obtaining a predefined human body temperature range, for example 36 °C to 38 °C.
In the embodiments of the present invention, the correspondence between temperature ranges and false-colour intervals can be preset; the correspondence may map one temperature value to one false colour, so that a temperature range corresponds to a false-colour interval, for example 36 °C to dark red, 37 °C to red, and 38 °C to deep red. In this way, according to the preset correspondence between temperature ranges and false-colour intervals, the false-colour interval corresponding to the human body temperature range can be determined as, say, dark red to deep red.
Because the infrared thermal imaging camera images objects according to their radiated temperature using corresponding false colours, the infrared thermal imaging image may be displayed with a variety of false colours, or it may display only the image in the false-colour interval corresponding to the human body temperature range. In this step, the image whose false colour falls within the false-colour interval corresponding to the human body temperature range in the infrared thermal imaging image can therefore be determined as the infrared person image of the target subject; or, if the infrared thermal imaging image displays only the image in the false-colour interval corresponding to the human body temperature range, the displayed image can be directly determined as the infrared person image of the target subject. For example, referring to Fig. 4, which is a schematic diagram of the infrared person image displayed in an infrared thermal imaging image according to an embodiment of the present invention, it should be noted that the false-colour interval corresponding to the human body temperature range is represented by hatching in the figure.
After the infrared person image of the target subject is determined, the position information of the infrared person image can be obtained, and the coordinate values of the edge pixels of the infrared person image can be extracted from that position information, so as to obtain the human body contour information of the infrared person image.
In the embodiments of the present invention, by obtaining the predefined human body temperature range and using the preset correspondence between temperature ranges and false-colour intervals, the false-colour interval corresponding to the human body temperature range is determined, the position of the infrared person image in the infrared thermal imaging image is then determined, and the human body contour information of the infrared person image is extracted from it. In this way, the mobile terminal is not disturbed by the environment and accurately distinguishes the infrared person image from other images.
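A minimal sketch of this extraction, under the assumption that the thermal camera exposes a per-pixel temperature map in degrees Celsius (many devices deliver false-colour frames instead, in which case the same thresholding would be done in colour space); the 36-38 °C range comes from the example above, while the OpenCV calls and the largest-blob choice are illustrative assumptions.

    import cv2
    import numpy as np

    def extract_body_contour(temperature_map, t_low=36.0, t_high=38.0):
        """Keep pixels whose radiated temperature falls in the human range and
        return the largest connected contour as the human body contour."""
        body_mask = ((temperature_map >= t_low) &
                     (temperature_map <= t_high)).astype(np.uint8) * 255
        contours, _ = cv2.findContours(body_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None, body_mask
        # The infrared person image is taken to be the largest warm blob.
        body = max(contours, key=cv2.contourArea)
        return body, body_mask   # body: N x 1 x 2 array of edge pixel coordinates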
Step 304: determine, according to the face position information and the human body contour information, the target image region where the target subject is located in the preview image.
Optionally, the step of determining, according to the face position information and the human body contour information, the target image region where the target subject is located in the preview image includes: obtaining the coordinate conversion amount between the preview image and the infrared thermal imaging image; performing coordinate conversion on the human body contour information according to the coordinate conversion amount, to obtain converted contour information in the preview image; judging whether the face position information falls within the range of the converted contour information; if the face position information falls within the range of the converted contour information, identifying the image region corresponding to the converted contour information in the preview image; and determining the image region corresponding to the converted contour information as the target image region where the target subject is located.
In the embodiments of the present invention, the coordinate conversion formula between the preview image and the infrared thermal imaging image can be derived according to the distance between the colour camera and the infrared thermal imaging camera, so as to obtain the coordinate conversion amount. Taking the case where the colour camera and the infrared thermal imaging camera are at the same horizontal position as an example, the way of obtaining the coordinate conversion amount is described below. Referring to Fig. 5, Fig. 5 is a dual-camera coordinate conversion reference model diagram according to an embodiment of the present invention.
Suppose the focal length of the colour camera and the infrared thermal imaging camera is f, the distance between the colour camera and the infrared thermal imaging camera is T, and the object distance of a point P is Z. The x-direction coordinate of P in the preview image collected by the colour camera is x_L, and the x-direction coordinate of P in the infrared thermal imaging image obtained by the infrared thermal imaging camera is x_R. The y-direction coordinate of a point in the preview image is the same as the y-direction coordinate of that point in the infrared thermal imaging image, so only the x-direction coordinate conversion amount between the preview image and the infrared thermal imaging image needs to be obtained.
According to the principle of similar triangles, (x_L - x_R) / T = f / (Z + f), that is, x_L = fT / (Z + f) + x_R.
It follows from this formula that the coordinate conversion amount between the preview image and the infrared thermal imaging image is fT / (Z + f) + x_R.
It should be noted that, for the case where the colour camera and the infrared thermal imaging camera are at the same vertical position, the x-direction coordinate of a point P in the preview image is the same as the x-direction coordinate of that point in the infrared thermal imaging image, and only the y-direction coordinate conversion amount between the preview image and the infrared thermal imaging image needs to be obtained; the derivation is similar to the one above and, to avoid repetition, is not repeated here.
After the coordinate conversion amount is determined, coordinate conversion can be performed on the human body contour information of the infrared person image using the coordinate conversion amount, to obtain the converted contour information in the preview image, where the converted contour information in the preview image can be understood as the contour information of the image obtained after the infrared person image in the infrared thermal imaging image is moved into the preview image by the coordinate conversion amount.
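Applying the coordinate conversion amount to the extracted contour can be sketched as below, following the formula as stated above; in practice f, T and Z must be expressed in compatible units (for example, f in pixels), and the variable names are illustrative.

    import numpy as np

    def convert_contour(contour_xy, f_px, baseline_t, object_z):
        """Shift contour points from thermal-image coordinates to preview-image
        coordinates for two horizontally aligned cameras.

        contour_xy: (N, 2) array of (x_R, y) points in the thermal image.
        f_px:       focal length f expressed in pixels.
        baseline_t: distance T between the two cameras.
        object_z:   object distance Z of the subject.
        """
        offset_x = f_px * baseline_t / (object_z + f_px)   # fT / (Z + f)
        converted = contour_xy.astype(np.float64).copy()
        converted[:, 0] += offset_x   # x_L = x_R + fT / (Z + f); y is unchanged
        return converted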
In the embodiments of the present invention, whether the face position information falls within the range of the converted contour information can be judged according to the converted contour information, so as to determine, according to the judgement result, whether the face position information obtained in step 302 is indeed the face position information of the target subject. Judging whether the face position information falls within the range of the converted contour information may mean judging whether most of the pixels in the region corresponding to the face position information lie within the region enclosed by the converted contour information.
If the face position information falls within the range of the converted contour information, the image region corresponding to the converted contour information can be identified in the preview image, and that region is determined as the target image region where the target subject is located. It should be noted that if the face position information does not fall within the range of the converted contour information, it can be determined that the face position information obtained in step 302 is not the face position information of the target subject, that is, a misrecognition exists, and the obtained face position information is not determined as the face position information of the target subject.
For example, referring to Fig. 6, Fig. 6 is a schematic diagram of the positions of the face images and the converted human body contour in a preview image according to an embodiment of the present invention. Region B and region E in the preview image are the face image regions determined according to step 302, and region D is the human body contour region determined according to the converted contour information. As shown in the figure, region B lies inside region D while region E lies outside region D, so region B can be recognised as the face image region corresponding to the face position information of the target subject, the image region corresponding to the converted contour information can then be determined as the target image region where the target subject is located, and the image in region E is identified as a background image.
In the embodiments of the present invention, by performing coordinate conversion on the human body contour information to obtain the converted contour information in the preview image, and then using the face position information together with the converted contour information to further determine the target image region where the target subject is located, the accuracy with which the mobile terminal distinguishes the target person from the background is greatly improved, and the problem that the colour camera cannot recognise a face image in a low-light environment can be effectively solved through the infrared thermal imaging camera.
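The "mostly inside" judgement can be sketched as follows, assuming the face position information is summarised as a bounding rectangle and the converted contour as a point array; the 0.5 majority threshold is an illustrative choice, not a value fixed by the patent.

    import cv2
    import numpy as np

    def face_belongs_to_contour(face_rect, contour, image_shape, min_inside=0.5):
        """Return True if most pixels of the face rectangle lie inside the
        region enclosed by the converted human body contour."""
        h, w = image_shape[:2]
        contour_mask = np.zeros((h, w), dtype=np.uint8)
        cv2.drawContours(contour_mask,
                         [contour.reshape(-1, 1, 2).astype(np.int32)],
                         -1, 255, -1)   # fill the contour region
        x, y, fw, fh = face_rect
        face_patch = contour_mask[y:y + fh, x:x + fw]
        inside_ratio = np.count_nonzero(face_patch) / max(face_patch.size, 1)
        return inside_ratio >= min_inside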
Optionally, after the step of determining, according to the face position information and the human body contour information, the target image region where the target subject is located in the preview image, the method further includes: if the preview image includes at least two target image regions, separately blurring the non-face image regions of the at least two target image regions; where the non-face image regions of the at least two target image regions are all the image regions of the at least two target image regions other than the face image regions.
The preview image including at least two target image regions can be understood as the at least two target image regions determined through the colour camera and the infrared thermal imaging camera when at least two target subjects are shot.
In the embodiments of the present invention, the non-face image regions of the at least two target image regions in the preview image can be blurred separately, where the non-face image regions of the at least two target image regions are all the image regions of the at least two target image regions other than the face image regions. For example, referring to Fig. 7, Fig. 7 is a schematic diagram of the positions of the face image regions and non-face image regions of two target image regions in a preview image according to an embodiment of the present invention: region B1 is the face image region of the first target image region, region B2 is the non-face image region of the first target image region, region C1 is the face image region of the second target image region, and region C2 is the non-face image region of the second target image region.
A specific way of blurring is to remove the face images of the at least two target image regions from the preview image, blur the non-face image regions in the remaining preview image according to a preset blur processing algorithm, and then composite the removed face images of the at least two target image regions with the blurred non-face image regions. In this way, an image that highlights the facial features of the target persons can be previewed and captured through the colour camera, so that better image quality can be obtained.
Optionally, the step of separately blurring the non-face image regions of the at least two target image regions includes: setting a reference face sub-region in the face image region of each target image region; dividing each non-face image region into at least two non-face image sub-regions according to the distance between each pixel in the non-face image region and the face image region; determining the blur level of each non-face image sub-region according to its distance from the reference face sub-region of the face image region; removing the face image region of each target image region; blurring each non-face image sub-region according to its blur level; and compositing the removed face image regions with the blurred non-face image sub-regions.
Setting the reference face sub-region in the face image region of each target image region may be done by determining a central point from several edge pixels of the face image region of each target image region and determining the corresponding reference face sub-region with the central point as the centre, or by determining a reference depth of field from the depths of field of several pixels in each target image region and then determining a reference face sub-region based on that reference depth of field, where the depth of field of each pixel in each reference face sub-region matches the corresponding reference depth of field.
In the embodiments of the present invention, the specific blur level can also be determined according to the distance between each pixel in each non-face image region and the face image region. Specifically, each non-face image region can be divided into at least two non-face image sub-regions according to the distance of each of its pixels from the corresponding face image region and a preset division level. The number of sub-regions into which each non-face image region is divided can be determined by the preset division level; for example, if the preset division level is 0, each non-face image region is divided into an upper-body region and a lower-body region, and if the preset division level is 1, then on the basis of division level 0 the upper-body region and the lower-body region of each non-face image region are each further divided into an upper part and a lower part.
Referring to Fig. 8, Fig. 8 is a schematic diagram of the region division of two non-face image regions according to an embodiment of the present invention. Suppose the preset division level is 1, and a first distance range, a second distance range, a third distance range and a fourth distance range of pixels in each non-face image region from the corresponding face image region are preset accordingly. As shown in Fig. 8, region B1 is the face image region of the first target image region and region C1 is the face image region of the second target image region; according to the preset division level and the distance ranges, the non-face image region B2 of the first target image region can be divided into four sub-regions B21, B22, B23 and B24, and the non-face image region C2 of the second target image region can be divided into four sub-regions C21, C22, C23 and C24.
After the reference face sub-region of each face image region is set and each non-face image region is divided into regions, the blur level of each non-face image sub-region can be determined according to its distance from the reference face sub-region of the corresponding face image region. Specifically, the blur level of each non-face image sub-region can vary with that distance, for example the closer the distance, the lower the blur level, and the farther the distance, the higher the blur level. In this way, different levels of blur can be applied according to the distance of each non-face image sub-region from the reference face sub-region of the corresponding face image region, so as to obtain a blur effect with a sense of depth.
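The division into distance bands and the assignment of increasing blur levels might look like the following sketch; the distance here is taken from each body pixel to the nearest face pixel with a distance transform, and the band edges are arbitrary example values rather than the preset distance ranges of the patent.

    import cv2
    import numpy as np

    def assign_blur_levels(body_mask, face_mask, band_edges=(40, 80, 120)):
        """Split the non-face part of a person region into distance bands from
        the face region and give each band an increasing blur level (0..len(band_edges))."""
        # Distance (in pixels) from every pixel to the nearest face pixel.
        dist_to_face = cv2.distanceTransform(
            (face_mask == 0).astype(np.uint8), cv2.DIST_L2, 5)
        non_face = (body_mask > 0) & (face_mask == 0)
        levels = np.zeros(body_mask.shape, dtype=np.uint8)
        # np.digitize maps each distance into one of the bands -> blur level.
        levels[non_face] = np.digitize(dist_to_face[non_face], band_edges)
        return levels   # 0 = closest band (lightest blur), 3 = farthest band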
In the embodiments of the present invention, the preview image can be blurred according to the blur level of each non-face image sub-region. Specifically, the face image region of each target image region in the preview image can be removed, and each non-face image sub-region can then be blurred separately at the determined blur level.
After each non-face image sub-region is blurred, the removed face image regions are composited with the blurred non-face image sub-regions, where the image composition can be understood as putting each removed face image region back into the position it occupied before removal in the blurred preview image.
In this way, in the embodiments of the present invention, when the mobile terminal shoots at least two target persons through the colour camera and the infrared thermal imaging camera, different levels of blur can be applied to the non-face image sub-regions of each target image region in the preview image collected by the colour camera, so that a blur effect that highlights the facial features of the target persons and has a good sense of depth can be obtained.
It should be noted that the way of blurring the non-face image sub-regions of each target image region in this embodiment of the present invention can also be applied to a scene in which only one target subject is shot, and the specific way of determining the blur level is similar to the implementation details in this embodiment and, to avoid repetition, is not repeated here.
Optionally, the step of setting the reference face sub-region in the face image region of each target image region includes: separately calculating the depth-of-field information of each pixel in the face image region of each target image region; separately calculating the average depth of field of each face image region; and determining the sub-region of each face image region whose depth of field equals the average depth of field of that face image region as the reference face sub-region; where the average depth of field of each face image region is the average depth of field of at least two pixels in the face image region.
In the embodiments of the present invention, the corresponding reference face sub-region can be determined according to the depth-of-field information of each pixel in the face image region of each target image region, so the depth of field of each pixel in the face image region of each target image region can be calculated separately, where the depth of field refers to the range of subject distances, in front of the camera lens or another imager, within which a sharp image can be obtained. Referring to Fig. 9, Fig. 9 is a schematic diagram of a depth-of-field calculation model according to an embodiment of the present invention; the calculation of the depth of field is described below with reference to Fig. 9.
As shown in Fig. 9, the depth of field ΔL of a pixel consists of the front depth of field ΔL1 and the rear depth of field ΔL2. The depth of field ΔL is related to the circle-of-confusion diameter δ, the lens focal length f, the shooting aperture number F of the lens and the focusing distance L, and can be calculated as ΔL1 = FδL² / (f² + FδL), ΔL2 = FδL² / (f² - FδL), ΔL = ΔL1 + ΔL2 = 2f²FδL² / (f⁴ - F²δ²L²).
The depth of field of each pixel in the face image region of each target image region can be calculated according to these formulas.
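A worked form of these formulas, as a sketch with illustrative numbers (all lengths must share one unit, millimetres in the example):

    def depth_of_field(f, F, L, delta):
        """Front, rear and total depth of field for focal length f, aperture
        number F, focusing distance L and circle-of-confusion diameter delta."""
        front = (F * delta * L ** 2) / (f ** 2 + F * delta * L)
        rear = (F * delta * L ** 2) / (f ** 2 - F * delta * L)
        return front, rear, front + rear

    # Example: 50 mm lens at f/2.0 focused at 2000 mm, 0.03 mm circle of confusion.
    if __name__ == "__main__":
        dl1, dl2, dl = depth_of_field(f=50.0, F=2.0, L=2000.0, delta=0.03)
        print(round(dl1, 1), round(dl2, 1), round(dl, 1))   # ~91.6, ~100.8, ~192.4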
Calculating the average depth of field of each face image region may be done by first determining at least two pixels from the face image region and then calculating the average depth of field from the depth-of-field information of those pixels. For example, the average of the depth-of-field information of the upper and lower edge pixels of each face image region can be calculated to obtain the average depth of field, or the average of the depth-of-field information of all pixels in each face image region can be calculated.
Determining the sub-region of each face image region whose depth of field equals the average depth of field of that face image region as the reference face sub-region may be done by determining, according to the obtained depth-of-field information of each pixel in the face image region of each target image region, the region formed by the pixels whose depth of field equals the average depth of field in the corresponding face image region as the reference face sub-region of that face image region.
In this way, in the embodiments of the present invention, the reference face sub-region of each face image region is calculated according to the depth-of-field information of each pixel in the face image region of each target image region, so that the distance of each non-face image sub-region from the corresponding reference face sub-region can be determined according to the position of each reference face sub-region, different blur levels can then be determined for the non-face image sub-regions, and a blur effect with a good sense of depth can be obtained.
Step 305: set a reference image sub-region of the target image region.
Setting the reference image sub-region of the target image region may be done by determining a central point from several edge pixels of the target image region and determining a reference image sub-region with the central point as the centre, or by determining an average reference depth of field from the depths of field of several pixels in the target image region and then determining a reference image sub-region based on that average reference depth of field, where the depth of field of each pixel in the reference image sub-region matches the average reference depth of field.
In this way, by setting the reference image sub-region of the target image region, each pixel in the region of the preview image outside the target image region can be assigned a different blur level with reference to its distance from the reference image sub-region, so as to obtain a background blur effect with a good sense of depth.
Optionally, the step of reference picture subregion of the setting object region, including:Institute is calculated respectively State the depth of view information of each pixel in object region;Calculate the flat of at least two pixels in the object region The equal depth of field;Subregion of the depth of field in the object region for the average depth of field is defined as reference picture subregion.
In the embodiment of the present invention, it can determine to join according to the depth of view information of each pixel in the object region Image region is examined, therefore the depth of field of each pixel in the object region can be calculated, wherein, the object-image region The calculation formula of the depth of field of each pixel may refer to the related description in previous embodiment in domain, not repeat herein.
The average depth of field of at least two pixels in the above-mentioned calculating object region, can be first from the target At least two pixels are determined in image-region, average scape is then calculated according to the depth of view information of at least two pixel It is deep.Such as:The average value of the depth of view information of two edge pixel points up and down in the object region can be calculated, is obtained The average depth of field, or the average value of the depth of view information of all pixels point in the object region is calculated, obtain the average depth of field.
It is above-mentioned that subregion of the depth of field in the object region for the average depth of field is defined as reference picture sub-district Domain, can be the depth of view information of each pixel in the object region according to acquisition, by the object region The region that the pixel that the depth of field is equal to the average depth of field is formed is defined as reference picture subregion.
In this way, in the embodiment of the present invention, the reference picture subregion of the object region is calculated from the depth of view information of each pixel in the object region, so that, according to the position of the reference picture subregion, a different virtualization grade can be determined for each pixel of the preview image outside the object region, and a background-blurring shooting effect with a better sense of depth is obtained.
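As a concrete illustration of step 305, the sketch below computes the average depth of field of the object region and keeps the pixels whose depth of field matches it as the reference picture subregion; the NumPy representation, the matching tolerance and the function name are assumptions made for illustration only.

```python
import numpy as np

def reference_picture_subregion(depth, region_mask, tol=0.05):
    """depth: HxW array of per-pixel depth of field; region_mask: boolean mask of
    the object region. Returns the reference picture subregion mask and the average depth."""
    avg_depth = depth[region_mask].mean()              # average depth of field of the region
    # Pixels whose depth of field (approximately) equals the average depth of field.
    matches = np.abs(depth - avg_depth) <= tol * avg_depth
    return region_mask & matches, avg_depth
```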
Step 306: determine the virtualization grade of each pixel in the non-object image region according to its distance from the reference picture subregion, where the non-object image region is all of the image-region in the preview image other than the object region.
The non-object image region is all of the image-region in the preview image other than the object region. For example, referring to Figure 10, which is a schematic diagram of the positions of the object region and the non-object image region in a preview image provided in an embodiment of the present invention, region B in the figure is the object region and region F is the non-object image region.
In this step, the virtualization grade of each pixel in the non-object image region is determined according to its distance from the reference picture subregion; specifically, the virtualization grade of a pixel varies with that distance, for example the nearer the distance the smaller the virtualization grade, and the farther the distance the larger the virtualization grade. In this way, the pixels of the non-object image region are virtualized to different grades according to their distance from the reference picture subregion, so as to obtain a background-blurring shooting effect with a sense of depth.
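The following sketch maps each non-object pixel's distance from the reference picture subregion to a virtualization grade, as described above; OpenCV's distance transform and the number of grades are implementation assumptions rather than part of the embodiment.

```python
import numpy as np
import cv2

def pixel_virtualization_grades(ref_mask, target_mask, num_grades=4):
    """ref_mask / target_mask: boolean HxW arrays for the reference picture
    subregion and the object region. Returns an integer grade per pixel."""
    # Distance (in pixels) from every location to the reference picture subregion.
    dist = cv2.distanceTransform((~ref_mask).astype(np.uint8), cv2.DIST_L2, 3)
    farthest = dist.max() or 1.0
    grades = np.ceil(dist / farthest * num_grades).astype(np.int32)
    grades[target_mask] = 0          # the object region itself is never virtualized
    return grades
```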
It should be noted that, when the non-object image region and the non-face image subregions are virtualized at the same time, the virtualization grade of every non-face image subregion may be made smaller than the virtualization grade of every pixel in the non-object image region; in this way, after the preview image is virtualized, a shooting effect is obtained in which the blur increases gradually from the non-face image subregions to the non-object image region.
Step 307: remove the object region from the preview image.
In this step, in order to carry out virtualization processing on the non-object image region of the preview image, the object region is removed from the preview image; specifically, the image in the object region is extracted, leaving a preview image that contains only the image of the non-object image region. Virtualization processing can then be carried out on the remaining preview image according to the virtualization grade of each pixel of the non-object image region determined in step 306, which effectively prevents the image in the object region from being blurred by mistake.
Step 308: carry out virtualization processing on each pixel of the non-object image region according to its virtualization grade.
In this step, after the object region has been removed, each pixel of the non-object image region is virtualized according to the virtualization grade determined in step 306, so that the image at each pixel is blurred to the corresponding grade; a virtualization shooting effect is thereby obtained in which the target shooting character image is clear and the background image is blurred.
Step 309: carry out image synthesis of the removed object region with the non-object image region after virtualization processing.
In this step, after the non-object image region has been virtualized, the removed object region and the virtualized non-object image region are synthesized into one image; the image synthesis can be understood as putting the removed object region back into the virtualized preview image at the position it occupied before removal. In this way, the synthesized preview image has a shooting effect in which the target shooting character image is clear and the background image is virtualized.
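To make steps 307 to 309 concrete, the sketch below blurs the non-object pixels grade by grade and leaves the object region untouched, which amounts to putting the removed region back in place as in step 309; Gaussian blurring merely stands in for whatever virtualization filter an implementation might use, and all names are assumptions.

```python
import numpy as np
import cv2

def virtualize_background(preview, target_mask, grades):
    """preview: HxWx3 image; target_mask: boolean mask of the object region;
    grades: int array of per-pixel virtualization grades (0 = keep sharp)."""
    result = preview.copy()
    for g in np.unique(grades):
        if g == 0:
            continue                              # grade 0 = object region, kept sharp
        k = 2 * int(g) + 1                        # larger grade -> larger blur kernel
        blurred = cv2.GaussianBlur(preview, (k, k), 0)
        sel = (grades == g) & (~target_mask)      # only non-object pixels of this grade
        result[sel] = blurred[sel]
    # The untouched object-region pixels correspond to the removed region being
    # put back at its original position (the image synthesis of step 309).
    return result
```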
The image processing method of the embodiment of the present invention is applied to a mobile terminal that includes a colour imagery shot and an infrared thermal imaging camera: the preview image and the infrared thermal imaging image gathered by the colour imagery shot and the infrared thermal imaging camera for the same target reference object are obtained respectively; the face location information of the target reference object in the preview image is obtained; the human body contour outline information of the target reference object is extracted from the infrared thermal imaging image; the object region where the target reference object is located in the preview image is determined according to the face location information and the human body contour outline information; and virtualization processing is carried out on the preview image according to the object region. Because the infrared thermal imaging camera can detect the temperature radiated by an object, the mobile terminal can still determine the human body contour outline information of the target reference object through the infrared thermal imaging camera in a complex photographed scene or under weak light; the mobile terminal is therefore not disturbed by the environment, distinguishes the target shooting character image from the background image accurately, and carries out virtualization processing on the preview image according to the object region where the target reference object is located, so that a better virtualization shooting effect is obtained.
Referring to Figure 11, Figure 11 is a structure chart of a mobile terminal provided in an embodiment of the present invention; the mobile terminal can realize the details of the image processing methods in the method embodiments of Fig. 1 and Fig. 3 and achieve the same effect. The mobile terminal includes a colour imagery shot and an infrared thermal imaging camera; as shown in Figure 11, the mobile terminal 1100 also includes a first acquisition module 1110, a second acquisition module 1120, an extraction module 1130, a determining module 1140 and a first virtualization processing module 1150, where the first acquisition module 1110 is connected with the second acquisition module 1120 and with the extraction module 1130, the second acquisition module 1120 is also connected with the determining module 1140, the extraction module 1130 is also connected with the determining module 1140, and the determining module 1140 is also connected with the first virtualization processing module 1150, where:
First acquisition module 1110, for respectively obtaining the preview image and the infrared thermal imaging image gathered by the colour imagery shot and the infrared thermal imaging camera for the same target reference object;
Second acquisition module 1120, for obtaining the face location information of the target reference object in the preview image;
Extraction module 1130, for extracting the human body contour outline information of the target reference object from the infrared thermal imaging image;
Determining module 1140, for determining, according to the face location information and the human body contour outline information, the object region where the target reference object is located in the preview image;
First virtualization processing module 1150, for carrying out virtualization processing on the preview image according to the object region.
Optionally, as shown in figure 12, the extraction module 1130 includes:
First acquisition unit 1131, for obtaining the temperature range of the human body;
First determining unit 1132, for determining, according to the preset corresponding relation between temperature ranges and false colour sections, the false colour section corresponding to the temperature range of the human body;
Second determining unit 1133, for determining the image whose false colour lies in the false colour section corresponding to the temperature range of the human body in the infrared thermal imaging image as the infrared character image of the target reference object;
Extraction unit 1134, for extracting the human body contour outline information of the infrared character image (a sketch of this extraction follows this list).
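A hypothetical sketch of what the extraction module computes is given below: the thermal image is thresholded on the temperature range of the human body (the false colour lookup is replaced here by a direct temperature threshold, an equivalent simplification) and the outer contour of the resulting silhouette is taken as the human body contour outline information. The 30-40 degC range, the per-pixel temperature map and the OpenCV calls are assumptions.

```python
import numpy as np
import cv2

def human_contour(temperature_map, t_low=30.0, t_high=40.0):
    """temperature_map: HxW array of per-pixel temperatures from the thermal camera."""
    mask = ((temperature_map >= t_low) & (temperature_map <= t_high)).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Keep the largest connected silhouette as the outline of the infrared character image.
    return max(contours, key=cv2.contourArea)
```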
Optionally, as shown in figure 13, the determining module 1140 includes:
Second acquisition unit 1141, for obtaining the Coordinate Conversion amount between the preview image and the infrared thermal imaging image;
Converting unit 1142, for carrying out Coordinate Conversion on the human body contour outline information according to the Coordinate Conversion amount to obtain the transformed profile information in the preview image;
Judging unit 1143, for judging whether the face location information is within the range of the transformed profile information;
First recognition unit 1144, for identifying, if the face location information is within the range of the transformed profile information, the image-region corresponding to the transformed profile information in the preview image;
3rd determining unit 1145, for determining the image-region corresponding to the transformed profile information as the object region where the target reference object is located (see the sketch after this list).
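The sketch below illustrates the determining module under the assumption that the Coordinate Conversion amount can be expressed as a per-axis scale and offset: the thermal-image contour is converted into preview-image coordinates and the face location is tested against the converted profile. The affine form of the conversion and all names are assumptions.

```python
import numpy as np
import cv2

def object_region_outline(contour, scale, offset, face_center):
    """contour: Nx1x2 points in thermal-image coordinates; scale/offset: the assumed
    Coordinate Conversion amount; face_center: (x, y) in preview-image coordinates."""
    s = np.asarray(scale, np.float32)
    o = np.asarray(offset, np.float32)
    converted = (contour.astype(np.float32) * s + o).astype(np.int32)   # transformed profile
    # A non-negative result means the face centre lies inside or on the converted profile.
    inside = cv2.pointPolygonTest(converted, (float(face_center[0]), float(face_center[1])), False) >= 0
    return converted if inside else None
```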
Optionally, as shown in figure 14, the mobile terminal 1100 also includes:
Second virtualization processing module 1160, for carrying out, if the preview image includes at least two object regions, virtualization processing on the non-face image-region of each of the at least two object regions respectively;
Wherein, the non-face image-region of the at least two object regions is all of the image-region in the at least two object regions other than the facial image regions.
Optionally, as shown in figure 15, the second virtualization processing module 1160 includes:
First setting unit 1161, for setting the reference man's face region in the facial image region of each object region;
Division unit 1162, for dividing each non-face image-region into at least two non-face image subregions according to the distance of each of its pixels from the facial image region;
4th determining unit 1163, for determining the virtualization grade of each non-face image subregion according to its distance from the reference man's face region in the facial image region;
First removing unit 1164, for removing the facial image region from each object region;
First virtualization processing unit 1165, for carrying out virtualization processing on each non-face image subregion according to its virtualization grade;
First image composing unit 1166, for carrying out image synthesis of the removed facial image regions with the non-face image subregions after virtualization processing (a sketch of the division step follows this list).
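For the division unit, one possible realisation is to band the non-face image-region by its pixel distance from the facial image region, each band forming one non-face image subregion; the band width, the distance transform and the labelling scheme below are assumptions for illustration only.

```python
import numpy as np
import cv2

def split_into_subregions(face_mask, region_mask, band_px=40):
    """face_mask, region_mask: boolean HxW arrays for the facial image region and the
    whole object region. Returns an int label per pixel (0 = face or outside the
    object region, 1..k = non-face image subregion index by increasing distance)."""
    dist = cv2.distanceTransform((~face_mask).astype(np.uint8), cv2.DIST_L2, 3)
    labels = (dist // band_px + 1).astype(np.int32)
    labels[face_mask | ~region_mask] = 0
    return labels
```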
Optionally, as shown in figure 16, first setting unit 1161 includes:
First computation subunit 11611, for calculating the depth of view information of each pixel in the facial image region of each object region;
Second computation subunit 11612, for calculating the average depth of field of each facial image region respectively;
First determination subelement 11613, for determining the subregion of each facial image region whose depth of field equals the average depth of field of that facial image region as the reference man's face region;
Wherein, the average depth of field of each facial image region is the average depth of field of at least two pixels in the facial image region.
Optionally, as shown in figure 17, the first virtualization processing module 1150 includes:
Second setting unit 1151, for setting the reference picture subregion of the object region;
5th determining unit 1152, for determining the virtualization grade of each pixel in the non-object image region according to its distance from the reference picture subregion;
Second removing unit 1153, for removing the object region from the preview image;
Second virtualization processing unit 1154, for carrying out virtualization processing on each pixel of the non-object image region according to its virtualization grade;
Second image composing unit 1155, for carrying out image synthesis of the removed object region with the non-object image region after virtualization processing;
Wherein, the non-object image region is all of the image-region in the preview image other than the object region.
Optionally, as shown in figure 18, second setting unit 1151 includes:
3rd computation subunit 11511, for calculating the depth of view information of each pixel in the object region respectively;
4th computation subunit 11512, for calculating the average depth of field of at least two pixels in the object region;
Second determination subelement 11513, for determining the subregion of the object region whose depth of field equals the average depth of field as the reference picture subregion.
Optionally, as shown in figure 19, second acquisition module 1120 includes:
Second recognition unit 1121, for carrying out facial image identification on the preview image;
3rd acquiring unit 1122, for obtaining the face location information of the identified facial image.
The mobile terminal of the embodiment of the present invention includes a colour imagery shot and an infrared thermal imaging camera: the preview image and the infrared thermal imaging image gathered by the colour imagery shot and the infrared thermal imaging camera for the same target reference object are obtained respectively; the face location information of the target reference object in the preview image is obtained; the human body contour outline information of the target reference object is extracted from the infrared thermal imaging image; the object region where the target reference object is located in the preview image is determined according to the face location information and the human body contour outline information; and virtualization processing is carried out on the preview image according to the object region. Because the infrared thermal imaging camera can detect the temperature radiated by an object, the mobile terminal can still determine the human body contour outline information of the target reference object through the infrared thermal imaging camera in a complex photographed scene or under weak light; the mobile terminal is therefore not disturbed by the environment, distinguishes the target shooting character image from the background image accurately, and carries out virtualization processing on the preview image according to the object region where the target reference object is located, so that a better virtualization shooting effect is obtained.
The embodiment of the present invention also provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and runnable on the processor; when the computer program is executed by the processor, each process of the above image processing method embodiment is realized and the same technical effect is achieved, which, to avoid repetition, is not described again here.
The embodiment of the present invention also provides a computer-readable recording medium on which a computer program is stored; when the computer program is executed by a processor, each process of the above image processing method embodiment is realized and the same technical effect is achieved, which, to avoid repetition, is not described again here. The computer-readable recording medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disc or a compact disc.
Referring to Figure 20, Figure 20 is a structure chart of a mobile terminal provided in an embodiment of the present invention; the mobile terminal can realize the details of the image processing methods in the method embodiments of Fig. 1 and Fig. 3 and achieve the same effect. As shown in Figure 20, the mobile terminal includes a colour imagery shot 2010 and an infrared thermal imaging camera 2020, and the mobile terminal 2000 also includes at least one processor 2001, a memory 2002, at least one network interface 2004 and a user interface 2003. The components of the mobile terminal 2000 are coupled through a bus system 2005; it can be understood that the bus system 2005 realizes the connection and communication between these components. In addition to a data/address bus, the bus system 2005 also includes a power bus, a control bus and a status signal bus, but for clarity all of these buses are designated as the bus system 2005 in Fig. 20.
The user interface 2003 can include a display, a keyboard or a pointing device (for example a mouse, a trackball, a touch-sensitive pad or a touch-screen).
It can be understood that the memory 2002 in the embodiment of the present invention can be a volatile memory or a non-volatile memory, or can include both. The non-volatile memory can be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM) or a flash memory. The volatile memory can be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM can be used, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM) and direct Rambus RAM (DRDRAM). The memory 2002 of the systems and methods described herein is intended to include, without being limited to, these and any other suitable types of memory.
In some embodiments, the memory 2002 stores the following elements, executable modules or data structures, or a subset or superset of them: an operating system 20021 and application programs 20022.
The operating system 20021 contains various system programs, such as a framework layer, a core library layer and a driver layer, for realizing various basic services and processing hardware-based tasks. The application programs 20022 contain various applications, such as a media player and a browser, for realizing various application services. A program that realizes the method of the embodiment of the present invention may be contained in the application programs 20022.
In the embodiment of the present invention, the mobile terminal 2000 also includes a computer program stored on the memory 2002 and runnable on the processor 2001; when executed by the processor 2001, the computer program realizes the following steps: respectively obtaining the preview image and the infrared thermal imaging image gathered by the colour imagery shot and the infrared thermal imaging camera for the same target reference object; obtaining the face location information of the target reference object in the preview image; extracting the human body contour outline information of the target reference object from the infrared thermal imaging image; determining, according to the face location information and the human body contour outline information, the object region where the target reference object is located in the preview image; and carrying out virtualization processing on the preview image according to the object region.
The methods disclosed in the embodiments of the present invention can be applied to, or realized by, the processor 2001. The processor 2001 may be an integrated circuit chip with signal processing capability. In implementation, each step of the above method can be completed by an integrated logic circuit in hardware in the processor 2001 or by instructions in the form of software. The processor 2001 can be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can realize or execute the methods, steps and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor can be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention can be embodied directly as being completed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module can be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory or a register. The computer-readable storage medium is located in the memory 2002, and the processor 2001 reads the information in the memory 2002 and completes the steps of the above method in combination with its hardware.
It can be understood that the embodiments described herein can be realized with hardware, software, firmware, middleware, microcode or a combination of them. For a hardware realization, the processing unit can be realized in one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination of them.
For a software realization, the techniques described herein can be realized by modules (such as procedures and functions) that perform the functions described herein. The software code can be stored in a memory and executed by the processor; the memory can be realized inside or outside the processor.
Optionally, the following steps can also be realized when the computer program is executed by the processor 2001: obtaining the temperature range of the human body; determining, according to the preset corresponding relation between temperature ranges and false colour sections, the false colour section corresponding to the temperature range of the human body; determining the image whose false colour lies in that false colour section in the infrared thermal imaging image as the infrared character image of the target reference object; and extracting the human body contour outline information of the infrared character image.
Optionally, the following steps can also be realized when the computer program is executed by the processor 2001: obtaining the Coordinate Conversion amount between the preview image and the infrared thermal imaging image; carrying out Coordinate Conversion on the human body contour outline information according to the Coordinate Conversion amount to obtain the transformed profile information in the preview image; judging whether the face location information is within the range of the transformed profile information; if it is, identifying the image-region corresponding to the transformed profile information in the preview image; and determining that image-region as the object region where the target reference object is located.
Optionally, the following steps can also be realized when the computer program is executed by the processor 2001: if the preview image includes at least two object regions, carrying out virtualization processing on the non-face image-region of each of the at least two object regions respectively, where the non-face image-region of the at least two object regions is all of the image-region in the at least two object regions other than the facial image regions.
Optionally, the following steps can also be realized when the computer program is executed by the processor 2001: setting the reference man's face region in the facial image region of each object region; dividing each non-face image-region into at least two non-face image subregions according to the distance of each of its pixels from the facial image region; determining the virtualization grade of each non-face image subregion according to its distance from the reference man's face region in the facial image region; removing the facial image region from each object region; carrying out virtualization processing on each non-face image subregion according to its virtualization grade; and carrying out image synthesis of the removed facial image regions with the non-face image subregions after virtualization processing.
Optionally, the following steps can also be realized when the computer program is executed by the processor 2001: calculating the depth of view information of each pixel in the facial image region of each object region respectively; calculating the average depth of field of each facial image region respectively; and determining the subregion of each facial image region whose depth of field equals the average depth of field of that facial image region as the reference man's face region, where the average depth of field of each facial image region is the average depth of field of at least two pixels in the facial image region.
Optionally, the following steps can also be realized when the computer program is executed by the processor 2001: setting the reference picture subregion of the object region; determining the virtualization grade of each pixel in the non-object image region according to its distance from the reference picture subregion; removing the object region from the preview image; carrying out virtualization processing on each pixel of the non-object image region according to its virtualization grade; and carrying out image synthesis of the removed object region with the non-object image region after virtualization processing, where the non-object image region is all of the image-region in the preview image other than the object region.
Optionally, the following steps can also be realized when the computer program is executed by the processor 2001: calculating the depth of view information of each pixel in the object region respectively; calculating the average depth of field of at least two pixels in the object region; and determining the subregion of the object region whose depth of field equals the average depth of field as the reference picture subregion.
Optionally, the following steps can also be realized when the computer program is executed by the processor 2001: carrying out facial image identification on the preview image; and obtaining the face location information of the identified facial image.
The mobile terminal of the embodiment of the present invention includes a colour imagery shot and an infrared thermal imaging camera: the preview image and the infrared thermal imaging image gathered by the colour imagery shot and the infrared thermal imaging camera for the same target reference object are obtained respectively; the face location information of the target reference object in the preview image is obtained; the human body contour outline information of the target reference object is extracted from the infrared thermal imaging image; the object region where the target reference object is located in the preview image is determined according to the face location information and the human body contour outline information; and virtualization processing is carried out on the preview image according to the object region. Because the infrared thermal imaging camera can detect the temperature radiated by an object, the mobile terminal can still determine the human body contour outline information of the target reference object through the infrared thermal imaging camera in a complex photographed scene or under weak light; the mobile terminal is therefore not disturbed by the environment, distinguishes the target shooting character image from the background image accurately, and carries out virtualization processing on the preview image according to the object region where the target reference object is located, so that a better virtualization shooting effect is obtained.
Referring to Figure 21, Figure 21 is a structure chart of a mobile terminal provided in an embodiment of the present invention; the mobile terminal can realize the details of the image processing methods in the method embodiments of Fig. 1 and Fig. 3 and achieve the same effect. As shown in Figure 21, the mobile terminal includes a colour imagery shot 2110 and an infrared thermal imaging camera 2120, and the mobile terminal 2100 also includes a radio frequency (RF) circuit 2101, a memory 2102, an input unit 2103, a display unit 2104, a processor 2106, an audio circuit 2107, a WiFi (Wireless Fidelity) module 2108 and a power supply 2109.
The input unit 2103 can be used to receive numeral or character information input by a user and to generate signal inputs related to user settings and function control of the mobile terminal 2100. Specifically, in the embodiment of the present invention, the input unit 2103 can include a touch panel 21031, also called a touch-screen, which collects touch operations by the user on or near it (for example operations performed by the user on the touch panel 21031 with a finger, a stylus or any other suitable object or accessory) and drives a corresponding connected device according to a preset program. Optionally, the touch panel 21031 can include two parts, a touch detecting apparatus and a touch controller: the touch detecting apparatus detects the touch orientation of the user, detects the signal brought by the touch operation and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detecting apparatus, converts it into contact coordinates, sends them to the processor 2106, and receives and executes the commands sent by the processor 2106. In addition, the touch panel 21031 can be realized in various types such as resistive, capacitive, infrared and surface acoustic wave types. Besides the touch panel 21031, the input unit 2103 can also include other input devices 21032, which can include but are not limited to one or more of a physical keyboard, function keys (such as volume control buttons and a switch key), a trackball, a mouse and a joystick.
The display unit 2104 can be used to display information input by the user or information provided to the user, as well as the various menu interfaces of the mobile terminal 2100. The display unit 2104 may include a display panel 21041, which may optionally be configured in the form of an LCD or an organic light-emitting diode (OLED) panel.
It should be noted that the touch panel 21031 can cover the display panel 21041 to form a touch display screen; after the touch display screen detects a touch operation on or near it, the operation is sent to the processor 2106 to determine the type of the touch event, and the processor 2106 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen includes an application program interface display area and a common control display area. The arrangement of these two display areas is not limited; they can be arranged one above the other, side by side, or in any other arrangement that distinguishes the two areas. The application program interface display area can be used to display the interfaces of application programs; each interface can contain interface elements such as the icon of at least one application program and/or widget desktop controls, or can be an empty interface containing no content. The common control display area is used to display controls with a high usage rate, for example application icons such as a settings button, interface numbering, a scroll bar and a phone directory icon.
The processor 2106 is the control centre of the mobile terminal 2100; it connects the various parts of the whole handset through various interfaces and lines, and performs the various functions of the mobile terminal 2100 and processes data by running or executing the software programs and/or modules stored in a first memory 21021 and calling the data stored in a second memory 21022, so as to monitor the mobile terminal 2100 as a whole. Optionally, the processor 2106 may include one or more processing units.
In the embodiment of the present invention, by calling the software programs and/or modules stored in the first memory 21021 and/or the data in the second memory 21022, the processor 2106 is used for: respectively obtaining the preview image and the infrared thermal imaging image gathered by the colour imagery shot and the infrared thermal imaging camera for the same target reference object; obtaining the face location information of the target reference object in the preview image; extracting the human body contour outline information of the target reference object from the infrared thermal imaging image; determining, according to the face location information and the human body contour outline information, the object region where the target reference object is located in the preview image; and carrying out virtualization processing on the preview image according to the object region.
Optionally, by calling the software programs and/or modules stored in the first memory 21021 and/or the data in the second memory 21022, the processor 2106 is also used for: obtaining the temperature range of the human body; determining, according to the preset corresponding relation between temperature ranges and false colour sections, the false colour section corresponding to the temperature range of the human body; determining the image whose false colour lies in that false colour section in the infrared thermal imaging image as the infrared character image of the target reference object; and extracting the human body contour outline information of the infrared character image.
Optionally, by calling the software programs and/or modules stored in the first memory 21021 and/or the data in the second memory 21022, the processor 2106 is also used for: obtaining the Coordinate Conversion amount between the preview image and the infrared thermal imaging image; carrying out Coordinate Conversion on the human body contour outline information according to the Coordinate Conversion amount to obtain the transformed profile information in the preview image; judging whether the face location information is within the range of the transformed profile information; if it is, identifying the image-region corresponding to the transformed profile information in the preview image; and determining that image-region as the object region where the target reference object is located.
Optionally, by calling the software programs and/or modules stored in the first memory 21021 and/or the data in the second memory 21022, the processor 2106 is also used for: if the preview image includes at least two object regions, carrying out virtualization processing on the non-face image-region of each of the at least two object regions respectively, where the non-face image-region of the at least two object regions is all of the image-region in the at least two object regions other than the facial image regions.
Optionally, by calling the software programs and/or modules stored in the first memory 21021 and/or the data in the second memory 21022, the processor 2106 is also used for: setting the reference man's face region in the facial image region of each object region; dividing each non-face image-region into at least two non-face image subregions according to the distance of each of its pixels from the facial image region; determining the virtualization grade of each non-face image subregion according to its distance from the reference man's face region in the facial image region; removing the facial image region from each object region; carrying out virtualization processing on each non-face image subregion according to its virtualization grade; and carrying out image synthesis of the removed facial image regions with the non-face image subregions after virtualization processing.
Optionally, by calling the software programs and/or modules stored in the first memory 21021 and/or the data in the second memory 21022, the processor 2106 is also used for: calculating the depth of view information of each pixel in the facial image region of each object region respectively; calculating the average depth of field of each facial image region respectively; and determining the subregion of each facial image region whose depth of field equals the average depth of field of that facial image region as the reference man's face region, where the average depth of field of each facial image region is the average depth of field of at least two pixels in the facial image region.
Optionally, by calling the software programs and/or modules stored in the first memory 21021 and/or the data in the second memory 21022, the processor 2106 is also used for: setting the reference picture subregion of the object region; determining the virtualization grade of each pixel in the non-object image region according to its distance from the reference picture subregion; removing the object region from the preview image; carrying out virtualization processing on each pixel of the non-object image region according to its virtualization grade; and carrying out image synthesis of the removed object region with the non-object image region after virtualization processing, where the non-object image region is all of the image-region in the preview image other than the object region.
Optionally, by calling the software programs and/or modules stored in the first memory 21021 and/or the data in the second memory 21022, the processor 2106 is also used for: calculating the depth of view information of each pixel in the object region respectively; calculating the average depth of field of at least two pixels in the object region; and determining the subregion of the object region whose depth of field equals the average depth of field as the reference picture subregion.
Optionally, by calling the software programs and/or modules stored in the first memory 21021 and/or the data in the second memory 21022, the processor 2106 is also used for: carrying out facial image identification on the preview image; and obtaining the face location information of the identified facial image.
The mobile terminal of the embodiment of the present invention includes a colour imagery shot and an infrared thermal imaging camera: the preview image and the infrared thermal imaging image gathered by the colour imagery shot and the infrared thermal imaging camera for the same target reference object are obtained respectively; the face location information of the target reference object in the preview image is obtained; the human body contour outline information of the target reference object is extracted from the infrared thermal imaging image; the object region where the target reference object is located in the preview image is determined according to the face location information and the human body contour outline information; and virtualization processing is carried out on the preview image according to the object region. Because the infrared thermal imaging camera can detect the temperature radiated by an object, the mobile terminal can still determine the human body contour outline information of the target reference object through the infrared thermal imaging camera in a complex photographed scene or under weak light; the mobile terminal is therefore not disturbed by the environment, distinguishes the target shooting character image from the background image accurately, and carries out virtualization processing on the preview image according to the object region where the target reference object is located, so that a better virtualization shooting effect is obtained.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be realized with electronic hardware, or with a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical scheme. Skilled persons may use different methods to realize the described functions for each specific application, but such realization should not be considered as going beyond the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, devices and units described above may refer to the corresponding processes in the preceding method embodiments and are not repeated here.
In the embodiments provided in this application, it should be understood that the disclosed devices and methods can be realized in other ways. For example, the device embodiments described above are only schematic: the division of the units is only a division by logical function, and other divisions are possible in actual realization; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed can be indirect couplings or communication connections through some interfaces, devices or units, and can be electrical, mechanical or of other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they can be located in one place or distributed over multiple network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the scheme of this embodiment.
In addition, the functional units in the embodiments of the present invention can be integrated in one processing unit, or each unit can exist physically alone, or two or more units can be integrated in one unit.
If the functions are realized in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical scheme of the present invention in essence, or the part that contributes to the prior art, or part of the technical scheme, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which can be a personal computer, a server, a network device or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a mobile hard disk, a ROM, a RAM, a magnetic disc or a compact disc.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any person familiar with the art can readily think of changes or substitutions within the technical scope disclosed by the present invention, and these should all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention should be defined by the scope of the claims.

Claims (20)

  1. An image processing method, applied to a mobile terminal, characterised in that the mobile terminal includes a colour imagery shot and an infrared thermal imaging camera, and the method includes:
    Respectively obtaining the preview image and the infrared thermal imaging image gathered by the colour imagery shot and the infrared thermal imaging camera for the same target reference object;
    Obtaining the face location information of the target reference object in the preview image;
    Extracting the human body contour outline information of the target reference object from the infrared thermal imaging image;
    Determining, according to the face location information and the human body contour outline information, the object region where the target reference object is located in the preview image;
    Carrying out virtualization processing on the preview image according to the object region.
  2. The method according to claim 1, characterised in that the step of extracting the human body contour outline information of the target reference object from the infrared thermal imaging image includes:
    Obtaining the temperature range of the human body;
    Determining, according to the preset corresponding relation between temperature ranges and false colour sections, the false colour section corresponding to the temperature range of the human body;
    Determining the image whose false colour lies in the false colour section corresponding to the temperature range of the human body in the infrared thermal imaging image as the infrared character image of the target reference object;
    Extracting the human body contour outline information of the infrared character image.
  3. The method according to claim 1, characterised in that the step of determining, according to the face location information and the human body contour outline information, the object region where the target reference object is located in the preview image includes:
    Obtaining the Coordinate Conversion amount between the preview image and the infrared thermal imaging image;
    Carrying out Coordinate Conversion on the human body contour outline information according to the Coordinate Conversion amount to obtain the transformed profile information in the preview image;
    Judging whether the face location information is within the range of the transformed profile information;
    If the face location information is within the range of the transformed profile information, identifying the image-region corresponding to the transformed profile information in the preview image;
    Determining the image-region corresponding to the transformed profile information as the object region where the target reference object is located.
  4. The method according to claim 3, characterised in that, after the step of determining, according to the face location information and the human body contour outline information, the object region where the target reference object is located in the preview image, the method also includes:
    If the preview image includes at least two object regions, carrying out virtualization processing on the non-face image-region of each of the at least two object regions respectively;
    Wherein the non-face image-region of the at least two object regions is all of the image-region in the at least two object regions other than the facial image regions.
  5. The method according to claim 4, characterised in that the step of carrying out virtualization processing on the non-face image-region of each of the at least two object regions respectively includes:
    Setting the reference man's face region in the facial image region of each object region;
    Dividing each non-face image-region into at least two non-face image subregions according to the distance of each of its pixels from the facial image region;
    Determining the virtualization grade of each non-face image subregion according to its distance from the reference man's face region in the facial image region;
    Removing the facial image region from each object region;
    Carrying out virtualization processing on each non-face image subregion according to its virtualization grade;
    Carrying out image synthesis of the removed facial image regions with the non-face image subregions after virtualization processing.
  6. The method according to claim 5, characterised in that the step of setting the reference man's face region in the facial image region of each object region includes:
    Calculating the depth of view information of each pixel in the facial image region of each object region respectively;
    Calculating the average depth of field of each facial image region respectively;
    Determining the subregion of each facial image region whose depth of field equals the average depth of field of that facial image region as the reference man's face region;
    Wherein the average depth of field of each facial image region is the average depth of field of at least two pixels in the facial image region.
  7. The method according to any one of claims 1 to 6, characterised in that the step of carrying out virtualization processing on the preview image according to the object region includes:
    Setting the reference picture subregion of the object region;
    Determining the virtualization grade of each pixel in the non-object image region according to its distance from the reference picture subregion;
    Removing the object region from the preview image;
    Carrying out virtualization processing on each pixel of the non-object image region according to its virtualization grade;
    Carrying out image synthesis of the removed object region with the non-object image region after virtualization processing;
    Wherein the non-object image region is all of the image-region in the preview image other than the object region.
  8. The method according to claim 7, characterised in that the step of setting the reference picture subregion of the object region includes:
    Calculating the depth of view information of each pixel in the object region respectively;
    Calculating the average depth of field of at least two pixels in the object region;
    Determining the subregion of the object region whose depth of field equals the average depth of field as the reference picture subregion.
  9. The method according to any one of claims 1 to 6, characterised in that the step of obtaining the face location information of the target reference object in the preview image includes:
    Carrying out facial image identification on the preview image;
    Obtaining the face location information of the identified facial image.
  10. A mobile terminal, characterised in that the mobile terminal includes a colour imagery shot and an infrared thermal imaging camera, and the mobile terminal also includes:
    A first acquisition module, for respectively obtaining the preview image and the infrared thermal imaging image gathered by the colour imagery shot and the infrared thermal imaging camera for the same target reference object;
    A second acquisition module, for obtaining the face location information of the target reference object in the preview image;
    An extraction module, for extracting the human body contour outline information of the target reference object from the infrared thermal imaging image;
    A determining module, for determining, according to the face location information and the human body contour outline information, the object region where the target reference object is located in the preview image;
    A first virtualization processing module, for carrying out virtualization processing on the preview image according to the object region.
  11. The mobile terminal according to claim 10, characterised in that the extraction module includes:
    A first acquisition unit, for obtaining the temperature range of the human body;
    A first determining unit, for determining, according to the preset corresponding relation between temperature ranges and false colour sections, the false colour section corresponding to the temperature range of the human body;
    A second determining unit, for determining the image whose false colour lies in the false colour section corresponding to the temperature range of the human body in the infrared thermal imaging image as the infrared character image of the target reference object;
    An extraction unit, for extracting the human body contour outline information of the infrared character image.
  12. The mobile terminal according to claim 10, characterised in that the determining module includes:
    A second acquisition unit, for obtaining the Coordinate Conversion amount between the preview image and the infrared thermal imaging image;
    A converting unit, for carrying out Coordinate Conversion on the human body contour outline information according to the Coordinate Conversion amount to obtain the transformed profile information in the preview image;
    A judging unit, for judging whether the face location information is within the range of the transformed profile information;
    A first recognition unit, for identifying, if the face location information is within the range of the transformed profile information, the image-region corresponding to the transformed profile information in the preview image;
    A third determining unit, for determining the image-region corresponding to the transformed profile information as the object region where the target reference object is located.
  13. The mobile terminal according to claim 12, characterised in that the mobile terminal also includes:
    A second virtualization processing module, for carrying out, if the preview image includes at least two object regions, virtualization processing on the non-face image-region of each of the at least two object regions respectively;
    Wherein the non-face image-region of the at least two object regions is all of the image-region in the at least two object regions other than the facial image regions.
  14. The mobile terminal according to claim 13, characterised in that the second virtualization processing module includes:
    A first setting unit, for setting the reference man's face region in the facial image region of each object region;
    A division unit, for dividing each non-face image-region into at least two non-face image subregions according to the distance of each of its pixels from the facial image region;
    A fourth determining unit, for determining the virtualization grade of each non-face image subregion according to its distance from the reference man's face region in the facial image region;
    A first removing unit, for removing the facial image region from each object region;
    A first virtualization processing unit, for carrying out virtualization processing on each non-face image subregion according to its virtualization grade;
    A first image composing unit, for carrying out image synthesis of the removed facial image regions with the non-face image subregions after virtualization processing.
  15. The mobile terminal according to claim 14, wherein the first setting unit comprises:
    a first computing sub-unit, configured to compute the depth-of-field information of each pixel in the face image region of each object region respectively;
    a second computing sub-unit, configured to compute the average depth of field of each face image region respectively;
    a first determining sub-unit, configured to determine, as the reference face region, the sub-region of each face image region whose depth of field is the average depth of field of that face image region;
    wherein the average depth of field of each face image region is the average depth of field of at least two pixels in that face image region.
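Read literally, claim 15's reference face region is the set of face pixels whose depth of field equals the average depth of the face region. A sketch follows, assuming a per-pixel depth map is available and using a small tolerance, since an exact equality test would rarely match real depth values; the tolerance is an assumption, not part of the claim.

```python
import numpy as np

def reference_face_region(depth_map, face_mask, tol=0.05):
    """Mask of face pixels whose depth of field is (approximately) the average
    depth of the face region; tol is an assumed relative tolerance."""
    face_depths = depth_map[face_mask > 0]
    mean_depth = face_depths.mean()          # average depth of field of the face
    near_average = np.abs(depth_map - mean_depth) <= tol * mean_depth
    return (face_mask > 0) & near_average
```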
  16. The mobile terminal according to any one of claims 10 to 15, wherein the first virtualization processing module comprises:
    a second setting unit, configured to set a reference image sub-region of the object region;
    a fifth determining unit, configured to determine the virtualization grade of each pixel in the non-object image region according to the distance between that pixel and the reference image sub-region;
    a second removing unit, configured to remove the object region from the preview image;
    a second virtualization processing unit, configured to perform virtualization processing on each pixel in the non-object image region according to its virtualization grade;
    a second image synthesis unit, configured to synthesize the removed object region with the virtualization-processed non-object image region;
    wherein the non-object image region is the entire image region of the preview image other than the object region.
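Claim 16's background blurring can be illustrated as below: every pixel outside the object region receives a blur grade that grows with its distance from the reference image sub-region, and the sharp object region is composited back afterwards. The grade-to-kernel mapping is an assumption, not taken from the patent.

```python
import cv2
import numpy as np

def blur_background(preview, object_mask, reference_mask, max_ksize=21):
    """Blur everything outside the object region with per-pixel strength that
    grows with distance from the reference image sub-region, then composite the
    sharp object region back."""
    dist = cv2.distanceTransform((reference_mask == 0).astype(np.uint8), cv2.DIST_L2, 5)
    max_dist = max(float(dist.max()), 1.0)
    # Quantise distance into blur grades 1..(max_ksize - 1) // 2.
    grades = np.clip((dist / max_dist * ((max_ksize - 1) // 2)).astype(int), 1, None)
    out = preview.copy()
    background = object_mask == 0                    # the non-object image region
    for g in np.unique(grades[background]):
        ksize = 2 * int(g) + 1                       # odd Gaussian kernel for this grade
        blurred = cv2.GaussianBlur(preview, (ksize, ksize), 0)
        band = background & (grades == g)
        out[band] = blurred[band]
    # Image synthesis step: the object region itself stays sharp.
    out[object_mask > 0] = preview[object_mask > 0]
    return out
```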
  17. The mobile terminal according to claim 16, wherein the second setting unit comprises:
    a third computing sub-unit, configured to compute the depth-of-field information of each pixel in the object region respectively;
    a fourth computing sub-unit, configured to compute the average depth of field of at least two pixels in the object region;
    a second determining sub-unit, configured to determine, as the reference image sub-region, the sub-region of the object region whose depth of field is the average depth of field.
  18. The mobile terminal according to any one of claims 10 to 15, wherein the second acquisition module comprises:
    a second recognition unit, configured to perform face image recognition on the preview image;
    a third acquisition unit, configured to acquire the face location information of the recognized face image.
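Claim 18 only requires that face image recognition be run on the preview image; the patent does not name a specific detector. As one possible stand-in, OpenCV's stock Haar cascade returns (x, y, w, h) rectangles that can serve as the face location information.

```python
import cv2

def detect_faces(preview_bgr):
    """Return (x, y, w, h) rectangles usable as the face location information.

    The cascade path assumes the pip opencv-python distribution, which ships the
    stock Haar cascades under cv2.data.haarcascades.
    """
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```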
  19. A mobile terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the steps of the image processing method according to any one of claims 1 to 9 are implemented.
  20. A computer-readable storage medium having a computer program stored thereon, wherein when the computer program is executed by a processor, the steps of the image processing method according to any one of claims 1 to 9 are implemented.
CN201710576627.5A 2017-07-14 2017-07-14 A kind of image processing method and mobile terminal Active CN107395965B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710576627.5A CN107395965B (en) 2017-07-14 2017-07-14 A kind of image processing method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710576627.5A CN107395965B (en) 2017-07-14 2017-07-14 A kind of image processing method and mobile terminal

Publications (2)

Publication Number Publication Date
CN107395965A true CN107395965A (en) 2017-11-24
CN107395965B CN107395965B (en) 2019-11-29

Family

ID=60340297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710576627.5A Active CN107395965B (en) 2017-07-14 2017-07-14 A kind of image processing method and mobile terminal

Country Status (1)

Country Link
CN (1) CN107395965B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751405A (en) * 2015-03-11 2015-07-01 百度在线网络技术(北京)有限公司 Method and device for blurring image
CN104780313A (en) * 2015-03-26 2015-07-15 广东欧珀移动通信有限公司 Image processing method and mobile terminal
CN104966266A (en) * 2015-06-04 2015-10-07 福建天晴数码有限公司 Method and system to automatically blur body part
CN105516586A (en) * 2015-12-01 2016-04-20 小米科技有限责任公司 Picture shooting method, device and system
CN105678310A (en) * 2016-02-03 2016-06-15 北京京东方多媒体科技有限公司 Infrared thermal image contour extraction method and device
CN105933589A (en) * 2016-06-28 2016-09-07 广东欧珀移动通信有限公司 Image processing method and terminal
CN106101544A (en) * 2016-06-30 2016-11-09 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN106331492A (en) * 2016-08-29 2017-01-11 广东欧珀移动通信有限公司 Image processing method and terminal
CN106446873A (en) * 2016-11-03 2017-02-22 北京旷视科技有限公司 Face detection method and device
CN106454118A (en) * 2016-11-18 2017-02-22 上海传英信息技术有限公司 Picture blurring method and mobile terminal

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3493523A3 (en) * 2017-11-30 2019-08-07 Guangdong Oppo Mobile Telecommunications Corp., Ltd Method and apparatus for blurring preview picture and storage medium
WO2019105158A1 (en) * 2017-11-30 2019-06-06 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and apparatus for blurring preview picture and storage medium
US10674069B2 (en) 2017-11-30 2020-06-02 Guangdong Oppo Mobile Telecommunications Corp. Ltd. Method and apparatus for blurring preview picture and storage medium
CN108053363A (en) * 2017-11-30 2018-05-18 广东欧珀移动通信有限公司 Background blurring processing method, device and equipment
WO2019105261A1 (en) * 2017-11-30 2019-06-06 Oppo广东移动通信有限公司 Background blurring method and apparatus, and device
CN107995425A (en) * 2017-12-11 2018-05-04 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN107995425B (en) * 2017-12-11 2019-08-20 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN108093181A (en) * 2018-01-16 2018-05-29 奇酷互联网络科技(深圳)有限公司 Picture shooting method, device, readable storage medium storing program for executing and mobile terminal
CN108769505A (en) * 2018-03-30 2018-11-06 联想(北京)有限公司 A kind of image procossing set method and electronic equipment
CN108848300A (en) * 2018-05-08 2018-11-20 百度在线网络技术(北京)有限公司 Method and apparatus for output information
CN109241947A (en) * 2018-10-15 2019-01-18 盎锐(上海)信息科技有限公司 Information processing unit and method for the monitoring of stream of people's momentum
CN110047126A (en) * 2019-04-25 2019-07-23 北京字节跳动网络技术有限公司 Render method, apparatus, electronic equipment and the computer readable storage medium of image
CN110047126B (en) * 2019-04-25 2023-11-24 北京字节跳动网络技术有限公司 Method, apparatus, electronic device, and computer-readable storage medium for rendering image
CN111614932A (en) * 2019-05-14 2020-09-01 北京精准沟通传媒科技股份有限公司 Data processing method and device
CN112351268A (en) * 2019-08-07 2021-02-09 杭州海康微影传感科技有限公司 Thermal imaging camera burn detection method and device and electronic equipment
CN112351268B (en) * 2019-08-07 2022-09-02 杭州海康微影传感科技有限公司 Thermal imaging camera burn detection method and device and electronic equipment
CN110855868A (en) * 2019-11-25 2020-02-28 李峥炜 Human image enhanced camera
CN110996078A (en) * 2019-11-25 2020-04-10 深圳市创凯智能股份有限公司 Image acquisition method, terminal and readable storage medium
CN113014791A (en) * 2019-12-20 2021-06-22 中兴通讯股份有限公司 Image generation method and device
CN113014791B (en) * 2019-12-20 2023-09-19 中兴通讯股份有限公司 Image generation method and device
CN113138387A (en) * 2020-01-17 2021-07-20 北京小米移动软件有限公司 Image acquisition method and device, mobile terminal and storage medium
CN113138387B (en) * 2020-01-17 2024-03-08 北京小米移动软件有限公司 Image acquisition method and device, mobile terminal and storage medium
CN114088207A (en) * 2020-07-17 2022-02-25 北京京东尚科信息技术有限公司 Temperature detection method and system
CN112217992A (en) * 2020-09-29 2021-01-12 Oppo(重庆)智能科技有限公司 Image blurring method, image blurring device, mobile terminal, and storage medium
CN112907890A (en) * 2020-12-08 2021-06-04 泰州市朗嘉馨网络科技有限公司 Automatic change protection platform
WO2022198821A1 (en) * 2021-03-25 2022-09-29 深圳市商汤科技有限公司 Method and apparatus for performing matching between human face and human body, and electronic device, storage medium and program
CN113301233A (en) * 2021-05-21 2021-08-24 南阳格瑞光电科技股份有限公司 Double-spectrum imaging system
CN113301233B (en) * 2021-05-21 2023-02-03 南阳格瑞光电科技股份有限公司 Double-spectrum imaging system
CN113965695A (en) * 2021-09-07 2022-01-21 福建库克智能科技有限公司 Method, system, device, display unit and medium for image display
CN114885101A (en) * 2022-05-31 2022-08-09 维沃移动通信有限公司 Image generation method and device
CN116469519B (en) * 2023-03-23 2024-01-26 北京鹰之眼智能健康科技有限公司 Human body acupoint obtaining method based on infrared image
CN116469519A (en) * 2023-03-23 2023-07-21 北京鹰之眼智能健康科技有限公司 Human body acupoint obtaining method based on infrared image

Also Published As

Publication number Publication date
CN107395965B (en) 2019-11-29

Similar Documents

Publication Publication Date Title
CN107395965B (en) A kind of image processing method and mobile terminal
CN105847674B (en) A kind of preview image processing method and mobile terminal based on mobile terminal
CN107507239B (en) A kind of image partition method and mobile terminal
CN107197170A (en) A kind of exposal control method and mobile terminal
CN106060419B (en) A kind of photographic method and mobile terminal
CN102843509B (en) Image processing device and image processing method
CN106027900A (en) Photographing method and mobile terminal
CN107679482A (en) Solve lock control method and Related product
CN105933589A (en) Image processing method and terminal
CN107172296A (en) A kind of image capturing method and mobile terminal
CN106027952B (en) Camera chain is estimated relative to the automatic direction of vehicle
CN107277481A (en) A kind of image processing method and mobile terminal
CN107832675A (en) Processing method of taking pictures and Related product
CN110139033A (en) Camera control method and Related product
CN106664465A (en) System for creating and reproducing augmented reality contents, and method using same
CN106506962A (en) A kind of image processing method and mobile terminal
JP5756322B2 (en) Information processing program, information processing method, information processing apparatus, and information processing system
CN106096043B (en) A kind of photographic method and mobile terminal
CN108776822B (en) Target area detection method, device, terminal and storage medium
CN107172361A (en) The method and mobile terminal of a kind of pan-shot
CN108495032A (en) Image processing method, device, storage medium and electronic equipment
CN106791438A (en) A kind of photographic method and mobile terminal
JP2012221261A (en) Information processing program, information processing method, information processor and information processing system
CN112532881B (en) Image processing method and device and electronic equipment
CN107277351A (en) The processing method and mobile terminal of a kind of view data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant