CN107395965B - Image processing method and mobile terminal - Google Patents

Image processing method and mobile terminal

Info

Publication number
CN107395965B
CN107395965B
Authority
CN
China
Prior art keywords
image
region
face
blur
thermal imaging
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710576627.5A
Other languages
Chinese (zh)
Other versions
CN107395965A (en)
Inventor
耿筝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201710576627.5A priority Critical patent/CN107395965B/en
Publication of CN107395965A publication Critical patent/CN107395965A/en
Application granted granted Critical
Publication of CN107395965B publication Critical patent/CN107395965B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body


Abstract

An embodiment of the present invention provides an image processing method and a mobile terminal. The mobile terminal includes a color camera and an infrared thermal imaging camera. The method includes: obtaining a preview image and an infrared thermal imaging image acquired by the color camera and the infrared thermal imaging camera, respectively, of the same target subject; obtaining face location information of the target subject in the preview image; extracting human body contour information of the target subject from the infrared thermal imaging image; determining, according to the face location information and the human body contour information, the target image region where the target subject is located in the preview image; and performing blur processing on the preview image according to the target image region. Because the infrared thermal imaging camera forms an image from the temperature radiated by objects, the image of the target person can be distinguished from the background image accurately and without interference from the environment, thereby achieving a better blur shooting effect.

Description

Image processing method and mobile terminal
Technical field
The present invention relates to the field of communication technology, and in particular, to an image processing method and a mobile terminal.
Background technique
With the increasing popularity of mobile terminals and the continuous progress of camera technology, users often use mobile terminals to take photos and share them with friends through social networking platforms, so the shooting effect of photos has become a focus of user attention. Among the available shooting modes, a blur (bokeh) shooting mode can produce a photo with a sharp subject and a blurred background. At present, mobile terminals implement blur processing of photos based on dual cameras. In practice, however, shooting scenes vary widely; for complex scenes or poorly lit environments, existing mobile terminals distinguish the photographed target from the background with low accuracy, which results in a poor blur effect in the captured image.
Summary of the invention
Embodiments of the present invention provide an image processing method and a mobile terminal, to solve the problem that, in complex shooting scenes or poorly lit environments, existing mobile terminals distinguish the photographed target from the background with low accuracy, resulting in a poor blur effect in the captured image.
In a first aspect, an embodiment of the present invention provides an image processing method applied to a mobile terminal, where the mobile terminal includes a color camera and an infrared thermal imaging camera, and the method includes:
obtaining a preview image and an infrared thermal imaging image acquired by the color camera and the infrared thermal imaging camera, respectively, of the same target subject;
obtaining face location information of the target subject in the preview image;
extracting human body contour information of the target subject from the infrared thermal imaging image;
determining, according to the face location information and the human body contour information, the target image region where the target subject is located in the preview image; and
performing blur processing on the preview image according to the target image region.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes a color camera and an infrared thermal imaging camera, and the mobile terminal further includes:
a first obtaining module, configured to obtain a preview image and an infrared thermal imaging image acquired by the color camera and the infrared thermal imaging camera, respectively, of the same target subject;
a second obtaining module, configured to obtain face location information of the target subject in the preview image;
an extraction module, configured to extract human body contour information of the target subject from the infrared thermal imaging image;
a determining module, configured to determine, according to the face location information and the human body contour information, the target image region where the target subject is located in the preview image; and
a first blur processing module, configured to perform blur processing on the preview image according to the target image region.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the image processing method in the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the image processing method in the embodiments of the present invention.
In this way, in the embodiments of the present invention, a preview image and an infrared thermal imaging image acquired by the color camera and the infrared thermal imaging camera, respectively, of the same target subject are obtained; face location information of the target subject in the preview image is obtained; human body contour information of the target subject is extracted from the infrared thermal imaging image; the target image region where the target subject is located in the preview image is determined according to the face location information and the human body contour information; and blur processing is performed on the preview image according to the target image region. Because the infrared thermal imaging camera can detect the temperature radiated by objects, the mobile terminal can still determine the human body contour of the target subject through the infrared thermal imaging camera even in a complex shooting scene or a poorly lit environment. The mobile terminal can therefore distinguish the image of the target person from the background image accurately, without interference from the environment, and perform blur processing on the preview image according to the target image region where the target subject is located, thereby achieving a better blur shooting effect.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings required in the embodiments or the prior art description. Apparently, the accompanying drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from them without creative effort.
Fig. 1 is a first flowchart of an image processing method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the human body contour of a target subject in an infrared thermal imaging image according to an embodiment of the present invention;
Fig. 3 is a second flowchart of an image processing method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of an infrared person image displayed in an infrared thermal imaging image according to an embodiment of the present invention;
Fig. 5 is a reference model diagram for dual-camera coordinate conversion according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of the positions of the face image and the converted human body contour in a preview image according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the positions of the face image regions and non-face image regions of two target image regions in a preview image according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of the region division of two non-face image regions according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of a depth-of-field computation model according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of the positions of the target image region and the non-target image region in a preview image according to an embodiment of the present invention;
Fig. 11 is a first structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 12 is a second structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 13 is a third structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 14 is a fourth structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 15 is a fifth structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 16 is a sixth structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 17 is a seventh structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 18 is an eighth structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 19 is a ninth structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 20 is a tenth structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 21 is an eleventh structural diagram of a mobile terminal according to an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention. The method is applied to a mobile terminal that includes a color camera and an infrared thermal imaging camera. As shown in Fig. 1, the method includes the following steps:
Step 101: obtain a preview image and an infrared thermal imaging image acquired by the color camera and the infrared thermal imaging camera, respectively, of the same target subject.
The preview image is the preview picture acquired by the color camera. Because a relatively sharp picture can be previewed through the color camera, a photo taken by the color camera can be guaranteed to have good image quality.
The infrared thermal imaging camera uses infrared thermal imaging technology: an infrared detector and an optical imaging lens receive the infrared radiation energy distribution pattern of the measured target and project it onto the photosensitive elements of the infrared detector, thereby obtaining an infrared thermal image that corresponds to the heat distribution field on the surface of the object.
In the embodiments of the present invention, the temperature radiated by an object is reflected onto the image, each temperature is displayed with a false color, and these false colors are stitched together to form the infrared thermal imaging image. Compared with visible light, infrared thermal imaging is not affected by the strength of ambient light, so a clear infrared thermal imaging image can still be formed in a complex scene or a dark environment.
It should be noted that the positions of the color camera and the infrared thermal imaging camera are fixed relative to each other, and the distance between them is required to be as small as possible, to ensure that the color camera and the infrared thermal imaging camera can shoot the same target subject. In this way, when subsequent image processing or fusion is performed by combining the preview image and the infrared thermal imaging image, good image quality can be obtained.
Step 102: obtain the face location information of the target subject in the preview image.
Obtaining the face location information of the target subject in the preview image may be: first recognizing the face image in the preview image, thereby determining the position of the face of the target subject, and then obtaining the face location information of the target subject, where the face location information includes the x-direction and y-direction coordinate values of each pixel in the region where the face image is located.
In this way, in this step, by obtaining the face location information of the target subject in the preview image, the person image in the preview image can be preliminarily distinguished from other images, and to some extent the mobile terminal can also be prevented from misidentifying other images in the infrared thermal imaging image as the infrared person image corresponding to the target subject.
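The following is a minimal sketch of this face-location step, assuming the preview frame is available as an OpenCV BGR array; the Haar-cascade detector and the file names are illustrative choices, not part of the claimed method.

```python
# Illustrative sketch only (not the patented implementation): locating the face
# region in the color preview image with an off-the-shelf OpenCV Haar cascade.
import cv2

def get_face_locations(preview_bgr):
    """Return a list of (x, y, w, h) face rectangles found in the preview image."""
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(map(int, rect)) for rect in faces]

if __name__ == "__main__":
    preview = cv2.imread("preview.jpg")   # hypothetical preview frame on disk
    print(get_face_locations(preview))    # e.g. [(x, y, w, h), ...]
```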
Step 103: extract the human body contour information of the target subject from the infrared thermal imaging image.
In the embodiments of the present invention, because the temperature radiated by the human body differs from the temperatures radiated by other objects, the temperature range of human body radiation can be displayed with a specific false color or false color interval, so that the human body contour information of the target subject can be determined by identifying the false color or false color interval corresponding to the human body temperature range in the infrared thermal imaging image.
Specifically, since human body temperature is generally around 36.5 °C and the temperatures of different body parts also differ slightly, a human body temperature range can be predefined, for example 36 °C to 38 °C. Each temperature may also be predefined to correspond to a different false color, or the human body temperature range may be predefined to correspond to a false color; in this way, the false color or false color interval corresponding to the human body temperature range can be determined according to the predefined human body temperature range and the correspondence between temperatures and false colors.
Extracting the human body contour information of the target subject from the infrared thermal imaging image may be: extracting, from the infrared thermal imaging image according to the false color or false color interval corresponding to the human body temperature range, the image of the corresponding false color or false color interval as the infrared person image of the target subject, so that the human body contour information of the target subject can be determined, where the human body contour information can be obtained by extracting the coordinate values of the edge pixels from the location information of the infrared person image. For example, referring to Fig. 2, which is a schematic diagram of the human body contour of a target subject in an infrared thermal imaging image according to an embodiment of the present invention, the coordinate values of the edge pixels can be extracted from region A to determine the human body contour information of the target subject.
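Under the assumption that the thermal frame is available as a per-pixel temperature array in degrees Celsius (rather than only as a false-color rendering), a minimal sketch of this contour-extraction step could look as follows; the 36-38 °C range comes from the example above, and the OpenCV contour routine is an illustrative choice.

```python
# Illustrative sketch: extract the human body contour from a thermal frame by
# thresholding the predefined human temperature range (36-38 degC in the example).
import cv2
import numpy as np

def extract_body_contour(temp_c, t_min=36.0, t_max=38.0):
    """temp_c: HxW float array of temperatures; returns the largest contour or None."""
    mask = ((temp_c >= t_min) & (temp_c <= t_max)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)   # keep the dominant warm region

if __name__ == "__main__":
    fake_thermal = np.full((240, 320), 25.0)    # synthetic 25 degC background
    fake_thermal[60:200, 120:200] = 36.8        # warm block standing in for a person
    contour = extract_body_contour(fake_thermal)
    print(None if contour is None else contour.shape)
```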
In this way, in this step, the human body contour information of the target subject in the infrared thermal imaging image can be determined through the false color or false color interval corresponding to human body temperature, so that the mobile terminal can accurately distinguish the infrared person image from other images without interference from the environment.
Step 104: determine, according to the face location information and the human body contour information, the target image region where the target subject is located in the preview image.
In the embodiments of the present invention, the coordinate translation amount between the preview image and the infrared thermal imaging image can be calculated from the distance between the color camera and the infrared thermal imaging camera and from the focal length. Therefore, determining the target image region where the target subject is located in the preview image according to the face location information and the human body contour information may be: converting the human body contour information of the target subject in the infrared thermal imaging image into contour information in the preview image that corresponds to the infrared person image, determining the corresponding person contour according to that contour information, then mutually calibrating the person contour and the face location of the target subject in the preview image according to the face location information of the target subject in the preview image, and finally determining the target image region where the target subject is located according to the calibrated contour.
For example, if a preview is performed for only one target subject through the color camera but more than one face image is recognized in the preview picture (for example, a model's face on a background poster is recognized), the misidentified face image of the non-target subject in the preview image can be corrected according to the human body contour information determined from the infrared thermal imaging image. Conversely, when the infrared thermal imaging camera mistakenly images another object with a temperature close to that of the human body (for example, an animal) as a human body, the extracted human body contour information may include the contour of the animal, and the extracted human body contour information can then be calibrated in combination with the face image position determined in the preview image. In this way, in this step, by combining the infrared thermal imaging camera with the face location information of the target subject determined in the preview image, the mobile terminal can be prevented from also identifying a face image in the background area of the preview image as the face of the target subject, which would otherwise cause inaccurate separation of the background and the target person. Moreover, in a poorly lit environment the face image may not be recognizable through the color camera at all; in the embodiments of the present invention, the human body contour of the target subject in the preview image is determined by additionally detecting the temperature radiated by the human body with the infrared thermal imaging camera, which solves the problem that the person image cannot be accurately identified through the color camera because of the lighting.
Step 105: perform blur processing on the preview image according to the target image region.
In the embodiments of the present invention, after the target image region where the target subject is located in the preview image is determined, blur processing can be performed on the preview image. Specifically, the target image region may be removed from the preview image, and blur processing may be performed on the remaining preview image at a certain blur level. For example, the blur level of each pixel in the non-target image region may be set according to the distance from that pixel to a certain central point of the target image region, and each pixel may then be blurred according to its blur level.
After blur processing is performed on the preview image with the target image region removed, the removed image of the target image region is composited with the blurred preview image; that is, the removed image of the target image region is put back into the blurred preview image at the position it occupied before removal.
In this way, in the embodiments of the present invention, when the mobile terminal photographs a person through the color camera and the infrared thermal imaging camera, different levels of blur processing can be applied to the background area outside the target image region in the preview image acquired by the color camera, thereby achieving a blur shooting effect in which the image of the target person is sharp and the background image is blurred.
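The following sketch illustrates, under simplifying assumptions, the remove / blur / composite flow of step 105: the target image region is kept sharp while the rest of the preview image is blurred with a single Gaussian kernel, which stands in for the graded blur levels described elsewhere in this document.

```python
# Illustrative sketch: blur everything outside the target image region and
# composite the sharp target region back in.
import cv2
import numpy as np

def blur_background(preview_bgr, target_mask, ksize=31):
    """preview_bgr: HxWx3 image; target_mask: HxW uint8 mask (255 inside the target region)."""
    blurred = cv2.GaussianBlur(preview_bgr, (ksize, ksize), 0)
    mask3 = cv2.merge([target_mask] * 3) > 0
    return np.where(mask3, preview_bgr, blurred)   # sharp subject, blurred background

if __name__ == "__main__":
    preview = np.random.randint(0, 255, (240, 320, 3), np.uint8)  # stand-in preview frame
    mask = np.zeros((240, 320), np.uint8)
    cv2.rectangle(mask, (120, 60), (200, 200), 255, -1)           # hypothetical target region
    out = blur_background(preview, mask)
    print(out.shape, out.dtype)
```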
In this embodiment, the mobile terminal may be any device with a storage medium, for example a terminal device such as a computer, a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), or a wearable device.
The image processing method of this embodiment of the present invention is applied to a mobile terminal that includes a color camera and an infrared thermal imaging camera: a preview image and an infrared thermal imaging image acquired by the color camera and the infrared thermal imaging camera, respectively, of the same target subject are obtained; face location information of the target subject in the preview image is obtained; human body contour information of the target subject is extracted from the infrared thermal imaging image; the target image region where the target subject is located in the preview image is determined according to the face location information and the human body contour information; and blur processing is performed on the preview image according to the target image region. Because the infrared thermal imaging camera can detect the temperature radiated by objects, the mobile terminal can still determine the human body contour of the target subject through the infrared thermal imaging camera even in a complex shooting scene or a poorly lit environment, and can therefore distinguish the image of the target person from the background image accurately, without interference from the environment, and perform blur processing on the preview image according to the target image region where the target subject is located, thereby achieving a better blur shooting effect.
Referring to Fig. 3, Fig. 3 is a flowchart of an image processing method according to an embodiment of the present invention, applied to a mobile terminal that includes a color camera and an infrared thermal imaging camera. Based on the embodiment shown in Fig. 1, this embodiment refines the step of extracting the human body contour information of the target subject from the infrared thermal imaging image. As shown in Fig. 3, the method includes the following steps:
Step 301: obtain a preview image and an infrared thermal imaging image acquired by the color camera and the infrared thermal imaging camera, respectively, of the same target subject.
The preview image is the preview picture acquired by the color camera. Because a relatively sharp picture can be previewed through the color camera, a photo taken by the color camera can be guaranteed to have good image quality.
The infrared thermal imaging camera uses infrared thermal imaging technology: an infrared detector and an optical imaging lens receive the infrared radiation energy distribution pattern of the measured target and project it onto the photosensitive elements of the infrared detector, thereby obtaining an infrared thermal image that corresponds to the heat distribution field on the surface of the object.
In the embodiments of the present invention, the temperature radiated by an object is reflected onto the image, each temperature is displayed with a false color, and these false colors are stitched together to form the infrared thermal imaging image. Compared with visible light, infrared thermal imaging is not affected by the strength of ambient light, so a clear infrared thermal imaging image can still be formed in a complex scene or a dark environment.
It should be noted that the positions of the color camera and the infrared thermal imaging camera are fixed relative to each other, and the distance between them is required to be as small as possible, to ensure that the color camera and the infrared thermal imaging camera can shoot the same target subject. In this way, when subsequent image processing or fusion is performed by combining the preview image and the infrared thermal imaging image, good image quality can be obtained.
Step 302: obtain the face location information of the target subject in the preview image.
Optionally, the step of obtaining the face location information of the target subject in the preview image includes: performing face image recognition on the preview image; and obtaining the face location information of the recognized face image.
Performing face image recognition on the preview image may be identifying, through a face recognition technique, whether the preview image contains face image features; if face image features are recognized, it is confirmed that the preview image contains a face image.
On the basis of recognizing face image features, the face location information of the recognized face image is obtained, where the face location information includes the x-direction and y-direction coordinate values of each pixel in the region where the recognized face image is located.
In the embodiments of the present invention, by performing face recognition on the preview image, the face location of the target subject acquired through the color camera can be identified, so that the face location information of the target subject is obtained. In this way, the person image in the preview image can be preliminarily distinguished from other images, and to some extent the mobile terminal can also be prevented from misidentifying other images in the infrared thermal imaging image as the infrared person image corresponding to the target subject.
Step 303: extract the human body contour information of the target subject from the infrared thermal imaging image.
Optionally, the step of extracting the human body contour information of the target subject from the infrared thermal imaging image includes: obtaining the temperature range of the human body; determining, according to a preset correspondence between temperature ranges and false color intervals, the false color interval corresponding to the temperature range of the human body; determining the image whose false color lies in the false color interval corresponding to the temperature range of the human body in the infrared thermal imaging image as the infrared person image of the target subject; and extracting the human body contour information of the infrared person image.
Obtaining the temperature range of the human body may be obtaining a predefined human body temperature range, for example a predefined human body temperature range of 36 °C to 38 °C.
In the embodiments of the present invention, the correspondence between temperature ranges and false color intervals can be preset. The correspondence may be that one temperature value corresponds to one false color, so that one temperature range corresponds to one false color interval; for example, 36 °C corresponds to dark red, 37 °C to red, and 38 °C to deep red. In this way, according to the preset correspondence between temperature ranges and false color intervals, the false color interval corresponding to the temperature range of the human body can be determined as dark red to deep red.
Because the infrared thermal imaging camera forms images in corresponding false colors according to the temperatures radiated by objects, the infrared thermal imaging image may be displayed in a variety of false colors, or may display only the image in the false color interval corresponding to the temperature range of the human body. In this way, in this step, the image whose false color lies in the false color interval corresponding to the temperature range of the human body in the infrared thermal imaging image can be determined as the infrared person image of the target subject; or, if the infrared thermal imaging image displays only the image in the false color interval corresponding to the temperature range of the human body, the image displayed in the infrared thermal imaging image can be directly determined as the infrared person image of the target subject. For example, referring to Fig. 4, which is a schematic diagram of an infrared person image displayed in an infrared thermal imaging image according to an embodiment of the present invention, it should be noted that the false color interval corresponding to the temperature range of the human body is represented with hatching in the figure.
After the infrared person image of the target subject is determined, the location information of the infrared person image can be obtained, and the coordinate values of the edge pixels of the infrared person image can be extracted from the location information of the infrared person image, thereby obtaining the human body contour information of the infrared person image.
In the embodiments of the present invention, the false color interval corresponding to the temperature range of the human body can be determined from the predefined human body temperature range and the preset correspondence between temperature ranges and false color intervals, the location information of the infrared person image in the infrared thermal imaging image can then be determined, and the human body contour information of the infrared person image can be extracted from it. In this way, the mobile terminal can accurately distinguish the infrared person image from other images without interference from the environment.
Step 304: determine, according to the face location information and the human body contour information, the target image region where the target subject is located in the preview image.
Optionally, the step of determining, according to the face location information and the human body contour information, the target image region where the target subject is located in the preview image includes: obtaining the coordinate translation amount between the preview image and the infrared thermal imaging image; performing coordinate conversion on the human body contour information according to the coordinate translation amount to obtain converted contour information in the preview image; judging whether the face location information is within the range of the converted contour information; if the face location information is within the range of the converted contour information, identifying the image region corresponding to the converted contour information in the preview image; and determining the image region corresponding to the converted contour information as the target image region where the target subject is located.
In the embodiments of the present invention, the coordinate conversion formula between the preview image and the infrared thermal imaging image can be derived from the distance between the color camera and the infrared thermal imaging camera, so that the coordinate translation amount is obtained. Taking the case where the color camera and the infrared thermal imaging camera are at the same horizontal position as an example, the way of obtaining the coordinate translation amount is described below with reference to Fig. 5, which is a reference model diagram for dual-camera coordinate conversion according to an embodiment of the present invention.
Suppose the focal length of the color camera and the infrared thermal imaging camera is f, the distance between the color camera and the infrared thermal imaging camera is T, the object distance of a point P is Z, the x-direction coordinate value of point P in the preview image acquired by the color camera is xL, and the x-direction coordinate value of point P in the infrared thermal imaging image acquired by the infrared thermal imaging camera is xR. Because the y-direction coordinate value of a point in the preview image is the same as the y-direction coordinate value of that point in the infrared thermal imaging image, only the x-direction coordinate translation amount between the preview image and the infrared thermal imaging image needs to be obtained.
According to the principle of similar triangles, it can be obtained that xL - xR = fT / (Z + f); according to this formula, the coordinate conversion between the preview image and the infrared thermal imaging image is xL = fT / (Z + f) + xR.
It should be noted that, for the case where the color camera and the infrared thermal imaging camera are at the same vertical position, the x-direction coordinate value of point P in the preview image is the same as the x-direction coordinate value of that point in the infrared thermal imaging image, and only the y-direction coordinate translation amount between the preview image and the infrared thermal imaging image needs to be obtained. The derivation is similar to that in the example above and, to avoid repetition, is not described here again.
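A small sketch of this conversion, under the horizontal-layout assumption and the formula above (xL = fT / (Z + f) + xR), is given below; the parameter values are placeholders, and unit consistency between image coordinates and the baseline is assumed for simplicity.

```python
# Illustrative sketch: convert x coordinates from the infrared thermal image to
# the color preview image using the translation amount fT/(Z + f) derived above.
# Focal length f, baseline T and object distance Z are placeholder values.
def x_translation(f_mm, baseline_mm, object_distance_mm):
    """Coordinate translation amount along x for a horizontally aligned camera pair."""
    return f_mm * baseline_mm / (object_distance_mm + f_mm)

def convert_contour_x(contour_xy, f_mm, baseline_mm, object_distance_mm):
    """Shift each (x, y) contour point of the infrared person image into preview coordinates."""
    dx = x_translation(f_mm, baseline_mm, object_distance_mm)
    return [(x + dx, y) for (x, y) in contour_xy]

if __name__ == "__main__":
    contour = [(100.0, 80.0), (120.0, 80.0), (120.0, 200.0), (100.0, 200.0)]
    print(convert_contour_x(contour, f_mm=4.0, baseline_mm=10.0, object_distance_mm=1500.0))
```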
After the coordinate translation amount is determined, coordinate conversion can be performed on the human body contour information of the infrared person image using the coordinate translation amount, so as to obtain the converted contour information in the preview image, where the converted contour information in the preview image can be understood as the contour information of the image obtained after the infrared person image in the infrared thermal imaging image is moved into the preview image by the coordinate translation amount.
In the embodiments of the present invention, whether the face location information is within the range of the converted contour information can be judged, so as to determine, according to the judgment result, whether the face location information obtained in step 302 is indeed the face location information of the target subject. Judging whether the face location information is within the range of the converted contour information may be judging whether most of the pixels in the region corresponding to the face location information lie inside the region enclosed by the converted contour information.
If the face location information is within the range of the converted contour information, the image region corresponding to the converted contour information can be identified in the preview image, and that region is determined as the target image region where the target subject is located. It should be noted that, if the face location information is not within the range of the converted contour information, it can be determined that the face location information obtained in step 302 is not the face location information of the target subject; in that case misidentification is judged to have occurred, and the obtained face location information is not determined as the face location information of the target subject.
For example, referring to Fig. 6, which is a schematic diagram of the positions of the face image and the converted human body contour in the preview image according to an embodiment of the present invention, region B and region E in the preview image are face image regions determined according to step 302, and region D is the human body contour region determined according to the converted contour information. As shown in the figure, region B is inside region D, while region E is outside region D; it can therefore be recognized that region B is the face image region corresponding to the face location information of the target subject, the image region corresponding to the converted contour information can be determined as the target image region where the target subject is located, and the image in region E is identified as a background image.
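The sketch below illustrates one way to perform this check, assuming the converted contour is available as a point list and the face region as a rectangle; the majority threshold of 0.5 is an illustrative choice, not a value specified by the patent.

```python
# Illustrative sketch: decide whether a detected face rectangle lies (mostly)
# inside the converted human body contour.
import cv2
import numpy as np

def face_in_contour(face_rect, contour_xy, image_shape, min_fraction=0.5):
    """face_rect: (x, y, w, h); contour_xy: list of (x, y); image_shape: (H, W)."""
    mask = np.zeros(image_shape, np.uint8)
    cv2.fillPoly(mask, [np.asarray(contour_xy, np.int32)], 255)
    x, y, w, h = face_rect
    face_patch = mask[y:y + h, x:x + w]
    inside = np.count_nonzero(face_patch) / max(w * h, 1)
    return inside >= min_fraction

if __name__ == "__main__":
    body = [(100, 40), (220, 40), (220, 230), (100, 230)]        # converted contour (region D)
    print(face_in_contour((130, 60, 50, 60), body, (240, 320)))  # face inside -> True
    print(face_in_contour((10, 60, 50, 60), body, (240, 320)))   # face outside -> False
```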
In the embodiments of the present invention, coordinate conversion is performed on the human body contour information to obtain the converted contour information in the preview image, and the target image region where the target subject is located is further determined by comparing the face location information with the converted contour information. This greatly improves the accuracy with which the mobile terminal distinguishes the photographed target person from the background, and the infrared thermal imaging camera effectively solves the problem that the face image cannot be recognized through the color camera in a poorly lit environment.
Optionally, after the step of determining, according to the face location information and the human body contour information, the target image region where the target subject is located in the preview image, the method further includes: if the preview image includes at least two target image regions, performing blur processing on the non-face image regions of the at least two target image regions respectively, where the non-face image regions of the at least two target image regions are all the image regions in the at least two target image regions other than the face image regions.
The preview image including at least two target image regions can be understood as the at least two target image regions determined through the color camera and the infrared thermal imaging camera when at least two target subjects are photographed.
In the embodiments of the present invention, blur processing can be performed respectively on the non-face image regions of the at least two target image regions in the preview image, where the non-face image regions of the at least two target image regions are all the image regions in the at least two target image regions other than the face image regions. For example, referring to Fig. 7, which is a schematic diagram of the positions of the face image regions and non-face image regions of two target image regions in the preview image according to an embodiment of the present invention, region B1 shown in the figure is the face image region of the first target image region, region B2 is the non-face image region of the first target image region, region C1 is the face image region of the second target image region, and region C2 is the non-face image region of the second target image region.
A specific blur processing manner may be: removing the face images of the at least two target image regions from the preview image, performing blur processing on the non-face image regions in the remaining preview image according to a preset blur processing algorithm, and then compositing the removed face images of the at least two target image regions with the blurred non-face image regions. In this way, an image highlighting the facial features of the photographed persons can be previewed and shot through the color camera, so that better image quality can be obtained.
Optionally, the step of performing blur processing on the non-face image regions of the at least two target image regions respectively includes: setting a reference face subregion in the face image region of each target image region; dividing each non-face image region into at least two non-face image subregions according to the distance between each pixel in the non-face image region and the face image region; determining the blur level of each non-face image subregion according to its distance from the reference face subregion in the face image region; removing the face image region of each target image region; performing blur processing on each non-face image subregion according to its blur level; and compositing the removed face image regions with the blurred non-face image subregions.
Setting the reference face subregion in the face image region of each target image region may be: determining a central point according to several edge pixels of the face image region of each target image region and determining the corresponding reference face subregion with the central point as the center; or determining a reference depth of field according to the depths of field of several pixels in the face image region of each target image region and then determining the corresponding reference face subregion from that reference depth of field, where the depth of field of each pixel in each reference face subregion matches the corresponding reference depth of field.
In the embodiments of the present invention, each non-face image region can be divided into at least two non-face image subregions according to the distance between each pixel in the non-face image region and the corresponding face image region, together with a preset division level. The number of subregions into which each non-face image region is divided can be determined according to the preset division level; for example, if the preset division level is 0, each non-face image region is divided into an upper-body region and a lower-body region; if the preset division level is 1, then on the basis of division level 0, the upper-body region and the lower-body region of each non-face image region are each further divided into an upper and a lower region.
Referring to Fig. 8, which is a schematic diagram of the region division of two non-face image regions according to an embodiment of the present invention, suppose the preset division level is 1, and a first distance range, a second distance range, a third distance range, and a fourth distance range of the pixels in each non-face image region from the corresponding face image region are preset accordingly. As shown in Fig. 8, region B1 is the face image region of the first target image region and region C1 is the face image region of the second target image region; according to the preset division level and distance ranges, the non-face image region B2 of the first target image region can be divided into four subregions B21, B22, B23, and B24, and the non-face image region C2 of the second target image region can be divided into four subregions C21, C22, C23, and C24.
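A sketch of this distance-based subdivision follows, assuming binary masks for a face image region and its non-face body region; the number of bands corresponds to the division-level example above, and the band edges are illustrative values.

```python
# Illustrative sketch: split a non-face body region into distance bands measured
# from the face region (four subregions here). Band edges are arbitrary pixel values.
import cv2
import numpy as np

def divide_by_distance(face_mask, body_mask, band_edges=(40, 80, 120)):
    """Return an int label map: 0 outside body_mask, 1..len(band_edges)+1 per distance band."""
    # Distance of every pixel to the nearest face pixel (face pixels form the zero set).
    dist = cv2.distanceTransform((face_mask == 0).astype(np.uint8), cv2.DIST_L2, 3)
    labels = np.digitize(dist, band_edges) + 1          # 1..4 for three edges
    labels[body_mask == 0] = 0                          # keep labels only inside the body region
    return labels

if __name__ == "__main__":
    face = np.zeros((240, 320), np.uint8); cv2.rectangle(face, (140, 40), (180, 90), 255, -1)
    body = np.zeros((240, 320), np.uint8); cv2.rectangle(body, (110, 90), (210, 230), 255, -1)
    print(np.unique(divide_by_distance(face, body)))    # e.g. [0 1 2 3 4]
```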
After the reference face subregion in each face image region is set and each non-face image region is divided into subregions, the blur level of each non-face image subregion can be determined according to the distance between that subregion and the reference face subregion of the corresponding face image region. Specifically, the blur level of each non-face image subregion can change with its distance from the reference face subregion of the corresponding face image region; for example, the closer the distance, the lower the blur level, and the farther the distance, the higher the blur level. In this way, different levels of blur can be applied according to the distance of each non-face image subregion from the reference face subregion of the corresponding face image region, so as to obtain a blur shooting effect with a sense of depth.
In the embodiments of the present invention, blur processing can be performed on the preview image according to the blur level of each non-face image subregion. Specifically, the face image region of each target image region in the preview image can be removed, and blur processing can be performed on each non-face image subregion respectively according to the determined blur level.
After blur processing is performed on each non-face image subregion, each removed face image region is composited with the blurred non-face image subregions, where the image compositing can be understood as putting each removed face image region back into the corresponding face image region of the blurred preview image at the position it occupied before removal.
In this way, in the embodiments of the present invention, when the mobile terminal photographs at least two target persons through the color camera and the infrared thermal imaging camera, different levels of blur processing can be applied to the non-face image subregions of each target image region in the preview image acquired by the color camera, thereby achieving a blur shooting effect that highlights the facial features of the target persons and has a good sense of depth.
It should be noted that the manner of performing blur processing on the non-face image subregions of each target image region in this embodiment of the present invention can also be applied to a scene in which only one target subject is photographed; the specific manner of determining the blur level is similar to the implementation details in this embodiment and, to avoid repetition, is not described here again.
Optionally, the step of setting the reference face subregion in the face image region of each target image region includes: calculating the depth-of-field information of each pixel in the face image region of each target image region respectively; calculating the average depth of field of each face image region respectively; and determining the subregion of each face image region whose depth of field is the average depth of field of that face image region as the reference face subregion, where the average depth of field of each face image region is the average depth of field of at least two pixels in that face image region.
In the embodiments of the present invention, the corresponding reference face subregion can be determined according to the depth-of-field information of each pixel in the face image region of each target image region, so the depth of field of each pixel in the face image region of each target image region can be calculated respectively, where the depth of field refers to the range of subject distance, measured in front of a camera lens or other imager, within which a sharp image can be obtained. Referring to Fig. 9, which is a schematic diagram of a depth-of-field computation model according to an embodiment of the present invention, the calculation of the depth of field is described below with reference to Fig. 9.
As shown in Fig. 9, the depth of field ΔL of a pixel includes the front depth of field ΔL1 and the rear depth of field ΔL2, where the depth of field ΔL is related to the circle-of-confusion diameter δ, the lens focal length f, the shooting f-number F of the lens, and the focus distance L. The depth of field can be calculated as ΔL = ΔL1 + ΔL2 = 2f²FδL² / (f⁴ - F²δ²L²), with ΔL1 = FδL² / (f² + FδL) and ΔL2 = FδL² / (f² - FδL).
The depth of field of each pixel in the face image region of each target image region can be calculated according to this formula.
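The sketch below evaluates the depth-of-field formula above for a single focus distance; the numeric values of F, δ, f, and L are placeholders for illustration.

```python
# Illustrative sketch: evaluate the depth-of-field formula above.
# F: f-number, delta: circle-of-confusion diameter, f: focal length,
# L: focus distance (all placeholder values, in millimetres where applicable).
def depth_of_field(F, delta, f, L):
    """Return (front DoF, rear DoF, total DoF) for one focus distance L."""
    front = F * delta * L**2 / (f**2 + F * delta * L)
    rear = F * delta * L**2 / (f**2 - F * delta * L)
    return front, rear, front + rear

if __name__ == "__main__":
    dl1, dl2, dl = depth_of_field(F=2.0, delta=0.005, f=4.0, L=1500.0)
    print(round(dl1, 1), round(dl2, 1), round(dl, 1))   # millimetres
```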
Calculating the average depth of field of each face image region may be: first determining at least two pixels from each face image region and then calculating the average depth of field according to the depth-of-field information of those pixels. For example, the average of the depth-of-field information of the upper and lower edge pixels of each face image region can be calculated to obtain the average depth of field, or the average of the depth-of-field information of all pixels in each face image region can be calculated to obtain the average depth of field.
Determining the subregion of each face image region whose depth of field is the average depth of field of that face image region as the reference face subregion may be: according to the obtained depth-of-field information of each pixel in the face image region of each target image region, determining the region formed by the pixels in the corresponding face image region whose depth of field equals the average depth of field as the reference face subregion of that face image region.
In this way, in the embodiments of the present invention, the reference face subregion of the corresponding face image region is obtained by calculation according to the depth-of-field information of each pixel in the face image region of each target image region, so that the distance of each non-face image subregion from the corresponding reference face subregion can be determined according to the position of each reference face subregion, and the different blur levels of the non-face image subregions can then be determined, so as to obtain a blur shooting effect with a good sense of depth.
Step 305: setting a reference picture subregion of the object region.
In setting the above-mentioned reference picture subregion of the object region, a central point may be determined from several edge pixel points of the object region and a reference picture subregion may be determined with that central point as its centre; alternatively, an average reference depth of field may be determined from the depths of field of several pixels in the object region, and a reference picture subregion may then be determined based on this average reference depth of field, wherein the depth of field of each pixel in the reference picture subregion matches the average reference depth of field.
In this way, by setting the reference picture subregion of the object region, each pixel of the preview image lying outside the object region can be given a different virtualization grade with reference to its distance from the reference picture subregion, so as to obtain a background-blurred shooting effect with a better sense of depth.
Optionally, the step of setting the reference picture subregion of the object region comprises: separately calculating the depth of view information of each pixel in the object region; calculating the average depth of field of at least two pixels in the object region; and determining the subregion of the object region whose depth of field is the average depth of field as the reference picture subregion.
In the embodiment of the present invention, the reference picture subregion can be determined according to the depth of view information of each pixel in the object region, so the depth of field of each pixel in the object region is calculated first; for the formula used to calculate the depth of field of each pixel in the object region, reference may be made to the related description in the previous embodiment, which is not repeated here.
In the above-mentioned calculation of the average depth of field of at least two pixels in the object region, at least two pixels may first be selected from the object region, and the average depth of field is then calculated from the depth of view information of those pixels. For example, the average of the depth of view information of the upper and lower edge pixel points of the object region may be calculated to obtain the average depth of field, or the average of the depth of view information of all pixels in the object region may be calculated to obtain the average depth of field.
In the above-mentioned determination of the subregion of the object region whose depth of field is the average depth of field as the reference picture subregion, the region formed by the pixels of the object region whose depth of field equals the average depth of field may, according to the obtained depth of view information of each pixel in the object region, be determined as the reference picture subregion.
In this way, in the embodiment of the present invention, the reference picture subregion of the object region is obtained by calculation from the depth of view information of each pixel in the object region, so that, according to the position of the reference picture subregion, a different virtualization grade can be determined for each pixel of the preview image lying outside the object region, so as to obtain a background-blurred shooting effect with a better sense of depth.
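A minimal sketch of this selection step is given below, assuming a per-pixel depth map and a mask of the object region are already available; the function name, the tolerance parameter and the use of NumPy are illustrative assumptions rather than part of the original disclosure.

```python
import numpy as np

def select_reference_subregion(depth_map, object_mask, tol=0.05):
    """Pick the reference picture subregion of the object region.

    depth_map:   HxW array of per-pixel depth of view information
    object_mask: HxW bool array, True inside the object region
    tol:         relative tolerance used to decide that a pixel's depth
                 "matches" the average depth of field (assumed value)
    """
    depths = depth_map[object_mask]
    avg_depth = depths.mean()                      # average depth of field of the region
    # pixels of the object region whose depth of field equals (matches) the average
    ref_mask = object_mask & (np.abs(depth_map - avg_depth) <= tol * avg_depth)
    return ref_mask, avg_depth
```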
Step 306: determining the virtualization grade of each pixel in the non-object image region according to that pixel's distance from the reference picture subregion, wherein the non-object image region is all the image regions of the preview image other than the object region.
The above-mentioned non-object image region is all the image regions of the preview image other than the object region. For example, referring to Figure 10, Figure 10 is a schematic diagram of the positions of the object region and the non-object image region in a preview image provided by an embodiment of the present invention; region B shown in the figure is the object region and region F is the non-object image region.
In this step, the virtualization grade of each pixel in the non-object image region can be determined according to that pixel's distance from the reference picture subregion; specifically, the virtualization grade of each pixel in the non-object image region can change with its distance from the reference picture subregion, for example, the closer the distance, the smaller the virtualization grade, and the farther the distance, the larger the virtualization grade. In this way, different grades of virtualization can be applied within the non-object image region according to each pixel's distance from the reference picture subregion, so as to obtain a background-blurred shooting effect with a sense of depth.
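One possible distance-to-grade mapping is sketched below; the number of grades, the Euclidean distance transform and the SciPy dependency are assumptions made for illustration, not requirements of the patent.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def virtualization_grades(ref_mask, background_mask, num_grades=4):
    """Assign a virtualization grade to every pixel of the non-object image region:
    the farther a pixel is from the reference picture subregion, the larger its grade."""
    dist = distance_transform_edt(~ref_mask)            # distance to the reference subregion
    max_dist = dist[background_mask].max()
    grades = np.ceil(num_grades * dist / max(max_dist, 1e-6)).astype(int)
    grades[~background_mask] = 0                         # only the background is blurred
    return grades
```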
It should be noted that when the non-object image region and each non-face image subregion are blurred at the same time, the virtualization grade of each non-face image subregion may be made smaller than the virtualization grade of every pixel in the non-object image region; in this way, after virtualization processing is carried out on the preview image, a shooting effect is obtained in which the image becomes gradually more blurred from the non-face image subregions to the non-object image region.
Step 307: removing the object region from the preview image.
In this step, in order to carry out virtualization processing on the non-object image region of the preview image, the object region can be removed from the preview image; specifically, the image located in the object region can be extracted, leaving a preview image that contains only the image of the non-object image region. In this way, virtualization processing can be carried out on the remaining preview image according to the virtualization grade of each pixel in the non-object image region determined in step 306, and the image in the object region is effectively prevented from being blurred by mistake.
Step 308: carrying out virtualization processing on each pixel of the non-object image region according to the virtualization grade of that pixel.
In this step, after the object region has been removed, virtualization processing can be carried out on each pixel according to the virtualization grade of that pixel in the non-object image region determined in step 306, so that the image at each pixel is blurred by its corresponding grade; a virtualization shooting effect in which the target person image is clear and the background image is blurred can thereby be obtained.
Step 309: carrying out image synthesis on the removed object region and the virtualized non-object image region.
In this step, after the corresponding virtualization processing has been applied to the non-object image region, the removed object region is combined with the virtualized non-object image region by image synthesis, where image synthesis can be understood as putting the removed object region back into the virtualized preview image at the position it occupied before removal. In this way, the synthesized preview image has a shooting effect in which the target person image is clear and the background image is virtualized.
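Steps 307 to 309 can be pictured with the following sketch, which assumes OpenCV's Gaussian blur as the blurring primitive and a simple grade-to-kernel-size rule; both choices, and the function name, are illustrative assumptions only.

```python
import cv2
import numpy as np

def blur_background_and_composite(preview, object_mask, grades):
    """Cut out the object region, blur the rest of the preview image per grade,
    then paste the object region back at its original position (steps 307-309)."""
    result = preview.copy()
    for g in np.unique(grades[~object_mask]):
        if g == 0:
            continue
        k = 2 * int(g) + 1                          # bigger grade -> bigger blur kernel
        blurred = cv2.GaussianBlur(preview, (k, k), 0)
        sel = (grades == g) & ~object_mask          # only background pixels of this grade
        result[sel] = blurred[sel]
    result[object_mask] = preview[object_mask]      # image synthesis: restore the target
    return result
```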
The image processing method of the embodiment of the present invention is applied to a mobile terminal that includes a colour camera and an infrared thermal imaging camera: the preview image and the infrared thermal imaging image collected by the colour camera and the infrared thermal imaging camera respectively for the same target reference object are obtained; the face location information of the target reference object in the preview image is obtained; the human body contour outline information of the target reference object is extracted from the infrared thermal imaging image; the object region where the target reference object is located in the preview image is determined according to the face location information and the human body contour outline information; and virtualization processing is carried out on the preview image according to the object region. Since the infrared thermal imaging camera can detect the temperature radiated by objects, the mobile terminal can still determine the human body contour outline information of the target reference object through the infrared thermal imaging camera even in a complicated shooting scene or under weak light; the mobile terminal is therefore not disturbed by the environment, can accurately distinguish the target person image from the background image, and carries out virtualization processing on the preview image according to the object region where the target reference object is located, thereby obtaining a better virtualization shooting effect.
Referring to Figure 11, Figure 11 is a structural diagram of a mobile terminal provided by an embodiment of the present invention; the mobile terminal can realize the details of the image processing method in the method embodiments of Fig. 1 and Fig. 3 and achieve the same effect. The mobile terminal includes a colour camera and an infrared thermal imaging camera; as shown in Figure 11, the mobile terminal 1100 further includes a first acquisition module 1110, a second acquisition module 1120, an extraction module 1130, a determining module 1140 and a first virtualization processing module 1150, wherein the first acquisition module 1110 is connected with the second acquisition module 1120, the first acquisition module 1110 is also connected with the extraction module 1130, the second acquisition module 1120 is also connected with the determining module 1140, the extraction module 1130 is also connected with the determining module 1140, and the determining module 1140 is also connected with the first virtualization processing module 1150, in which:
a first acquisition module 1110, for obtaining the preview image and the infrared thermal imaging image respectively collected by the colour camera and the infrared thermal imaging camera for the same target reference object;
a second acquisition module 1120, for obtaining the face location information of the target reference object in the preview image;
an extraction module 1130, for extracting the human body contour outline information of the target reference object from the infrared thermal imaging image;
a determining module 1140, for determining, according to the face location information and the human body contour outline information, the object region where the target reference object is located in the preview image;
a first virtualization processing module 1150, for carrying out virtualization processing on the preview image according to the object region.
Optionally, as shown in figure 12, the extraction module 1130 includes:
a first acquisition unit 1131, for obtaining the temperature range of a human body;
a first determination unit 1132, for determining, according to a preset corresponding relationship between temperature ranges and false colour sections, the false colour section corresponding to the temperature range of the human body;
a second determination unit 1133, for determining the image in the infrared thermal imaging image whose false colour lies within the false colour section corresponding to the temperature range of the human body as the infrared character image of the target reference object;
an extraction unit 1134, for extracting the human body contour outline information of the infrared character image.
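The temperature-to-false-colour lookup and contour extraction performed by these units might look roughly like the sketch below; the 30-40 degree body range, the linear palette mapping and the OpenCV contour call are all assumptions made for illustration.

```python
import cv2
import numpy as np

BODY_TEMP_RANGE = (30.0, 40.0)     # assumed human temperature range, in degrees Celsius
TEMP_MIN, TEMP_MAX = 0.0, 60.0     # assumed temperature span of the false-colour palette

def extract_body_contour(thermal_gray):
    """Keep the pixels whose false colour corresponds to the human temperature range
    (the infrared character image), then extract the human body contour from them."""
    lo = 255.0 * (BODY_TEMP_RANGE[0] - TEMP_MIN) / (TEMP_MAX - TEMP_MIN)
    hi = 255.0 * (BODY_TEMP_RANGE[1] - TEMP_MIN) / (TEMP_MAX - TEMP_MIN)
    body = np.where((thermal_gray >= lo) & (thermal_gray <= hi), 255, 0).astype(np.uint8)
    contours, _ = cv2.findContours(body, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```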
Optionally, as shown in figure 13, the determining module 1140 includes:
a second acquisition unit 1141, for obtaining the coordinate amount of translation between the preview image and the infrared thermal imaging image;
a converting unit 1142, for performing coordinate conversion on the human body contour outline information according to the coordinate amount of translation, to obtain the transformed profile information in the preview image;
a judging unit 1143, for judging whether the face location information lies within the range of the transformed profile information;
a first recognition unit 1144, for identifying, if the face location information lies within the range of the transformed profile information, the image region of the preview image corresponding to the transformed profile information;
a third determination unit 1145, for determining the image region corresponding to the transformed profile information as the object region where the target reference object is located.
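A minimal sketch of this coordinate conversion and face-inside-contour check is given below, assuming the coordinate amount of translation reduces to a per-axis scale and offset between the two camera frames; the function name and parameters are illustrative.

```python
import cv2
import numpy as np

def locate_object_region(contour_ir, face_center, scale, offset):
    """Map the thermal-image contour into preview-image coordinates and check
    whether the detected face centre falls inside the transformed contour."""
    contour_preview = (contour_ir.astype(np.float32) * scale + offset).astype(np.int32)
    pt = (float(face_center[0]), float(face_center[1]))
    inside = cv2.pointPolygonTest(contour_preview, pt, False) >= 0
    return contour_preview if inside else None     # the object region outline, or None
```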
Optionally, as shown in figure 14, the mobile terminal 1100 further include:
a second virtualization processing module 1160, for carrying out virtualization processing, if the preview image includes at least two object regions, on the non-face image regions of the at least two object regions respectively;
wherein the non-face image regions of the at least two object regions are all the image regions of the at least two object regions other than the facial image regions.
Optionally, as shown in figure 15, the second virtualization processing module 1160 includes:
a first setting unit 1161, for setting the reference face region in the facial image region of each object region;
a division unit 1162, for dividing each non-face image region into at least two non-face image subregions according to the distance of each pixel in that non-face image region from the facial image region;
a fourth determination unit 1163, for determining the virtualization grade of each non-face image subregion according to its distance from the reference face region of the facial image region;
a first removing unit 1164, for removing the facial image region from each object region;
a first virtualization processing unit 1165, for carrying out virtualization processing on each non-face image subregion according to the virtualization grade of that subregion;
a first image composing unit 1166, for carrying out image synthesis on the removed facial image regions and the virtualized non-face image subregions.
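When several people are in the frame, the division into non-face image subregions by distance from the face region could be sketched as follows; the two-ring split and the grade cap (kept below the grades used for the background) are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def face_relative_grades(face_mask, body_mask, num_rings=2, max_grade=2):
    """Split the non-face part of one object region into distance rings around the
    face region and give them small virtualization grades, kept below the grades
    used for the non-object image region."""
    grades = np.zeros(face_mask.shape, dtype=int)
    non_face = body_mask & ~face_mask
    if not non_face.any():
        return grades
    dist = distance_transform_edt(~face_mask)        # distance to the face region
    edges = np.linspace(0.0, dist[non_face].max() + 1e-6, num_rings + 1)
    for ring in range(num_rings):
        sel = non_face & (dist > edges[ring]) & (dist <= edges[ring + 1])
        grades[sel] = min(ring + 1, max_grade)       # nearer ring -> smaller grade
    return grades
```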
Optionally, as shown in figure 16, first setting unit 1161 includes:
a first computation subunit 11611, for separately calculating the depth of view information of each pixel in the facial image region of each object region;
a second computation subunit 11612, for separately calculating the average depth of field of each facial image region;
a first determination subunit 11613, for determining, in each facial image region, the subregion whose depth of field is the average depth of field of that facial image region as the reference face region;
wherein the average depth of field of each facial image region is the average depth of field of at least two pixels in that facial image region.
Optionally, as shown in figure 17, the first virtualization processing module 1150, comprising:
a second setting unit 1151, for setting the reference picture subregion of the object region;
a fifth determination unit 1152, for determining the virtualization grade of each pixel in the non-object image region according to that pixel's distance from the reference picture subregion;
a second removing unit 1153, for removing the object region from the preview image;
a second virtualization processing unit 1154, for carrying out virtualization processing on each pixel of the non-object image region according to the virtualization grade of that pixel;
a second image composing unit 1155, for carrying out image synthesis on the removed object region and the virtualized non-object image region;
wherein the non-object image region is all the image regions of the preview image other than the object region.
Optionally, as shown in figure 18, second setting unit 1151 includes:
a third computation subunit 11511, for separately calculating the depth of view information of each pixel in the object region;
a fourth computation subunit 11512, for calculating the average depth of field of at least two pixels in the object region;
a second determination subunit 11513, for determining the subregion of the object region whose depth of field is the average depth of field as the reference picture subregion.
Optionally, as shown in figure 19, the second acquisition module 1120 includes:
a second recognition unit 1121, for carrying out facial image recognition on the preview image;
a third acquisition unit 1122, for obtaining the face location information of the recognized facial image.
The mobile terminal of the embodiment of the present invention includes a colour camera and an infrared thermal imaging camera: the preview image and the infrared thermal imaging image collected by the colour camera and the infrared thermal imaging camera respectively for the same target reference object are obtained; the face location information of the target reference object in the preview image is obtained; the human body contour outline information of the target reference object is extracted from the infrared thermal imaging image; the object region where the target reference object is located in the preview image is determined according to the face location information and the human body contour outline information; and virtualization processing is carried out on the preview image according to the object region. Since the infrared thermal imaging camera can detect the temperature radiated by objects, the mobile terminal can still determine the human body contour outline information of the target reference object through the infrared thermal imaging camera even in a complicated shooting scene or under weak light; the mobile terminal is therefore not disturbed by the environment, can accurately distinguish the target person image from the background image, and carries out virtualization processing on the preview image according to the object region where the target reference object is located, thereby obtaining a better virtualization shooting effect.
An embodiment of the present invention also provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and runnable on the processor; when executed by the processor, the computer program realizes each process of the image processing method embodiments described above and can achieve the same technical effect, which, to avoid repetition, is not described here again.
An embodiment of the present invention also provides a computer readable storage medium on which a computer program is stored; when executed by a processor, the computer program realizes each process of the image processing method embodiments described above and can achieve the same technical effect, which, to avoid repetition, is not described here again. The computer readable storage medium may be, for example, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disc.
Referring to Figure 20, Figure 20 is a structural diagram of a mobile terminal provided by an embodiment of the present invention; the mobile terminal can realize the details of the image processing method in the method embodiments of Fig. 1 and Fig. 3 and achieve the same effect. As shown in Figure 20, the mobile terminal includes a colour camera 2010 and an infrared thermal imaging camera 2020, and the mobile terminal 2000 further includes at least one processor 2001, a memory 2002, at least one network interface 2004 and other user interfaces 2003. The components of the mobile terminal 2000 are coupled by a bus system 2005. It can be understood that the bus system 2005 is used to realize connection and communication between these components. Besides a data bus, the bus system 2005 further includes a power bus, a control bus and a status signal bus; for clarity of explanation, however, the various buses are all designated as the bus system 2005 in Figure 20.
The user interface 2003 may include a display, a keyboard or a pointing device (for example, a mouse, a trackball, a touch-sensitive pad or a touch screen).
It can be appreciated that the memory 2002 in the embodiment of the present invention may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM) or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (SynchLink DRAM, SLDRAM) and direct Rambus random access memory (Direct Rambus DRAM, DRDRAM). The memory 2002 of the systems and methods described herein is intended to include, but is not limited to, these and any other suitable types of memory.
In some embodiments, the memory 2002 stores the following elements, executable modules or data structures, or a subset or superset thereof: an operating system 20021 and application programs 20022.
The operating system 20021 includes various system programs, such as a framework layer, a core library layer and a driver layer, for realizing various basic services and processing hardware-based tasks. The application programs 20022 include various applications, such as a media player (Media Player) and a browser (Browser), for realizing various application services. A program implementing the method of the embodiment of the present invention may be included in the application programs 20022.
In the embodiment of the present invention, the mobile terminal 2000 further includes a computer program stored on the memory 2002 and runnable on the processor 2001; when executed by the processor 2001, the computer program realizes the following steps: obtaining the preview image and the infrared thermal imaging image respectively collected by the colour camera and the infrared thermal imaging camera for the same target reference object; obtaining the face location information of the target reference object in the preview image; extracting the human body contour outline information of the target reference object from the infrared thermal imaging image; determining, according to the face location information and the human body contour outline information, the object region where the target reference object is located in the preview image; and carrying out virtualization processing on the preview image according to the object region.
The methods disclosed in the embodiments of the present invention may be applied in, or realized by, the processor 2001. The processor 2001 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method may be completed by an integrated logic circuit in hardware or by instructions in software form in the processor 2001. The processor 2001 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being executed and completed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a computer readable storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or electrically erasable programmable memory, or a register. The computer readable storage medium is located in the memory 2002; the processor 2001 reads the information in the memory 2002 and completes the steps of the above method in combination with its hardware.
It can be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode or a combination thereof. For a hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be realized by modules (such as procedures and functions) that perform the functions described herein. Software code may be stored in a memory and executed by a processor. The memory may be implemented inside or outside the processor.
Optionally, when executed by the processor 2001, the computer program may further realize the following steps: obtaining the temperature range of a human body; determining, according to a preset corresponding relationship between temperature ranges and false colour sections, the false colour section corresponding to the temperature range of the human body; determining the image in the infrared thermal imaging image whose false colour lies within the false colour section corresponding to the temperature range of the human body as the infrared character image of the target reference object; and extracting the human body contour outline information of the infrared character image.
Optionally, when executed by the processor 2001, the computer program may further realize the following steps: obtaining the coordinate amount of translation between the preview image and the infrared thermal imaging image; performing coordinate conversion on the human body contour outline information according to the coordinate amount of translation, to obtain the transformed profile information in the preview image; judging whether the face location information lies within the range of the transformed profile information; if the face location information lies within the range of the transformed profile information, identifying the image region of the preview image corresponding to the transformed profile information; and determining the image region corresponding to the transformed profile information as the object region where the target reference object is located.
Optionally, when executed by the processor 2001, the computer program may further realize the following steps: if the preview image includes at least two object regions, carrying out virtualization processing on the non-face image regions of the at least two object regions respectively, wherein the non-face image regions of the at least two object regions are all the image regions of the at least two object regions other than the facial image regions.
Optionally, when executed by the processor 2001, the computer program may further realize the following steps: setting the reference face region in the facial image region of each object region; dividing each non-face image region into at least two non-face image subregions according to the distance of each pixel in that non-face image region from the facial image region; determining the virtualization grade of each non-face image subregion according to its distance from the reference face region of the facial image region; removing the facial image region from each object region; carrying out virtualization processing on each non-face image subregion according to the virtualization grade of that subregion; and carrying out image synthesis on the removed facial image regions and the virtualized non-face image subregions.
Optionally, when executed by the processor 2001, the computer program may further realize the following steps: separately calculating the depth of view information of each pixel in the facial image region of each object region; separately calculating the average depth of field of each facial image region; and determining, in each facial image region, the subregion whose depth of field is the average depth of field of that facial image region as the reference face region, wherein the average depth of field of each facial image region is the average depth of field of at least two pixels in that facial image region.
Optionally, when executed by the processor 2001, the computer program may further realize the following steps: setting the reference picture subregion of the object region; determining the virtualization grade of each pixel in the non-object image region according to that pixel's distance from the reference picture subregion; removing the object region from the preview image; carrying out virtualization processing on each pixel of the non-object image region according to the virtualization grade of that pixel; and carrying out image synthesis on the removed object region and the virtualized non-object image region, wherein the non-object image region is all the image regions of the preview image other than the object region.
Optionally, when executed by the processor 2001, the computer program may further realize the following steps: separately calculating the depth of view information of each pixel in the object region; calculating the average depth of field of at least two pixels in the object region; and determining the subregion of the object region whose depth of field is the average depth of field as the reference picture subregion.
Optionally, when executed by the processor 2001, the computer program may further realize the following steps: carrying out facial image recognition on the preview image; and obtaining the face location information of the recognized facial image.
The mobile terminal of the embodiment of the present invention includes a colour camera and an infrared thermal imaging camera: the preview image and the infrared thermal imaging image collected by the colour camera and the infrared thermal imaging camera respectively for the same target reference object are obtained; the face location information of the target reference object in the preview image is obtained; the human body contour outline information of the target reference object is extracted from the infrared thermal imaging image; the object region where the target reference object is located in the preview image is determined according to the face location information and the human body contour outline information; and virtualization processing is carried out on the preview image according to the object region. Since the infrared thermal imaging camera can detect the temperature radiated by objects, the mobile terminal can still determine the human body contour outline information of the target reference object through the infrared thermal imaging camera even in a complicated shooting scene or under weak light; the mobile terminal is therefore not disturbed by the environment, can accurately distinguish the target person image from the background image, and carries out virtualization processing on the preview image according to the object region where the target reference object is located, thereby obtaining a better virtualization shooting effect.
Referring to Figure 21, Figure 21 is a structural diagram of a mobile terminal provided in an embodiment of the present invention; the mobile terminal can realize the details of the image processing method in the method embodiments of Fig. 1 and Fig. 3 and achieve the same effect. As shown in Figure 21, the mobile terminal includes a colour camera 2110 and an infrared thermal imaging camera 2120, and the mobile terminal 2100 further includes a radio frequency (Radio Frequency, RF) circuit 2101, a memory 2102, an input unit 2103, a display unit 2104, a processor 2106, an audio circuit 2107, a WiFi (Wireless Fidelity) module 2108 and a power supply 2109.
Wherein, input unit 2103 can be used for receiving the number or character information of user's input, and generate with movement eventually The related signal input of the user setting and function control at end 2100.Specifically, in the embodiment of the present invention, the input unit 2103 may include touch panel 21031.Touch panel 21031, also referred to as touch screen collect user on it or nearby Touch operation (for example user uses the operations of any suitable object or attachment on touch panel 21031 such as finger, stylus), And corresponding attachment device is driven according to preset formula.Optionally, touch panel 21031 may include touch detecting apparatus With two parts of touch controller.Wherein, the touch orientation of touch detecting apparatus detection user, and detect touch operation bring Signal transmits a signal to touch controller;Touch controller receives touch information from touch detecting apparatus, and it is converted At contact coordinate, then the processor 2106 is given, and order that processor 2106 is sent can be received and executed.In addition, can To realize touch panel 21031 using multiple types such as resistance-type, condenser type, infrared ray and surface acoustic waves.In addition to touch surface Plate 21031, input unit 2103 can also include other input equipments 21032, other input equipments 21032 may include but not It is limited to one of physical keyboard, function key (such as volume control button, switch key etc.), trace ball, mouse, operating stick etc. Or it is a variety of.
Wherein, display unit 2104 can be used for showing information input by user or be supplied to the information and movement of user The various menu interfaces of terminal 2100.Display unit 2104 may include display panel 21041, optionally, using LCD or can have The forms such as machine light emitting diode (Organic Light-Emitting Diode, OLED) configure display panel 21041.
It should be noted that touch panel 21031 can cover display panel 21041, touch display screen is formed, when the touch is shown After screen detects touch operation on it or nearby, processor 2106 is sent to determine the type of touch event, is then located It manages device 2106 and provides corresponding visual output in touch display screen according to the type of touch event.
Touch display screen includes Application Program Interface viewing area and common control viewing area.The Application Program Interface viewing area And arrangement mode of the common control viewing area does not limit, can be arranged above and below, left-right situs etc. can distinguish two it is aobvious Show the arrangement mode in area.The Application Program Interface viewing area is displayed for the interface of application program.Each interface can be with The interface elements such as the icon comprising at least one application program and/or widget desktop control.The Application Program Interface viewing area Or the empty interface not comprising any content.This commonly uses control viewing area for showing the higher control of utilization rate, for example, Application icons such as button, interface number, scroll bar, phone directory icon etc. are set.
Wherein processor 2106 is the control centre of mobile terminal 2100, utilizes various interfaces and connection whole mobile phone Various pieces, by running or execute the software program and/or module that are stored in first memory 21021, and call The data being stored in second memory 21022 execute the various functions and processing data of mobile terminal 2100, thus to movement Terminal 2100 carries out integral monitoring.Optionally, processor 2106 may include one or more processing units.
In embodiments of the present invention, by calling the software program and/or module that store in the first memory 21021 And/or the data in second memory 21022, processor 2106 are used for: obtaining the colour imagery shot and described infrared respectively Thermal imaging camera is to same target reference object preview image collected and infrared thermal imaging image;Obtain the preview graph The face location information of the target reference object as described in;Extract target reference object described in the infrared thermal imaging image Human body contour outline information;According to the face location information and the human body contour outline information, mesh described in the preview image is determined Mark the object region where reference object;According to the object region, virtualization processing is carried out to the preview image.
Optionally, it software program in the first memory 21021 and/or module is stored and/or second deposits by calling Data in reservoir 21022, processor 2106 are also used to: obtaining the temperature range of human body;According to preset temperature range and puppet The corresponding relationship in color section determines false colour corresponding with the temperature range of human body section;By the infrared thermal imaging image Image of the middle false colour in the corresponding false colour section of temperature range of the human body is determined as the infrared of the target reference object Character image;Extract the human body contour outline information of the infrared character image.
Optionally, it software program in the first memory 21021 and/or module is stored and/or second deposits by calling Data in reservoir 21022, processor 2106 are also used to: being obtained between the preview image and the infrared thermal imaging image Coordinate amount of translation;According to the coordinate amount of translation, the human body contour outline information is subjected to coordinate conversion, obtains the preview image In transformed profile information;Judge the face location information whether in the range of the transformed profile information;If the people Face location information then identifies that transformed profile information described in the preview image is corresponding in the range of the transformed profile information Image-region;The target figure corresponding image-region of the transformed profile information being determined as where the target reference object As region.
Optionally, it software program in the first memory 21021 and/or module is stored and/or second deposits by calling Data in reservoir 21022, processor 2106 are also used to: if the preview image includes at least two object regions, Virtualization processing is carried out to the non-face image-region of at least two object region respectively;Wherein, described at least two The non-face image-region of object region is the institute at least two object region in addition to facial image region There is image-region.
Optionally, it software program in the first memory 21021 and/or module is stored and/or second deposits by calling Data in reservoir 21022, processor 2106 are also used to: the reference man in the facial image region of each object region is arranged Face region;It, respectively will be each non-according to each pixel in each non-face image-region at a distance from facial image region Facial image region division is at least two non-face image regions;According to each non-face image region and facial image The distance in reference man's face region in region, determines the virtualization grade of each non-face image region;By each target image Facial image region in region removes;According to the virtualization grade of each non-face image region, respectively to each non-face Image region carries out virtualization processing;By the facial image region of removal and virtualization, treated that non-face image region carries out Image synthesis.
Optionally, it software program in the first memory 21021 and/or module is stored and/or second deposits by calling Data in reservoir 21022, processor 2106 are also used to: being calculated separately every in the facial image region of each object region The depth of view information of a pixel;Calculate separately the average depth of field in each facial image region;By scape in each facial image region Depth is that the subregion of the average depth of field in facial image region is determined as reference man's face region;Wherein, each facial image region The average depth of field be facial image region at least two pixels the average depth of field.
Optionally, it software program in the first memory 21021 and/or module is stored and/or second deposits by calling Data in reservoir 21022, processor 2106 are used for: the reference picture subregion of the object region is arranged;According to non- Each pixel determines the virtualization grade of each pixel at a distance from the reference picture subregion in object region; The object region in the preview image is removed;By the virtualization etc. of each pixel in non-object image region Grade, carries out virtualization processing to each pixel;By the object region of removal and virtualization treated non-object image Region carries out image synthesis;Wherein, the non-object image region be the preview image in except the object region it Outer all image-regions.
Optionally, it software program in the first memory 21021 and/or module is stored and/or second deposits by calling Data in reservoir 21022, processor 2106 are used for: calculating separately the depth of field letter of each pixel in the object region Breath;Calculate the average depth of field of at least two pixels in the object region;It is by the depth of field in the object region The subregion of the average depth of field is determined as reference picture subregion.
Optionally, it software program in the first memory 21021 and/or module is stored and/or second deposits by calling Data in reservoir 21022, processor 2106 are used for: carrying out facial image identification to the preview image;Acquisition knows others The face location information of face image.
The mobile terminal of the embodiment of the present invention includes a colour camera and an infrared thermal imaging camera: the preview image and the infrared thermal imaging image collected by the colour camera and the infrared thermal imaging camera respectively for the same target reference object are obtained; the face location information of the target reference object in the preview image is obtained; the human body contour outline information of the target reference object is extracted from the infrared thermal imaging image; the object region where the target reference object is located in the preview image is determined according to the face location information and the human body contour outline information; and virtualization processing is carried out on the preview image according to the object region. Since the infrared thermal imaging camera can detect the temperature radiated by objects, the mobile terminal can still determine the human body contour outline information of the target reference object through the infrared thermal imaging camera even in a complicated shooting scene or under weak light; the mobile terminal is therefore not disturbed by the environment, can accurately distinguish the target person image from the background image, and carries out virtualization processing on the preview image according to the object region where the target reference object is located, thereby obtaining a better virtualization shooting effect.
Those of ordinary skill in the art may be aware that list described in conjunction with the examples disclosed in the embodiments of the present disclosure Member and algorithm steps can be realized with the combination of electronic hardware or computer software and electronic hardware.These functions are actually It is implemented in hardware or software, the specific application and design constraint depending on technical solution.Professional technician Each specific application can be used different methods to achieve the described function, but this realization is it is not considered that exceed The scope of the present invention.
It is apparent to those skilled in the art that for convenience and simplicity of description, the system of foregoing description, The specific work process of device and unit, can refer to corresponding processes in the foregoing method embodiment, and details are not described herein.
In embodiment provided herein, it should be understood that disclosed device and method can pass through others Mode is realized.For example, the apparatus embodiments described above are merely exemplary, for example, the division of the unit, only A kind of logical function partition, there may be another division manner in actual implementation, for example, multiple units or components can combine or Person is desirably integrated into another system, or some features can be ignored or not executed.Another point, shown or discussed is mutual Between coupling, direct-coupling or communication connection can be through some interfaces, the INDIRECT COUPLING or communication link of device or unit It connects, can be electrical property, mechanical or other forms.
The unit as illustrated by the separation member may or may not be physically separated, aobvious as unit The component shown may or may not be physical unit, it can and it is in one place, or may be distributed over multiple In network unit.It can select some or all of unit therein according to the actual needs to realize the mesh of this embodiment scheme 's.
It, can also be in addition, the functional units in various embodiments of the present invention may be integrated into one processing unit It is that each unit physically exists alone, can also be integrated in one unit with two or more units.
It, can be with if the function is realized in the form of SFU software functional unit and when sold or used as an independent product It is stored in a computer readable storage medium.Based on this understanding, technical solution of the present invention is substantially in other words The part of the part that contributes to existing technology or the technical solution can be embodied in the form of software products, the meter Calculation machine software product is stored in a storage medium, including some instructions are used so that a computer equipment (can be a People's computer, server or network equipment etc.) it performs all or part of the steps of the method described in the various embodiments of the present invention. And storage medium above-mentioned includes: that USB flash disk, mobile hard disk, ROM, RAM, magnetic or disk etc. are various can store program code Medium.
The above description is merely a specific embodiment, but scope of protection of the present invention is not limited thereto, any Those familiar with the art in the technical scope disclosed by the present invention, can easily think of the change or the replacement, and should all contain Lid is within protection scope of the present invention.Therefore, protection scope of the present invention should be subject to the protection scope in claims.

Claims (16)

1. a kind of image processing method, be applied to mobile terminal, which is characterized in that the mobile terminal include colour imagery shot and Infrared thermal imaging camera, which comprises
It is collected pre- to same target reference object that the colour imagery shot and the infrared thermal imaging camera are obtained respectively Look at image and infrared thermal imaging image;
Obtain the face location information of target reference object described in the preview image;
Extract the human body contour outline information of target reference object described in the infrared thermal imaging image;
According to the face location information and the human body contour outline information, target reference object described in the preview image is determined The object region at place;
According to the object region, virtualization processing is carried out to the preview image;
It is described according to the object region, the step of virtualization processing is carried out to the preview image, comprising:
The reference picture subregion of the object region is set;
According to pixel each in non-object image region at a distance from the reference picture subregion, each pixel is determined Blur grade;
The object region in the preview image is removed;
By the virtualization grade of each pixel in non-object image region, virtualization processing is carried out to each pixel;
The object region of removal is carried out image with virtualization treated non-object image region to synthesize;
Wherein, the non-object image region is all image districts in the preview image in addition to the object region Domain;
The step of reference picture subregion of the setting object region, comprising:
Calculate separately the depth of view information of each pixel in the object region;
Calculate the average depth of field of at least two pixels in the object region;
The subregion that the depth of field in the object region is the average depth of field is determined as reference picture subregion.
2. the method according to claim 1, wherein described extract target described in the infrared thermal imaging image The step of human body contour outline information of reference object, comprising:
Obtain the temperature range of human body;
According to the corresponding relationship of preset temperature range and false colour section, false colour corresponding with the temperature range of the human body is determined Section;
Image of the false colour in the infrared thermal imaging image in the corresponding false colour section of temperature range of the human body is determined For the infrared character image of the target reference object;
Extract the human body contour outline information of the infrared character image.
3. the method according to claim 1, wherein described according to the face location information and the human body wheel Wide information, the step of determining the object region where target reference object described in the preview image, comprising:
Obtain the coordinate amount of translation between the preview image and the infrared thermal imaging image;
According to the coordinate amount of translation, the human body contour outline information is subjected to coordinate conversion, obtains turning in the preview image Change profile information;
Judge the face location information whether in the range of the transformed profile information;
If the face location information identifies conversion described in the preview image in the range of transformed profile information The corresponding image-region of profile information;
The object region corresponding image-region of the transformed profile information being determined as where the target reference object.
4. according to the method described in claim 3, it is characterized in that, described according to the face location information and the human body wheel Wide information, after the step of determining the object region where target reference object described in the preview image, the side Method further include:
If the preview image includes at least two object regions, respectively at least two object region Non-face image-region carries out virtualization processing;
Wherein, the non-face image-region of at least two object region is at least two object region All image-regions in addition to facial image region.
5. according to the method described in claim 4, it is characterized in that, described respectively at least two object region Non-face image-region carries out the step of virtualization processing, comprising:
Reference man's face region in the facial image region of each object region is set;
According to each pixel in each non-face image-region at a distance from facial image region, respectively by each non-face figure As region division is at least two non-face image regions;
According to each non-face image region at a distance from reference man's face region in facial image region, determine each inhuman The virtualization grade of face image subregion;
Facial image region in each object region is removed;
According to the virtualization grade of each non-face image region, each non-face image region is carried out at virtualization respectively Reason;
By the facial image region of removal and virtualization, treated that non-face image region carries out image synthesizes.
6. according to the method described in claim 5, it is characterized in that, the facial image area of each object region of setting The step of reference man face region in domain, comprising:
Calculate separately the depth of view information of each pixel in the facial image region of each object region;
Calculate separately the average depth of field in each facial image region;
The subregion for the average depth of field that the depth of field in each facial image region is facial image region is determined as reference man's face Region;
Wherein, the average depth of field in each facial image region is the average depth of field of at least two pixels in facial image region.
7. The method according to any one of claims 1 to 6, which is characterized in that the step of obtaining the face location information of the target reference object in the preview image comprises:
Facial image identification is carried out to the preview image;
Obtain the face location information of the facial image of identification.
8. a kind of mobile terminal, which is characterized in that the mobile terminal includes colour imagery shot and infrared thermal imaging camera, institute State mobile terminal further include:
First obtains module, obtains the colour imagery shot and the infrared thermal imaging camera respectively to the shooting pair of same target As preview image collected and infrared thermal imaging image;
Second obtains module, for obtaining the face location information of target reference object described in the preview image;
Extraction module, for extracting the human body contour outline information of target reference object described in the infrared thermal imaging image;
Determining module, for determining institute in the preview image according to the face location information and the human body contour outline information State the object region where target reference object;
First virtualization processing module, for carrying out virtualization processing to the preview image according to the object region;
The first virtualization processing module, comprising:
Second setting unit, for the reference picture subregion of the object region to be arranged;
5th determination unit, for according to pixel each in non-object image region and the reference picture subregion away from From determining the virtualization grade of each pixel;
Second removes unit, for removing the object region in the preview image;
Second virtualization processing unit, for the virtualization grade by each pixel in non-object image region, to each pixel Carry out virtualization processing;
Second image composing unit, the object region for that will remove and virtualization treated non-object image region Carry out image synthesis;
Wherein, the non-object image region is all image districts in the preview image in addition to the object region Domain;
Second setting unit includes:
Third computation subunit, for calculating separately the depth of view information of each pixel in the object region;
4th computation subunit, for calculating the average depth of field of at least two pixels in the object region;
Second determines subelement, for the subregion that the depth of field in the object region is the average depth of field to be determined as joining Examine image region.
9. mobile terminal according to claim 8, which is characterized in that the extraction module includes:
First acquisition unit, for obtaining the temperature range of human body;
First determination unit, according to the corresponding relationship of preset temperature range and false colour section, the determining temperature with the human body The corresponding false colour section of range;
Second determination unit, for by false colour in the infrared thermal imaging image the human body the corresponding false colour of temperature range Image in section is determined as the infrared character image of the target reference object;
Extraction unit, for extracting the human body contour outline information of the infrared character image.
10. mobile terminal according to claim 8, which is characterized in that the determining module includes:
Second acquisition unit, for obtaining the coordinate amount of translation between the preview image and the infrared thermal imaging image;
Converting unit, for the human body contour outline information being carried out coordinate conversion, is obtained described pre- according to the coordinate amount of translation The transformed profile information look in image;
Judging unit, for judging the face location information whether in the range of the transformed profile information;
First recognition unit, if for the face location information in the range of transformed profile information, described in identification The corresponding image-region of transformed profile information described in preview image;
Third determination unit, for the corresponding image-region of the transformed profile information to be determined as the target reference object institute Object region.
11. mobile terminal according to claim 10, which is characterized in that the mobile terminal further include:
Second virtualization processing module, if including at least two object regions for the preview image, respectively to described The non-face image-region of at least two object regions carries out virtualization processing;
Wherein, the non-face image-region of at least two object region is at least two object region All image-regions in addition to facial image region.
12. The mobile terminal according to claim 11, characterized in that the second virtualization processing module comprises:
a first setting unit, configured to set a reference face region in the face image region of each object image region;
a division unit, configured to divide each non-face image region into at least two non-face image sub-regions according to the distance of each pixel in the non-face image region from the face image region;
a fourth determination unit, configured to determine the virtualization grade of each non-face image sub-region according to the distance between the non-face image sub-region and the reference face region in the face image region;
a first removal unit, configured to remove the face image region in each object image region;
a first virtualization processing unit, configured to separately perform virtualization processing on each non-face image sub-region according to its virtualization grade;
a first image synthesis unit, configured to perform image synthesis on the removed face image region and the non-face image sub-regions after virtualization processing.
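Claims 11 and 12 amount to distance-graded blurring around a protected face. Below is a hedged sketch that assumes a binary face_mask, Gaussian blur as the virtualization operation, and three hand-picked distance bands as the virtualization grades; none of these specifics appear in the claims.

```python
import cv2
import numpy as np

def blur_by_distance(image, face_mask, grades=((50, 5), (150, 11), (np.inf, 21))):
    """Blur pixels outside the face more strongly the farther they are from it,
    then paste the untouched face image region back on top."""
    # face_mask: non-zero where the face image region is.
    # Distance of every pixel to the nearest face pixel (zero inside the face).
    dist = cv2.distanceTransform((face_mask == 0).astype(np.uint8), cv2.DIST_L2, 5)
    out = image.copy()
    prev = 0.0
    for limit, ksize in grades:  # each distance band gets its own blur strength
        band = (dist > prev) & (dist <= limit) & (face_mask == 0)
        blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)
        out[band] = blurred[band]
        prev = limit
    out[face_mask > 0] = image[face_mask > 0]  # paste the face region back (synthesis step)
    return out
```

Measuring the distance from the whole face mask rather than from a reference face region inside it is a further simplification of the fourth determination unit's behaviour.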
13. The mobile terminal according to claim 12, characterized in that the first setting unit comprises:
a first computation subunit, configured to separately calculate the depth-of-field information of each pixel in the face image region of each object image region;
a second computation subunit, configured to separately calculate the average depth of field of each face image region;
a first determination subunit, configured to determine the sub-region of each face image region whose depth of field equals the average depth of field of that face image region as the reference face region;
wherein the average depth of field of a face image region is the average depth of field of at least two pixels in that face image region.
14. The mobile terminal according to any one of claims 8 to 13, characterized in that the second acquisition module comprises:
a second recognition unit, configured to perform face image recognition on the preview image;
a third acquisition unit, configured to obtain the face location information of the recognized face image.
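For the face recognition step of claim 14, any off-the-shelf detector can supply the face location information; the sketch below uses the Haar cascade bundled with OpenCV purely as an example, which the patent does not prescribe.

```python
import cv2

def detect_face_locations(preview_bgr):
    """Run a stock face detector on the preview frame and return bounding boxes."""
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    # Haar cascade shipped with opencv-python; availability of this file is assumed.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(map(int, box)) for box in faces]  # each box is (x, y, w, h)
```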
15. A mobile terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the image processing method according to any one of claims 1 to 7.
16. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 7.
CN201710576627.5A 2017-07-14 2017-07-14 A kind of image processing method and mobile terminal Active CN107395965B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710576627.5A CN107395965B (en) 2017-07-14 2017-07-14 A kind of image processing method and mobile terminal

Publications (2)

Publication Number Publication Date
CN107395965A (en) 2017-11-24
CN107395965B (en) 2019-11-29

Family

ID=60340297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710576627.5A Active CN107395965B (en) 2017-07-14 2017-07-14 A kind of image processing method and mobile terminal

Country Status (1)

Country Link
CN (1) CN107395965B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108053363A (en) * 2017-11-30 2018-05-18 广东欧珀移动通信有限公司 Background blurring processing method, device and equipment
CN107948517B (en) * 2017-11-30 2020-05-15 Oppo广东移动通信有限公司 Preview picture blurring processing method, device and equipment
CN107995425B (en) * 2017-12-11 2019-08-20 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN108093181B (en) * 2018-01-16 2021-03-30 奇酷互联网络科技(深圳)有限公司 Picture shooting method and device, readable storage medium and mobile terminal
CN108769505A (en) * 2018-03-30 2018-11-06 联想(北京)有限公司 A kind of image procossing set method and electronic equipment
CN108848300A (en) * 2018-05-08 2018-11-20 百度在线网络技术(北京)有限公司 Method and apparatus for output information
CN109241947A (en) * 2018-10-15 2019-01-18 盎锐(上海)信息科技有限公司 Information processing unit and method for the monitoring of stream of people's momentum
CN110047126B (en) * 2019-04-25 2023-11-24 北京字节跳动网络技术有限公司 Method, apparatus, electronic device, and computer-readable storage medium for rendering image
CN111614932A (en) * 2019-05-14 2020-09-01 北京精准沟通传媒科技股份有限公司 Data processing method and device
CN112351268B (en) * 2019-08-07 2022-09-02 杭州海康微影传感科技有限公司 Thermal imaging camera burn detection method and device and electronic equipment
CN110996078A (en) * 2019-11-25 2020-04-10 深圳市创凯智能股份有限公司 Image acquisition method, terminal and readable storage medium
CN110855868A (en) * 2019-11-25 2020-02-28 李峥炜 Human image enhanced camera
CN113014791B (en) * 2019-12-20 2023-09-19 中兴通讯股份有限公司 Image generation method and device
CN113138387B (en) * 2020-01-17 2024-03-08 北京小米移动软件有限公司 Image acquisition method and device, mobile terminal and storage medium
CN114088207A (en) * 2020-07-17 2022-02-25 北京京东尚科信息技术有限公司 Temperature detection method and system
CN112217992A (en) * 2020-09-29 2021-01-12 Oppo(重庆)智能科技有限公司 Image blurring method, image blurring device, mobile terminal, and storage medium
CN112907890B (en) * 2020-12-08 2021-11-09 温岭市山市金德利电器配件厂 Automatic change protection platform
CN112949568A (en) * 2021-03-25 2021-06-11 深圳市商汤科技有限公司 Method and device for matching human face and human body, electronic equipment and storage medium
CN113301233B (en) * 2021-05-21 2023-02-03 南阳格瑞光电科技股份有限公司 Double-spectrum imaging system
CN113965695B (en) * 2021-09-07 2024-06-21 福建库克智能科技有限公司 Image display method, system, device, display unit and medium
CN116469519B (en) * 2023-03-23 2024-01-26 北京鹰之眼智能健康科技有限公司 Human body acupoint obtaining method based on infrared image

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751405A (en) * 2015-03-11 2015-07-01 百度在线网络技术(北京)有限公司 Method and device for blurring image
CN104780313A (en) * 2015-03-26 2015-07-15 广东欧珀移动通信有限公司 Image processing method and mobile terminal
CN104966266A (en) * 2015-06-04 2015-10-07 福建天晴数码有限公司 Method and system to automatically blur body part
CN105516586A (en) * 2015-12-01 2016-04-20 小米科技有限责任公司 Picture shooting method, device and system
CN105678310A (en) * 2016-02-03 2016-06-15 北京京东方多媒体科技有限公司 Infrared thermal image contour extraction method and device
CN105933589A (en) * 2016-06-28 2016-09-07 广东欧珀移动通信有限公司 Image processing method and terminal
CN106101544A (en) * 2016-06-30 2016-11-09 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN106331492A (en) * 2016-08-29 2017-01-11 广东欧珀移动通信有限公司 Image processing method and terminal
CN106446873A (en) * 2016-11-03 2017-02-22 北京旷视科技有限公司 Face detection method and device
CN106454118A (en) * 2016-11-18 2017-02-22 上海传英信息技术有限公司 Picture blurring method and mobile terminal

Also Published As

Publication number Publication date
CN107395965A (en) 2017-11-24

Similar Documents

Publication Publication Date Title
CN107395965B (en) A kind of image processing method and mobile terminal
CN105847674B (en) A kind of preview image processing method and mobile terminal based on mobile terminal
CN105933607B (en) A kind of take pictures effect method of adjustment and the mobile terminal of mobile terminal
CN107507239B (en) A kind of image partition method and mobile terminal
CN106254682B (en) A kind of photographic method and mobile terminal
CN105933589B (en) A kind of image processing method and terminal
CN106060419B (en) A kind of photographic method and mobile terminal
CN105827965B (en) A kind of image processing method and mobile terminal based on mobile terminal
CN107423699B (en) Biopsy method and Related product
CN107181913B (en) A kind of photographic method and mobile terminal
CN107679482A (en) Solve lock control method and Related product
CN107527034B (en) A kind of face contour method of adjustment and mobile terminal
CN104574397B (en) The method and mobile terminal of a kind of image procossing
CN110300264B (en) Image processing method, image processing device, mobile terminal and storage medium
CN106603928B (en) A kind of image pickup method and mobile terminal
WO2019001152A1 (en) Photographing method and mobile terminal
CN107832675A (en) Processing method of taking pictures and Related product
CN106027952B (en) Camera chain is estimated relative to the automatic direction of vehicle
CN106027900A (en) Photographing method and mobile terminal
CN109117725A (en) Face identification method and device
CN106161932B (en) A kind of photographic method and mobile terminal
CN106096043B (en) A kind of photographic method and mobile terminal
CN106506962A (en) A kind of image processing method and mobile terminal
CN107465874B (en) A kind of dark current processing method and mobile terminal
CN110113515A (en) Camera control method and Related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant