CN106960455A - Directional sound transmission method and terminal - Google Patents

Directional sound transmission method and terminal

Info

Publication number
CN106960455A
Authority
CN
China
Prior art keywords
camera device
user
parameter
coordinate
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710161239.0A
Other languages
Chinese (zh)
Inventor
姜瑜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Original Assignee
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yulong Computer Telecommunication Scientific Shenzhen Co Ltd filed Critical Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority to CN201710161239.0A
Publication of CN106960455A
Legal status: Pending (current)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2250/00 Details of telephonic subscriber devices
    • H04M2250/12 Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Studio Devices (AREA)
  • Telephone Function (AREA)

Abstract

The invention provides a directional sound transmission method and terminal. The method includes: obtaining the pixel coordinates of a feature point of the user's face through the camera device of a mobile terminal; obtaining the world coordinates corresponding to the pixel coordinates of the facial feature point according to the intrinsic parameters and the extrinsic parameters of the camera device, where the intrinsic parameters include the optical and geometric characteristic parameters of the camera device and the extrinsic parameters are obtained using a camera calibration method; and controlling the mobile terminal to transmit sound toward the facial feature point according to the world coordinates. The invention solves the problem in the prior art that the relative distance and orientation between the mobile terminal and the user's head cannot be obtained accurately, which leads to poor precision of directional sound transmission, and thereby improves the accuracy of directional sound transmission.

Description

Directional sound transmission method and terminal
Technical field
The present invention relates to the field of communication technology, and in particular to a directional sound transmission method and terminal.
Background art
Research on the principles of directional audio began more than half a century ago, and many results have been achieved in both theory and practice. In recent years, with the progress of society, portable mobile devices represented by the mobile phone have spread into more and more corners of daily life. People often use audio equipment when working, doing business, or relaxing with a mobile device in various settings. However, traditional audio equipment cannot guarantee privacy, and its use in public often creates noise. Earphones can solve the problem to a certain extent, but long-term earphone use is harmful to the ear and can cause permanent hearing damage. Under these circumstances, research on miniature directional audio based on directional audio technology has naturally become a focus of attention in the related field.
At present, miniature directional audio systems for mobile devices are still at the experimental stage. Based on existing research results on directional audio, the related field has proposed some directional sound transmission schemes for mobile devices, but in the prior art the positioning accuracy between the user's head and the mobile device is not high.
Summary of the invention
In view of this, the embodiments of the present invention provide a directional sound transmission method and terminal, to solve the problem in the prior art that the relative distance and orientation between the mobile terminal and the user's head cannot be obtained accurately, which leads to poor precision of directional sound transmission.
To this end, the embodiments of the present invention provide the following technical solutions:
A first aspect of the present invention provides a directional sound transmission method, including: obtaining the pixel coordinates of a feature point of the user's face through the camera device of a mobile terminal; obtaining the world coordinates corresponding to the pixel coordinates of the facial feature point according to the intrinsic parameters of the camera device and the extrinsic parameters of the camera device, where the intrinsic parameters of the camera device include the optical and geometric characteristic parameters of the camera device, and the extrinsic parameters of the camera device are obtained using a camera calibration method; and controlling the mobile terminal to transmit sound toward the facial feature point according to the world coordinates.
With reference to the first aspect of the present invention, in a first embodiment of the first aspect, obtaining the world coordinates corresponding to the pixel coordinates of the facial feature point according to the intrinsic and extrinsic parameters of the camera device includes: obtaining the camera coordinates corresponding to the pixel coordinates according to the intrinsic parameters of the camera device; and obtaining the world coordinates corresponding to the camera coordinates according to the extrinsic parameters of the camera device.
With reference to the first embodiment of the first aspect, in a second embodiment of the first aspect, the intrinsic parameter matrix corresponding to the intrinsic parameters of the camera device is expressed as

$$M_{in} = \begin{bmatrix} k_x & 0 & u_0 & 0 \\ 0 & k_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix},$$

where k_x denotes the X-axis amplification coefficient, k_y denotes the Y-axis amplification coefficient, and (u_0, v_0) denotes the coordinates, in the pixel coordinate system, of the origin of the image coordinate system corresponding to the facial feature point; the extrinsic parameter matrix corresponding to the extrinsic parameters of the camera device is expressed as

$${}^{C}M_{W} = \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix},$$

where the component vectors (n_x, n_y, n_z)^T, (o_x, o_y, o_z)^T and (a_x, a_y, a_z)^T respectively represent the direction vectors of the coordinate axes of the world coordinate system in the camera coordinate system, and (p_x, p_y, p_z)^T is the translation vector, representing the position of the origin of the world coordinate system in the camera coordinate system.
With reference to the second embodiment of the first aspect, in a third embodiment of the first aspect, the camera coordinates corresponding to the pixel coordinates are obtained from the intrinsic parameters of the camera device by the following formula:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_{in} \begin{bmatrix} x_C / z_C \\ y_C / z_C \\ 1 \\ 1 / z_C \end{bmatrix};$$

and the world coordinates corresponding to the camera coordinates are obtained from the extrinsic parameters of the camera device by the following formula:

$$\begin{bmatrix} x_W \\ y_W \\ z_W \\ 1 \end{bmatrix} = \left({}^{C}M_{W}\right)^{-1} \begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix};$$

where the world coordinate system is (x_W, y_W, z_W), the camera coordinate system is (x_C, y_C, z_C), the image coordinate system is (x, y), and the pixel coordinate system is (u, v).
With reference to the first aspect, the first embodiment of the first aspect, the second embodiment of the first aspect or the third embodiment of the first aspect, in a fourth embodiment of the first aspect, after controlling the mobile terminal to transmit sound toward the facial feature point according to the world coordinates, the method further includes: obtaining, through a sensor of the mobile terminal, a change parameter of the facial feature point relative to the mobile terminal; and obtaining, in real time according to the change parameter, the world coordinates corresponding to the pixel coordinates of the facial feature point.
A second aspect of the present invention provides a directional sound transmission terminal, including: a first acquisition module, configured to obtain the pixel coordinates of a feature point of the user's face through the camera device of a mobile terminal; a second acquisition module, configured to obtain the world coordinates corresponding to the pixel coordinates of the facial feature point according to the intrinsic parameters of the camera device and the extrinsic parameters of the camera device, where the intrinsic parameters of the camera device include the optical and geometric characteristic parameters of the camera device, and the extrinsic parameters of the camera device are obtained using a camera calibration method; and a transmission module, configured to control the mobile terminal to transmit sound toward the facial feature point according to the world coordinates.
With reference to the second aspect of the present invention, in a first embodiment of the second aspect, the second acquisition module includes: a first acquisition unit, configured to obtain the camera coordinates corresponding to the pixel coordinates according to the intrinsic parameters of the camera device; and a second acquisition unit, configured to obtain the world coordinates corresponding to the camera coordinates according to the extrinsic parameters of the camera device.
With reference to the first embodiment of the second aspect, in a second embodiment of the second aspect, the intrinsic parameter matrix corresponding to the intrinsic parameters of the camera device is expressed as

$$M_{in} = \begin{bmatrix} k_x & 0 & u_0 & 0 \\ 0 & k_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix},$$

where k_x denotes the X-axis amplification coefficient, k_y denotes the Y-axis amplification coefficient, and (u_0, v_0) denotes the coordinates, in the pixel coordinate system, of the origin of the image coordinate system corresponding to the facial feature point; the extrinsic parameter matrix corresponding to the extrinsic parameters of the camera device is expressed as

$${}^{C}M_{W} = \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix},$$

where the component vectors (n_x, n_y, n_z)^T, (o_x, o_y, o_z)^T and (a_x, a_y, a_z)^T respectively represent the direction vectors of the coordinate axes of the world coordinate system in the camera coordinate system, and (p_x, p_y, p_z)^T is the translation vector, representing the position of the origin of the world coordinate system in the camera coordinate system.
With reference to the second embodiment of the second aspect, in a third embodiment of the second aspect, the first acquisition unit is further configured to obtain the camera coordinates corresponding to the pixel coordinates from the intrinsic parameters of the camera device by the following formula:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_{in} \begin{bmatrix} x_C / z_C \\ y_C / z_C \\ 1 \\ 1 / z_C \end{bmatrix};$$

and the second acquisition unit is further configured to obtain the world coordinates corresponding to the camera coordinates from the extrinsic parameters of the camera device by the following formula:

$$\begin{bmatrix} x_W \\ y_W \\ z_W \\ 1 \end{bmatrix} = \left({}^{C}M_{W}\right)^{-1} \begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix};$$

where the world coordinate system is (x_W, y_W, z_W), the camera coordinate system is (x_C, y_C, z_C), the image coordinate system is (x, y), and the pixel coordinate system is (u, v).
With reference to the second aspect, the first embodiment of the second aspect, the second embodiment of the second aspect or the third embodiment of the second aspect, in a fourth embodiment of the second aspect, the terminal further includes: a third acquisition module, configured to obtain, through a sensor of the mobile terminal, a change parameter of the facial feature point relative to the mobile terminal after the mobile terminal is controlled to transmit sound toward the facial feature point according to the world coordinates; and a fourth acquisition module, configured to obtain, in real time according to the change parameter, the world coordinates corresponding to the pixel coordinates of the facial feature point.
A third aspect of the present invention provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor performs the following steps: obtaining the pixel coordinates of a feature point of the user's face through the camera device of a mobile terminal; obtaining the world coordinates corresponding to the pixel coordinates of the facial feature point according to the intrinsic parameters of the camera device and the extrinsic parameters of the camera device, where the intrinsic parameters of the camera device include the optical and geometric characteristic parameters of the camera device, and the extrinsic parameters of the camera device are obtained using a camera calibration method; and controlling the mobile terminal to transmit sound toward the facial feature point according to the world coordinates.
The technical solutions of the embodiments of the present invention have the following advantages:
The embodiments of the present invention provide a directional sound transmission method and terminal. The pixel coordinates of a feature point of the user's face are obtained through the camera device of a mobile terminal; the world coordinates corresponding to those pixel coordinates are obtained according to the intrinsic and extrinsic parameters of the camera device, where the intrinsic parameters include the optical and geometric characteristic parameters of the camera device, such as the focal length and the image center, and the extrinsic parameters are obtained using a camera calibration method, which may for example be a calibration method based on a two-dimensional target. The world coordinates allow the distance and orientation between the mobile terminal and the user's head to be determined more accurately, and the mobile terminal is controlled to transmit sound toward the facial feature point according to the world coordinates. The present invention solves the problem in the prior art that the relative distance and orientation between the mobile terminal and the user's head cannot be obtained accurately, which leads to poor precision of directional sound transmission, and thereby improves the accuracy of directional sound transmission.
Brief description of the drawings
In order to explain the specific embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed for describing the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 shows a structural block diagram of a mobile phone in an embodiment of the present invention;
Fig. 2 is a flowchart of a directional sound transmission method according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of adaptive directional sound transmission based on image ranging according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the pinhole camera model according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the relation between the image coordinate system and the pixel coordinate system according to an embodiment of the present invention;
Fig. 6 is a structural block diagram of a directional sound transmission terminal according to an embodiment of the present invention;
Fig. 7 is a structural block diagram of the second acquisition module according to an embodiment of the present invention;
Fig. 8 is another structural block diagram of a directional sound transmission terminal according to an embodiment of the present invention.
Embodiment
To make the purpose, technical scheme and advantage of the embodiment of the present invention clearer, below in conjunction with the embodiment of the present invention In accompanying drawing, the technical scheme in the embodiment of the present invention is clearly and completely described, it is clear that described embodiment is A part of embodiment of the present invention, rather than whole embodiments.Based on the embodiment in the present invention, those skilled in the art are not having There is the every other embodiment made and obtained under the premise of creative work, belong to the scope of protection of the invention.
As shown in figure 1, being the application scenarios schematic diagram of embodiments of the invention.Mobile terminal can be mobile phone or flat board electricity The mobile devices such as brain, mobile terminal is by taking mobile phone as an example, and the part-structure block diagram of mobile phone is as shown in figure 1, mobile phone includes radio circuit 210th, memory 220, input block 230, display unit 240, sensor 250, voicefrequency circuit 260, wireless module 270, processing Device 280 and the grade of power supply 290 part.It will be understood by those skilled in the art that the handset structure shown in Fig. 1 does not constitute opponent The restriction of machine, can be included than illustrating more or less parts, either combine some parts or different parts arrangement.
Wherein RF circuits 210 be used for receive and send messages or communication process in, the reception and transmission of signal.Memory 220 is used for Software program and module are stored, processor 280 is stored in the software program and module of memory 220 by operation, so that Perform various function application and the data processing of mobile phone.Input block 230 is used for the numeral or character information for receiving input, with And the key signals that generation is set with the user of mobile phone and function control is relevant are inputted.Input block 230 may include contact panel 231 and other input equipments 232.Other input equipments 232 can include but is not limited to physical keyboard, function key, mouse, behaviour Make the one or more in bar.Display unit 240 be used for show by user input information or be supplied to user information and The various menus of mobile phone.Display unit 240 can include display panel 241.Contact panel 231 can cover display panel 241, when Contact panel 231 is detected after the touch operation on or near it, sends processor 280 to determine the class of touch event Type, corresponding visual output is provided with preprocessor 280 according to the type of touch event on display panel 241.
Mobile phone may also include at least one sensor 250, such as optical sensor, motion sensor and other sensors.Light Sensor may include ambient light sensor and proximity transducer, and environmental sensor can adjust display according to the light and shade of ambient light The brightness of panel 241, proximity transducer can close display panel 241 and/or backlight when mobile phone is moved in one's ear.This implementation Optical sensor can be arranged on the housing of the front and back of mobile phone in example, blocked area during for detecting that user holds mobile phone Domain.Pressure sensor can also be included herein, be arranged in the front of mobile phone or back housing, for the side by detecting pressure Formula obtains occlusion area during user's grip mobile phone.In addition, mobile phone can also configure gyroscope, barometer, hygrometer, temperature The other sensors such as meter, infrared ray sensor, are repeated no more.
Voicefrequency circuit 260, loudspeaker 261, microphone 262 can provide the COBBAIF between user and mobile phone.Wireless mould Block 270 can be WIFI module, provide the user wireless the Internet access service.
Processor 280 is the control centre of mobile phone, using various interfaces and the various pieces of connection whole mobile phone, is led to Cross operation or perform and be stored in software program and/or module in memory 220, and call and be stored in memory 220 Data, perform the various functions and processing data of mobile phone, so as to carry out integral monitoring to mobile phone.Optionally, processor 280 can be with Including one or more processing units.In addition, mobile phone also includes the power supply 290 powered of each part, by power-supply management system with Processor 280 is logically contiguous, so as to realize the functions such as management charging, electric discharge and power managed by power-supply management system.
Although not shown, mobile phone can also include camera, bluetooth module etc., will not be repeated here.
A directional sound transmission method is provided in this embodiment, which can be used in the above mobile terminal, such as a mobile phone or a tablet computer. Fig. 2 is a flowchart of a directional sound transmission method according to an embodiment of the present invention. As shown in Fig. 2, the flow includes the following steps:
Step S201: obtain the pixel coordinates of a feature point of the user's face through the camera device of the mobile terminal.
Step S202: obtain the world coordinates corresponding to the pixel coordinates of the facial feature point according to the intrinsic and extrinsic parameters of the camera device. The intrinsic parameters of the camera device include the optical and geometric characteristic parameters of the camera device, such as the focal length and the image center; the extrinsic parameters of the camera device are obtained using a camera calibration method, which may be a calibration method based on a two-dimensional target. The distance and orientation between the mobile terminal and the user's head can then be obtained from the world coordinates, and the mobile terminal is controlled to transmit sound according to this distance and orientation.
Step S203: control the mobile terminal to transmit sound toward the facial feature point according to the world coordinates. With the gradual maturing of ultrasonic directional sound transmission technology and the trend of the related devices toward miniaturization and lower cost, the directional loudspeaker module of the mobile terminal can use an ultrasonic directional sound generator as the directional loudspeaker, used together with a conventional loudspeaker, to finally achieve directional sound transmission.
Through the above steps, on the basis of the obtained pixel coordinates of the facial feature point, the world coordinates corresponding to those pixel coordinates are obtained according to the intrinsic and extrinsic parameters of the camera device. The obtained world coordinates allow the distance and orientation between the mobile terminal and the user's head to be determined more accurately, so the mobile terminal can be controlled to transmit sound toward that position, which ensures the accuracy of directional sound transmission by the mobile terminal.
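Purely as an illustrative sketch of how steps S201 to S203 fit together (this is not part of the patent disclosure; the function names and the injected callables are hypothetical placeholders):

```python
def directional_sound_pipeline(capture_frame, detect_feature, pixel_to_world,
                               steer_speaker, M_in, C_M_W):
    """Hypothetical glue code for steps S201-S203; every callable is injected."""
    # Step S201: capture an image and locate a facial feature point (u, v);
    # z_c is its depth, assumed to come from monocular image-sequence ranging.
    frame = capture_frame()
    u, v, z_c = detect_feature(frame)

    # Step S202: map the pixel coordinates to world coordinates using the
    # intrinsic matrix M_in and the extrinsic matrix C_M_W.
    world = pixel_to_world(u, v, z_c, M_in, C_M_W)

    # Step S203: steer the (e.g. ultrasonic) directional loudspeaker toward the point.
    steer_speaker(world)
    return world
```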
The above step S202 involves obtaining the world coordinates corresponding to the pixel coordinates of the facial feature point according to the intrinsic and extrinsic parameters of the camera device. In an alternative embodiment, the camera coordinates corresponding to the pixel coordinates are obtained according to the intrinsic parameters of the camera device, and the world coordinates corresponding to the camera coordinates are obtained according to the extrinsic parameters of the camera device.
In an alternative embodiment, the intrinsic parameter matrix corresponding to the intrinsic parameters of the camera device is expressed as

$$M_{in} = \begin{bmatrix} k_x & 0 & u_0 & 0 \\ 0 & k_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix},$$

where k_x denotes the X-axis amplification coefficient, k_y denotes the Y-axis amplification coefficient, and (u_0, v_0) denotes the coordinates, in the pixel coordinate system, of the origin of the image coordinate system corresponding to the facial feature point. The extrinsic parameter matrix corresponding to the extrinsic parameters of the camera device is expressed as

$${}^{C}M_{W} = \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix},$$

where the component vectors (n_x, n_y, n_z)^T, (o_x, o_y, o_z)^T and (a_x, a_y, a_z)^T respectively represent the direction vectors of the coordinate axes of the world coordinate system in the camera coordinate system, and (p_x, p_y, p_z)^T is the translation vector, representing the position of the origin of the world coordinate system in the camera coordinate system. Specifically, the camera coordinates corresponding to the pixel coordinates are obtained from the intrinsic parameters of the camera device by the following formula:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_{in} \begin{bmatrix} x_C / z_C \\ y_C / z_C \\ 1 \\ 1 / z_C \end{bmatrix};$$

and the world coordinates corresponding to the camera coordinates are obtained from the extrinsic parameters of the camera device by the following formula:

$$\begin{bmatrix} x_W \\ y_W \\ z_W \\ 1 \end{bmatrix} = \left({}^{C}M_{W}\right)^{-1} \begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix};$$

where the world coordinate system is (x_W, y_W, z_W), the camera coordinate system is (x_C, y_C, z_C), the image coordinate system is (x, y), and the pixel coordinate system is (u, v).
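The following sketch (not part of the patent text) illustrates the two formulas above in Python with numpy; the depth z_C of the feature point along the optical axis is assumed to be supplied by the monocular image-sequence ranging step described later:

```python
import numpy as np

def pixel_to_world(u, v, z_c, M_in, C_M_W):
    """Sketch of the two conversion formulas above, under stated assumptions.

    u, v   : pixel coordinates of the facial feature point
    z_c    : depth of the point along the optical axis (assumed known)
    M_in   : 3x4 intrinsic matrix [[kx, 0, u0, 0], [0, ky, v0, 0], [0, 0, 1, 0]]
    C_M_W  : 4x4 extrinsic matrix mapping world coordinates to camera coordinates
    """
    kx, ky = M_in[0, 0], M_in[1, 1]
    u0, v0 = M_in[0, 2], M_in[1, 2]

    # Invert [u, v, 1]^T = M_in [xc/zc, yc/zc, 1, 1/zc]^T for the camera coordinates.
    x_c = (u - u0) * z_c / kx
    y_c = (v - v0) * z_c / ky
    p_cam = np.array([x_c, y_c, z_c, 1.0])

    # [xw, yw, zw, 1]^T = (C_M_W)^-1 [xc, yc, zc, 1]^T
    p_world = np.linalg.inv(C_M_W) @ p_cam
    return p_world[:3]
```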
Because the user may be in continuous motion while using the mobile terminal for a call and often changes posture, the distance and direction between the mobile terminal and the user's head change accordingly. Therefore, in an alternative embodiment, after the mobile terminal has been controlled to transmit sound toward the facial feature point according to the world coordinates, a change parameter of the facial feature point relative to the mobile terminal is obtained through the sensor of the mobile terminal, and the changed parameters are reported to the directional sound transmission module in time, so that the latter restarts image ranging and re-acquires the position of the user's head. The distance parameters are updated accordingly, so that the directional loudspeaker of the mobile terminal changes its transmission direction and distance, achieving adaptive directional sound transmission.
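A minimal sketch of such an adaptive loop is shown below; the sensor-reading and ranging callables, the threshold and the polling period are all assumptions, not part of the patent:

```python
import time

def adaptive_tracking_loop(read_motion_sensor, rerun_image_ranging, steer_speaker,
                           motion_threshold=0.05, period_s=0.2):
    """Illustrative only: re-run image ranging and re-steer the directional
    loudspeaker whenever the sensors report a noticeable change in relative pose."""
    last_world = rerun_image_ranging()
    steer_speaker(last_world)
    while True:
        change = read_motion_sensor()           # e.g. gyroscope / accelerometer delta
        if change > motion_threshold:           # relative pose changed noticeably
            last_world = rerun_image_ranging()  # re-acquire the head position
            steer_speaker(last_world)           # adapt direction and distance
        time.sleep(period_s)
```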
In a complete directional sound transmission session, as shown in Fig. 3, image recognition is combined with ranging based on a monocular image sequence to determine the distance and orientation of the user's head relative to the phone. In the initialization state, the phone camera captures images of the user's frontal face and profile, and an initial feature database of the user's head is generated for later calculation. After the initial feature database has been generated, when the user needs to use directional sound transmission, the phone turns on the camera facing the user and performs a second image acquisition. Face recognition technology is used to confirm that the user is within the captured image range, thereby ensuring that the user is within the effective range of directional sound transmission. The captured image is then compared with the feature database, and the recognition result is used to determine the sex and approximate age of the current user (because the position of the ears relative to the face differs between people of different sexes and age groups, the directional sound transmission module needs to provide a transmission scheme adapted to the specific user) so as to determine the transmission scheme.
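The patent does not name a specific face-detection algorithm. As a hedged illustration only, the detection-and-gating step might look like the following sketch, which uses an OpenCV Haar cascade as a stand-in and a hypothetical face_db_match callable for the comparison against the feature database:

```python
import cv2

def user_in_range(frame, face_db_match):
    """Sketch only: detect a face in the second acquisition and gate on the
    feature-database comparison; the algorithm choice is an assumption."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False                      # user not within the effective range
    # Compare the detected face region against the initial feature database to
    # estimate sex / approximate age group and pick the matching transmission scheme.
    x, y, w, h = faces[0]
    return face_db_match(frame[y:y + h, x:x + w])
```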
Regarding the process of obtaining the distance and orientation between the mobile terminal and the user's head by image ranging, in an alternative embodiment, after the second image acquisition and successful identification of the user, a key step is to perform ranging using the images captured by the camera and to calculate the distance and orientation between the phone and the user's head. Today's phones are generally equipped with high-resolution cameras. This alternative embodiment uses monocular image-sequence ranging to calculate the relative distance and orientation between the phone and the user's head from the high-resolution images captured by the camera.
Reconstructing a three-dimensional scene from two-dimensional images requires four coordinate systems, namely:
World coordinate system: (X_W, Y_W, Z_W);
Camera coordinate system: (X_C, Y_C, Z_C), where the Z_C axis coincides with the optical axis and the whole coordinate system obeys the right-hand rule;
Image coordinate system: (x, y), where the coordinate origin is the intersection of the optical axis and the imaging plane;
Pixel coordinate system: (u, v), where the coordinate origin is the upper-left corner of the image, the u axis is the screen x-axis (horizontal), and the v axis is the screen y-axis (vertical).
The monocular ranging model of this alternative embodiment can be reduced to the pinhole camera model, as shown in Fig. 4. O_C is the optical center of the camera and Π_i is the imaging plane. In the pinhole camera model, the image on the imaging plane is upside down relative to the actual object, but the photo that is actually taken has been scaled and its orientation adjusted; Π_i can therefore be regarded as the equivalent imaging plane.
O_C is the optical center of the camera and also the origin of the camera coordinate system. Suppose a point P in the scene (for example, a feature point of the user's face) has coordinates P(x, y, z), and its projection on the imaging plane Π_i is P_i(x_i, y_i, z_i). The relation between the target point and the image point in the camera coordinate system can then be obtained (relation 1, i.e. x_i = f x / z and y_i = f y / z for focal length f), while the intrinsic parameter model of the camera describes the transformation between the target point in the camera coordinate system and its projection point in the pixel coordinate system.
The image coordinate system xoy and the pixel coordinate system uov both lie in the image plane; a point (x, y) in the former corresponds to a point (u, v) in the latter, and the position of the origin of the image coordinate system in the pixel coordinate system is denoted (u_0, v_0). Their relation is shown in Fig. 5. The coordinate transformation from the image coordinate system to the pixel coordinate system requires a change of physical quantities. Let α_x and α_y denote the amplification coefficients from the imaging plane to the pixel plane; equivalently, 1/α_x and 1/α_y represent the actual physical size of each pixel in the X and Y directions. The transformation between a point in the image coordinate system and the corresponding point in the pixel coordinate system is then obtained (relation 2), i.e. u = α_x x + u_0 and v = α_y y + v_0.
Combining relation 2 with relation 1 (so that k_x = f α_x and k_y = f α_y) yields the intrinsic parameter matrix, which depends only on the camera's own structure:

$$M_{in} = \begin{bmatrix} k_x & 0 & u_0 & 0 \\ 0 & k_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix},$$

denoted M_in. The four parameters that make up this matrix are the x-axis amplification coefficient k_x, the y-axis amplification coefficient k_y, and the coordinates (u_0, v_0) of the origin of the image coordinate system in the pixel coordinate system. Because there are four parameters, M_in is also called the four-parameter model. If the difference between the amplification coefficients of the x and y axes is neglected, it can be approximated that k_x = k_y = k.
Corresponding to the intrinsic parameter model there is also an extrinsic parameter model, which describes the transformation between the world coordinate system and the camera coordinate system. If the coordinates of a point in the world coordinate system are (x_w, y_w, z_w) and its coordinates in the camera coordinate system are (x_c, y_c, z_c), there is a transformation matrix between the two, denoted ^C M_W:

$${}^{C}M_{W} = \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

This is the extrinsic parameter matrix of the camera and represents the relative position of the world coordinate system O_W X_W Y_W Z_W and the camera coordinate system O_c X_c Y_c Z_c. The component vectors (n_x, n_y, n_z)^T, (o_x, o_y, o_z)^T and (a_x, a_y, a_z)^T respectively represent the direction vectors of the coordinate axes of the world coordinate system O_W X_W Y_W Z_W in the camera coordinate system O_c X_c Y_c Z_c, and (p_x, p_y, p_z)^T is the translation vector, representing the position of the origin of the world coordinate system in the camera coordinate system.
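As an illustrative sketch only (outside the patent text), the four-parameter intrinsic model and the extrinsic model described above could be assembled as follows; numpy and the column layout of the extrinsic matrix are assumptions consistent with the description:

```python
import numpy as np

def intrinsic_matrix(kx, ky, u0, v0):
    """Four-parameter intrinsic model M_in (3x4), as described above."""
    return np.array([[kx, 0.0, u0, 0.0],
                     [0.0, ky, v0, 0.0],
                     [0.0, 0.0, 1.0, 0.0]])

def extrinsic_matrix(n, o, a, p):
    """Extrinsic model C_M_W (4x4): n, o, a are the direction vectors of the
    world axes expressed in the camera frame, p is the position of the world
    origin in the camera frame."""
    M = np.eye(4)
    M[:3, 0], M[:3, 1], M[:3, 2], M[:3, 3] = n, o, a, p
    return M
```

With the simplification k_x = k_y = k mentioned above, intrinsic_matrix(k, k, u0, v0) gives the reduced model.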
From the above intrinsic and extrinsic camera parameter models, the relation between the pixel coordinate system and the camera coordinate system can be obtained from the foregoing relations and the intrinsic parameter matrix:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_{in} \begin{bmatrix} x_c / z_c \\ y_c / z_c \\ 1 \\ 1 / z_c \end{bmatrix} \quad (1)$$

and, from the extrinsic parameter matrix, the relation between the world coordinate system and the camera coordinate system:

$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = {}^{C}M_{W} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \quad (2)$$

The camera coordinates appear in both formula (1) and formula (2); combining the two formulas through this common part finally gives the concise relation between the world coordinate system and the pixel coordinate system:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{z_c} \, M_{in} \, {}^{C}M_{W} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \quad (3)$$

Formula (3) provides the coordinate conversion between any point P in the spatial scene and its projection point P' in the digital image. The (u, v) on the left side of the formula are the coordinates of the facial feature point in the image captured by the camera; through computation involving the intrinsic and extrinsic parameters, the coordinates (x_W, y_W, z_W) of the facial feature point in a coordinate system with the phone as origin can be obtained (the whole phone, together with its camera, can be regarded as a single point, and for ease of understanding the user's head is also simplified to a point here). Then, according to the Euclidean distance formula

$$d = \sqrt{x_W^2 + y_W^2 + z_W^2},$$

the actual spatial distance d between the phone and the user's head can finally be obtained.
By establishing the image-ranging model, image ranging based on the images captured by the phone camera can be completed, the spatial positions of the phone and the user's head are measured, and the distance and orientation between the two can then be calculated. Passing the distance parameters and related data to the directional loudspeaker module realizes directional sound transmission.
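As a sketch under the same assumptions, the distance d (Euclidean formula above) and a bearing, expressed here as azimuth and elevation angles (an angle convention the patent does not specify), handed to the directional loudspeaker module might be computed as:

```python
import numpy as np

def distance_and_direction(p_world):
    """Illustrative only: distance and direction of the user's head relative to
    the phone, taking the world coordinates of the facial feature point as input."""
    x_w, y_w, z_w = p_world
    d = np.sqrt(x_w**2 + y_w**2 + z_w**2)          # distance between phone and head
    azimuth = np.degrees(np.arctan2(x_w, z_w))     # left/right angle
    elevation = np.degrees(np.arctan2(y_w, np.hypot(x_w, z_w)))  # up/down angle
    return d, azimuth, elevation
```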
A directional sound transmission terminal is also provided in this embodiment. The device is used to implement the above embodiments and preferred implementations; what has already been explained is not repeated. As used below, the term "module" may be a combination of software and/or hardware that realizes a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and conceivable.
Fig. 6 is a structural block diagram of a directional sound transmission terminal according to an embodiment of the present invention. As shown in Fig. 6, the terminal includes: a first acquisition module 61, configured to obtain the pixel coordinates of a feature point of the user's face through the camera device of a mobile terminal; a second acquisition module 62, configured to obtain the world coordinates corresponding to the pixel coordinates of the facial feature point according to the intrinsic and extrinsic parameters of the camera device, where the intrinsic parameters of the camera device include the optical and geometric characteristic parameters of the camera device and the extrinsic parameters of the camera device are obtained using a camera calibration method; and a transmission module 63, configured to control the mobile terminal to transmit sound toward the facial feature point according to the world coordinates.
With the above terminal, on the basis of the obtained pixel coordinates of the facial feature point, the world coordinates corresponding to those pixel coordinates are obtained according to the intrinsic and extrinsic parameters of the camera device. The obtained world coordinates allow the distance and orientation between the mobile terminal and the user's head to be determined more accurately, so the mobile terminal can be controlled to transmit sound toward that position, which ensures the accuracy of directional sound transmission by the mobile terminal.
Fig. 7 is a structural block diagram of the second acquisition module according to an embodiment of the present invention. As shown in Fig. 7, the second acquisition module 62 includes: a first acquisition unit 621, configured to obtain the camera coordinates corresponding to the pixel coordinates according to the intrinsic parameters of the camera device; and a second acquisition unit 622, configured to obtain the world coordinates corresponding to the camera coordinates according to the extrinsic parameters of the camera device.
Optionally, the intrinsic parameter matrix corresponding to the intrinsic parameters of the camera device is expressed as

$$M_{in} = \begin{bmatrix} k_x & 0 & u_0 & 0 \\ 0 & k_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix},$$

where k_x denotes the X-axis amplification coefficient, k_y denotes the Y-axis amplification coefficient, and (u_0, v_0) denotes the coordinates, in the pixel coordinate system, of the origin of the image coordinate system corresponding to the facial feature point; and the extrinsic parameter matrix corresponding to the extrinsic parameters of the camera device is expressed as

$${}^{C}M_{W} = \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix},$$

where the component vectors (n_x, n_y, n_z)^T, (o_x, o_y, o_z)^T and (a_x, a_y, a_z)^T respectively represent the direction vectors of the coordinate axes of the world coordinate system in the camera coordinate system, and (p_x, p_y, p_z)^T is the translation vector, representing the position of the origin of the world coordinate system in the camera coordinate system.
Optionally, the first acquisition unit is further configured to obtain the camera coordinates corresponding to the pixel coordinates from the intrinsic parameters of the camera device by the following formula:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_{in} \begin{bmatrix} x_C / z_C \\ y_C / z_C \\ 1 \\ 1 / z_C \end{bmatrix};$$

and the second acquisition unit is further configured to obtain the world coordinates corresponding to the camera coordinates from the extrinsic parameters of the camera device by the following formula:

$$\begin{bmatrix} x_W \\ y_W \\ z_W \\ 1 \end{bmatrix} = \left({}^{C}M_{W}\right)^{-1} \begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix};$$

where the world coordinate system is (x_W, y_W, z_W), the camera coordinate system is (x_C, y_C, z_C), the image coordinate system is (x, y), and the pixel coordinate system is (u, v).
Fig. 8 is another structural block diagram of a directional sound transmission terminal according to an embodiment of the present invention. As shown in Fig. 8, the terminal further includes: a third acquisition module 81, configured to obtain, through the sensor of the mobile terminal, a change parameter of the facial feature point relative to the mobile terminal after the mobile terminal is controlled to transmit sound toward the facial feature point according to the world coordinates; and a fourth acquisition module 82, configured to obtain, in real time according to the change parameter, the world coordinates corresponding to the pixel coordinates of the facial feature point.
The directional sound transmission terminal in this embodiment is presented in the form of functional units. A "unit" here refers to an ASIC circuit, a processor and memory executing one or more pieces of software or a fixed program, and/or another device that can provide the above functions.
Further functional descriptions of the above modules are the same as in the corresponding embodiments above and are not repeated here.
Those skilled in the art will understand that all or part of the flows in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium, and when executed may include the flows of the embodiments of each of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, those skilled in the art can make various modifications and variations without departing from the spirit and scope of the present invention, and such modifications and variations all fall within the scope defined by the appended claims.

Claims (10)

1. A directional sound transmission method, characterized by including:
obtaining the pixel coordinates of a feature point of the user's face through the camera device of a mobile terminal;
obtaining the world coordinates corresponding to the pixel coordinates of the facial feature point according to the intrinsic parameters of the camera device and the extrinsic parameters of the camera device; wherein the intrinsic parameters of the camera device include the optical and geometric characteristic parameters of the camera device, and the extrinsic parameters of the camera device are obtained using a camera calibration method;
controlling the mobile terminal to transmit sound toward the facial feature point according to the world coordinates.
2. The method according to claim 1, characterized in that obtaining the world coordinates corresponding to the pixel coordinates of the facial feature point according to the intrinsic and extrinsic parameters of the camera device includes:
obtaining the camera coordinates corresponding to the pixel coordinates according to the intrinsic parameters of the camera device;
obtaining the world coordinates corresponding to the camera coordinates according to the extrinsic parameters of the camera device.
3. The method according to claim 2, characterized in that the intrinsic parameter matrix corresponding to the intrinsic parameters of the camera device is expressed as:

$$M_{in} = \begin{bmatrix} k_x & 0 & u_0 & 0 \\ 0 & k_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix},$$

wherein k_x denotes the X-axis amplification coefficient, k_y denotes the Y-axis amplification coefficient, and (u_0, v_0) denotes the coordinates, in the pixel coordinate system, of the origin of the image coordinate system corresponding to the facial feature point;
the extrinsic parameter matrix corresponding to the extrinsic parameters of the camera device is expressed as:

$${}^{C}M_{W} = \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix},$$

wherein the component vectors (n_x, n_y, n_z)^T, (o_x, o_y, o_z)^T and (a_x, a_y, a_z)^T respectively represent the direction vectors of the coordinate axes of the world coordinate system in the camera coordinate system, and (p_x, p_y, p_z)^T is the translation vector, representing the position of the origin of the world coordinate system in the camera coordinate system.
4. The method according to claim 3, characterized in that the camera coordinates corresponding to the pixel coordinates are obtained from the intrinsic parameters of the camera device by the following formula:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_{in} \begin{bmatrix} x_C / z_C \\ y_C / z_C \\ 1 \\ 1 / z_C \end{bmatrix};$$

the world coordinates corresponding to the camera coordinates are obtained from the extrinsic parameters of the camera device by the following formula:

$$\begin{bmatrix} x_W \\ y_W \\ z_W \\ 1 \end{bmatrix} = \left({}^{C}M_{W}\right)^{-1} \begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix};$$

wherein the world coordinate system is (x_W, y_W, z_W), the camera coordinate system is (x_C, y_C, z_C), the image coordinate system is (x, y), and the pixel coordinate system is (u, v).
5. The method according to any one of claims 1 to 4, characterized in that after controlling the mobile terminal to transmit sound toward the facial feature point according to the world coordinates, the method further includes:
obtaining, through a sensor of the mobile terminal, a change parameter of the facial feature point relative to the mobile terminal;
obtaining, in real time according to the change parameter, the world coordinates corresponding to the pixel coordinates of the facial feature point.
6. A directional sound transmission terminal, characterized by including:
a first acquisition module, configured to obtain the pixel coordinates of a feature point of the user's face through the camera device of a mobile terminal;
a second acquisition module, configured to obtain the world coordinates corresponding to the pixel coordinates of the facial feature point according to the intrinsic parameters of the camera device and the extrinsic parameters of the camera device; wherein the intrinsic parameters of the camera device include the optical and geometric characteristic parameters of the camera device, and the extrinsic parameters of the camera device are obtained using a camera calibration method;
a transmission module, configured to control the mobile terminal to transmit sound toward the facial feature point according to the world coordinates.
7. The terminal according to claim 6, characterized in that the second acquisition module includes:
a first acquisition unit, configured to obtain the camera coordinates corresponding to the pixel coordinates according to the intrinsic parameters of the camera device;
a second acquisition unit, configured to obtain the world coordinates corresponding to the camera coordinates according to the extrinsic parameters of the camera device.
8. The terminal according to claim 7, characterized in that the intrinsic parameter matrix corresponding to the intrinsic parameters of the camera device is expressed as:

$$M_{in} = \begin{bmatrix} k_x & 0 & u_0 & 0 \\ 0 & k_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix},$$

wherein k_x denotes the X-axis amplification coefficient, k_y denotes the Y-axis amplification coefficient, and (u_0, v_0) denotes the coordinates, in the pixel coordinate system, of the origin of the image coordinate system corresponding to the facial feature point;
the extrinsic parameter matrix corresponding to the extrinsic parameters of the camera device is expressed as:

$${}^{C}M_{W} = \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix},$$

wherein the component vectors (n_x, n_y, n_z)^T, (o_x, o_y, o_z)^T and (a_x, a_y, a_z)^T respectively represent the direction vectors of the coordinate axes of the world coordinate system in the camera coordinate system, and (p_x, p_y, p_z)^T is the translation vector, representing the position of the origin of the world coordinate system in the camera coordinate system.
9. The terminal according to claim 8, characterized in that the first acquisition unit is further configured to obtain the camera coordinates corresponding to the pixel coordinates from the intrinsic parameters of the camera device by the following formula:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_{in} \begin{bmatrix} x_C / z_C \\ y_C / z_C \\ 1 \\ 1 / z_C \end{bmatrix};$$

the second acquisition unit is further configured to obtain the world coordinates corresponding to the camera coordinates from the extrinsic parameters of the camera device by the following formula:

$$\begin{bmatrix} x_W \\ y_W \\ z_W \\ 1 \end{bmatrix} = \left({}^{C}M_{W}\right)^{-1} \begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix};$$

wherein the world coordinate system is (x_W, y_W, z_W), the camera coordinate system is (x_C, y_C, z_C), the image coordinate system is (x, y), and the pixel coordinate system is (u, v).
10. The terminal according to any one of claims 6 to 9, characterized in that the terminal further includes:
a third acquisition module, configured to obtain, through the sensor of the mobile terminal, a change parameter of the facial feature point relative to the mobile terminal after the mobile terminal is controlled to transmit sound toward the facial feature point according to the world coordinates;
a fourth acquisition module, configured to obtain, in real time according to the change parameter, the world coordinates corresponding to the pixel coordinates of the facial feature point.
CN201710161239.0A 2017-03-17 2017-03-17 Directional sound transmission method and terminal Pending CN106960455A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710161239.0A CN106960455A (en) 2017-03-17 2017-03-17 Directional sound transmission method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710161239.0A CN106960455A (en) 2017-03-17 2017-03-17 Directional sound transmission method and terminal

Publications (1)

Publication Number Publication Date
CN106960455A true CN106960455A (en) 2017-07-18

Family

ID=59470311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710161239.0A Pending CN106960455A (en) Directional sound transmission method and terminal

Country Status (1)

Country Link
CN (1) CN106960455A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110430320A (en) * 2019-07-26 2019-11-08 北京小米移动软件有限公司 Voice directional spreading method and device
CN110703906A (en) * 2019-09-06 2020-01-17 中国第一汽车股份有限公司 Directional sounding system and method
CN111614928A (en) * 2020-04-28 2020-09-01 深圳市鸿合创新信息技术有限责任公司 Positioning method, terminal device and conference system
CN112738335A (en) * 2021-01-15 2021-04-30 重庆蓝岸通讯技术有限公司 Sound directional transmission method and device based on mobile terminal and terminal equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103002376A (en) * 2011-09-09 2013-03-27 联想(北京)有限公司 Method for orientationally transmitting voice and electronic equipment
CN103167375A (en) * 2011-12-13 2013-06-19 新昌有限公司 Face recognition loudspeaker device and voice orientation adjusting method thereof
CN104144370A (en) * 2013-05-06 2014-11-12 象水国际股份有限公司 Loudspeaking device capable of tracking target and sound output method of loudspeaking device
CN104469191A (en) * 2014-12-03 2015-03-25 东莞宇龙通信科技有限公司 Image denoising method and device
CN105007553A (en) * 2015-07-23 2015-10-28 惠州Tcl移动通信有限公司 Sound oriented transmission method of mobile terminal and mobile terminal
CN105101004A (en) * 2015-08-19 2015-11-25 联想(北京)有限公司 Electronic device and audio directional transmission method
CN105827793A (en) * 2015-05-29 2016-08-03 维沃移动通信有限公司 Voice directional output method and mobile terminal

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103002376A (en) * 2011-09-09 2013-03-27 联想(北京)有限公司 Method for orientationally transmitting voice and electronic equipment
CN103167375A (en) * 2011-12-13 2013-06-19 新昌有限公司 Face recognition loudspeaker device and voice orientation adjusting method thereof
CN104144370A (en) * 2013-05-06 2014-11-12 象水国际股份有限公司 Loudspeaking device capable of tracking target and sound output method of loudspeaking device
CN104469191A (en) * 2014-12-03 2015-03-25 东莞宇龙通信科技有限公司 Image denoising method and device
CN105827793A (en) * 2015-05-29 2016-08-03 维沃移动通信有限公司 Voice directional output method and mobile terminal
CN105007553A (en) * 2015-07-23 2015-10-28 惠州Tcl移动通信有限公司 Sound oriented transmission method of mobile terminal and mobile terminal
CN105101004A (en) * 2015-08-19 2015-11-25 联想(北京)有限公司 Electronic device and audio directional transmission method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
成莹: "Research on an object recognition and positioning *** based on machine vision", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110430320A (en) * 2019-07-26 2019-11-08 北京小米移动软件有限公司 Voice directional spreading method and device
CN110703906A (en) * 2019-09-06 2020-01-17 中国第一汽车股份有限公司 Directional sounding system and method
CN111614928A (en) * 2020-04-28 2020-09-01 深圳市鸿合创新信息技术有限责任公司 Positioning method, terminal device and conference system
CN112738335A (en) * 2021-01-15 2021-04-30 重庆蓝岸通讯技术有限公司 Sound directional transmission method and device based on mobile terminal and terminal equipment
CN112738335B (en) * 2021-01-15 2022-05-17 重庆蓝岸通讯技术有限公司 Sound directional transmission method and device of mobile terminal and storage medium

Similar Documents

Publication Publication Date Title
CN109978936B (en) Disparity map acquisition method and device, storage medium and equipment
CN112907725B (en) Image generation, training of image processing model and image processing method and device
CN109474786B (en) Preview image generation method and terminal
CN109522863B (en) Ear key point detection method and device and storage medium
CN106960455A (en) Directional sound transmission method and terminal
CN107948499A (en) A kind of image capturing method and mobile terminal
CN112614500B (en) Echo cancellation method, device, equipment and computer storage medium
CN110602101A (en) Method, device, equipment and storage medium for determining network abnormal group
CN109583370A (en) Human face structure grid model method for building up, device, electronic equipment and storage medium
CN111680758A (en) Image training sample generation method and device
CN110555815B (en) Image processing method and electronic equipment
CN112308103B (en) Method and device for generating training samples
CN109345636B (en) Method and device for obtaining virtual face image
CN113470116B (en) Verification method, device, equipment and storage medium for calibration data of camera device
CN112967261B (en) Image fusion method, device, equipment and storage medium
CN108550182A (en) A kind of three-dimensional modeling method and terminal
CN111982293B (en) Body temperature measuring method and device, electronic equipment and storage medium
CN114384466A (en) Sound source direction determining method, sound source direction determining device, electronic equipment and storage medium
CN112750449A (en) Echo cancellation method, device, terminal, server and storage medium
CN111325083A (en) Method and device for recording attendance information
CN109345447A (en) The method and apparatus of face replacement processing
CN109194943A (en) A kind of image processing method and terminal device
CN113689484B (en) Method and device for determining depth information, terminal and storage medium
CN113409235B (en) Vanishing point estimation method and apparatus
CN107820009A (en) Image capture method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170718

RJ01 Rejection of invention patent application after publication