CN110134234A - Method and device for positioning a three-dimensional object - Google Patents

Method and device for positioning a three-dimensional object

Info

Publication number
CN110134234A
Authority
CN
China
Prior art keywords
three-dimensional object
posture
region
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910335679.2A
Other languages
Chinese (zh)
Other versions
CN110134234B (en)
Inventor
葛凯麟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Wenlvyun Intelligent Technology Co ltd
Original Assignee
Joy Wisdom Technology (Beijing) Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Joy Wisdom Technology (Beijing) Co., Ltd.
Priority to CN201910335679.2A (granted as CN110134234B)
Priority to CN202210261706.8A (divisional application, published as CN114721511A)
Publication of CN110134234A
Application granted
Publication of CN110134234B
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a method for positioning a three-dimensional object, comprising: acquiring an image of the three-dimensional object with a single camera and identifying the posture and position of the three-dimensional object, wherein the outer surface of the three-dimensional object is divided into multiple regions and adjacent regions differ in color; displaying a virtual object corresponding to the three-dimensional object; and, after detecting that the posture and/or position of the three-dimensional object in three-dimensional space has changed, adjusting the virtual object according to the amount of change in posture and/or position. Correspondingly, an embodiment of the invention provides a device for positioning a three-dimensional object. The invention solves the prior-art problem that, in a single-camera scenario, a virtual object cannot be adaptively adjusted to follow the dynamic changes of a three-dimensional object.

Description

Method and device for positioning a three-dimensional object
Technical field
The invention belongs to the field of augmented reality and, in particular, relates to a method and device for positioning a three-dimensional object.
Background art
In current augmented reality applications, a three-dimensional object can be captured by a camera, recognized, and shown as a corresponding virtual three-dimensional object in a virtual scene. For example, an image of a hexahedron can be acquired, and a corresponding virtual object, such as a virtual hexahedron, a virtual character or a virtual globe, can be shown on a display screen.
In the prior art, a virtual object can be displayed by capturing a three-dimensional object, but in a single-camera scenario the virtual object cannot be adaptively adjusted to follow the dynamic changes of the three-dimensional object; the cost is high, and it is difficult to identify the position and posture of the three-dimensional object with fine granularity.
Summary of the invention
The present invention provides a method and device for positioning a three-dimensional object, solving the prior-art problem that a virtual object cannot be adaptively adjusted according to the dynamic changes of a three-dimensional object in a single-camera scenario.
To achieve the above objectives, the present invention provides a method for positioning a three-dimensional object, comprising:
acquiring an image of a three-dimensional object with a single camera and identifying the posture and position of the three-dimensional object, wherein the outer surface of the three-dimensional object is divided into multiple regions and adjacent regions differ in color;
displaying a virtual object corresponding to the three-dimensional object;
after detecting that the posture and/or position of the three-dimensional object in three-dimensional space has changed, adjusting the virtual object according to the amount of change in posture and/or position.
In one embodiment, adjusting the virtual object comprises:
rotating the virtual object in the virtual space, the rotation angle and angular velocity of the virtual object corresponding to the posture change of the three-dimensional object; or,
moving the virtual object, the displacement of the virtual object in the virtual space corresponding to the position change of the three-dimensional object.
In one embodiment, after adjusting the virtual object, the method further comprises:
when the camera captures a different combination of differently colored faces of the three-dimensional object, entering a different virtual scene according to a preset instruction; or,
when the rotational angular velocity of the virtual object exceeds a first preset threshold, displaying a first virtual scene; or,
when the rotational angular velocity of the virtual object falls below a second preset threshold, displaying a second virtual scene, the first preset threshold being greater than the second preset threshold.
In one embodiment, adjusting the virtual object according to the amount of change in posture and/or position comprises:
adjusting the size of the virtual object according to the depth-of-field distance between the three-dimensional object and the camera.
In one embodiment, identifying the posture and position of the three-dimensional object comprises:
performing color-block segmentation on the image, decomposing the image into regions of different colors;
averaging the color of each region and traversing all adjacent color-block pairs;
screening the color-block pairs with a look-up table and filtering out the regions that match a preset model;
computing the orientation data of the matching regions and obtaining the position and posture of the three-dimensional object.
In one embodiment, after computing the orientation data of the matching regions, the method further comprises:
deducing the candidate solution corresponding to each matching region;
comparing the compatibility of the candidate solutions pairwise and discarding either one of two compatible candidate solutions;
identifying the edge pixels between the color-block pairs with an edge-detection algorithm;
optimizing the position and posture of the three-dimensional object with the optimization formula:
P* = argmin_P Σ_i e(f(P, X_i), x_i, θ_i)
where P denotes the position and attitude parameters of the three-dimensional object, including the position coordinates (x, y, z) and the attitude quaternion (qw, qx, qy, qz); f is the projection function that computes the image position of a point X_i on the surface of the three-dimensional object when the object is in pose P; e is the cost function that measures the difference between the projected position and the observed position; x_i and θ_i describe an edge pixel detected in the image, x_i being the coordinates of the edge point in the image and θ_i the tangent angle at the edge point.
In one embodiment, after the color-block segmentation of the image, the method further comprises:
building a model of the three-dimensional object;
traversing the adjacent faces in the model, and recording the color pair and the orientation of each pair of adjacent faces;
recording the coordinate information of the boundary lines of all adjacent faces.
An embodiment of the invention also provides a method for positioning a three-dimensional object, comprising:
acquiring an image of a three-dimensional object with a single camera, wherein the outer surface of the three-dimensional object is divided into multiple regions and adjacent regions differ in color;
recording two or more adjacent faces of the three-dimensional object;
performing color-block segmentation on the image, decomposing the image into regions of different colors;
averaging the color of each region and traversing all adjacent color-block pairs;
screening the color-block pairs with a look-up table and filtering out the regions that match a preset model;
computing the orientation data of the matching regions and obtaining the position and posture of the three-dimensional object;
displaying a virtual object corresponding to the three-dimensional object.
In one embodiment, after computing the orientation data of the matching regions, the method further comprises:
deducing the candidate solution corresponding to each matching region;
comparing the compatibility of the candidate solutions pairwise and discarding either one of two compatible candidate solutions;
identifying the edge pixels between the color-block pairs with an edge-detection algorithm;
optimizing the position and posture of the three-dimensional object with the optimization formula:
P* = argmin_P Σ_i e(f(P, X_i), x_i, θ_i)
where P denotes the position and attitude parameters of the three-dimensional object, including the position coordinates (x, y, z) and the attitude quaternion (qw, qx, qy, qz); f is the projection function that computes the image position of a point X_i on the surface of the three-dimensional object when the object is in pose P; e is the cost function that measures the difference between the projected position and the observed position; x_i and θ_i describe an edge pixel detected in the image, x_i being the coordinates of the edge point in the image and θ_i the tangent angle at the edge point.
An embodiment of the invention also provides a device for positioning a three-dimensional object, the device comprising a processor and a memory for storing a computer program executable on the processor, wherein the processor, when running the computer program, performs the above method for positioning a three-dimensional object.
An embodiment of the invention also provides a computer-readable storage medium storing computer-executable instructions for performing the above method for positioning a three-dimensional object.
The embodiments of the invention provide a method and device for positioning a three-dimensional object. The method identifies the differently colored region combinations of the three-dimensional object through a single camera, determines the spatial posture and position of the three-dimensional object, and displays a corresponding virtual object. The spatial posture and position of the three-dimensional object can also be determined with fine granularity, and the depth of field of the three-dimensional object can be measured with the single camera, at low cost and with high precision.
Brief description of the drawings
Fig. 1 is a flowchart of the three-dimensional object positioning method in an embodiment of the invention;
Fig. 2 is a flowchart of the method for identifying the posture and position of the three-dimensional object in an embodiment of the invention;
Fig. 3 is a schematic diagram of three-dimensional object recognition in an embodiment of the invention;
Fig. 4 is another schematic diagram of three-dimensional object recognition in an embodiment of the invention;
Fig. 5 is a schematic diagram of the principle of optimizing the posture with the color-block edges of the three-dimensional object in an embodiment of the invention;
Fig. 6 is a schematic structural diagram of the three-dimensional object positioning apparatus in an embodiment of the invention;
Fig. 7 is a schematic diagram of the composition of the three-dimensional object positioning device in an embodiment of the invention.
Detailed description of the embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are merely illustrative of the invention and are not intended to limit it. In addition, the technical features involved in the various embodiments described below can be combined with each other as long as they do not conflict.
To achieve the above objectives, as shown in Fig. 1, an embodiment of the invention provides a method for positioning a three-dimensional object, the method comprising:
S101, acquiring an image of a three-dimensional object with a single camera and identifying the posture and position of the three-dimensional object, wherein the outer surface of the three-dimensional object is divided into multiple regions and adjacent regions differ in color;
In the embodiment of the invention, the positioning of the three-dimensional object and the display of the virtual object can be realized by a three-dimensional object positioning device. Optionally, the device comprises a single camera, a processing unit and a display unit: the single camera acquires images of the three-dimensional object within a detection area, the processing unit processes the images and finally determines the posture and position of the three-dimensional object, and a virtual object identical or corresponding to the three-dimensional object is created in the virtual scene through augmented reality (AR) technology. The three-dimensional object can be a polyhedron or other solid, such as a sphere, a cylinder, a tetrahedron or a hexahedron, where every face can have a different color. In the embodiment of the invention, while the three-dimensional object rotates or moves, the combination of face colors determines the direction and posture in which the three-dimensional object faces the camera. For ease of explanation, the embodiment of the invention is described using a hexahedron as an example, and other three-dimensional objects (with different combinations of colored faces) also fall within the protection scope of the embodiments of the invention.
In one embodiment, as shown in Fig. 2, identifying the posture and position of the three-dimensional object can specifically be:
S201, performing color-block segmentation on the image, decomposing the image into regions of different colors;
As shown in Fig. 3, the image can contain a yellow region, a green region, a blue region, a black region, a white region and a red region.
In addition, a model of the three-dimensional object can also be built after the color-block segmentation: the adjacent faces in the model are traversed, the color pair and the orientation of each pair of adjacent faces are recorded, and the coordinate information of the boundary lines of all adjacent faces is recorded as well. That is, in the embodiment of the invention, the adjacent faces of the three-dimensional object are traversed, the color pairs of adjacent faces (such as "green-red" or "red-yellow") and the orientations of the faces are recorded, and a table is built for use by subsequent algorithms; at the same time, the coordinate information of the boundary lines of all adjacent faces is recorded (represented by a series of discrete sampling points) for subsequent precise measurement.
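As a purely illustrative sketch of this table-building step (not part of the original disclosure), the following Python code records, for a hexahedral marker, every adjacent-face color pair together with the two face normals; the face colors, the normals and the perpendicularity-based adjacency test are assumptions chosen for the example.

    import itertools
    import numpy as np

    CUBE_FACES = {            # face id -> (color label, outward normal); assumed example data
        "front":  ("red",    np.array([0, 0, 1])),
        "back":   ("blue",   np.array([0, 0, -1])),
        "top":    ("white",  np.array([0, 1, 0])),
        "bottom": ("green",  np.array([0, -1, 0])),
        "left":   ("black",  np.array([-1, 0, 0])),
        "right":  ("yellow", np.array([1, 0, 0])),
    }

    def build_lookup_table(faces):
        """Record, for every pair of adjacent faces, the color pair and both face normals."""
        table = {}
        for (fa, (ca, na)), (fb, (cb, nb)) in itertools.combinations(faces.items(), 2):
            if np.dot(na, nb) == 0:            # perpendicular normals -> the faces share an edge
                table[(ca, cb)] = (na, nb)
                table[(cb, ca)] = (nb, na)     # store both orderings for symmetric lookup
        return table

    LOOKUP = build_lookup_table(CUBE_FACES)
    # ("red", "white") is a key of LOOKUP; ("red", "blue") is not, since opposite faces are never adjacent.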
When the camera can see two or more faces at the same time, the position and posture (i.e. the orientation) of the marker can be preliminarily judged from the color combination, and more precise position and attitude data are then obtained through an iterative algorithm.
S202, averaging the color of each region and traversing all adjacent color-block pairs;
S203, screening the color-block pairs with the look-up table and filtering out the regions that match the preset model;
The color-block pairs are screened with the look-up table: for example, "red-white" matches the three-dimensional object model, but "red-blue" and "green-purple" do not (the former because red and blue are not adjacent in the model, the latter because the model contains no purple). The mismatched color-block pairs are discarded, and the regions (color-block pairs) that match the preset model are filtered out.
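The following Python sketch illustrates steps S202 and S203 under simplifying assumptions: instead of a full segmentation it labels the image on a coarse grid with the nearest palette color, then keeps only the adjacent color pairs that appear in the model table built above. The palette, the grid step and the nearest-color classifier are illustrative choices, not the patent's implementation.

    import numpy as np

    PALETTE = {"red": (0, 0, 255), "green": (0, 255, 0), "blue": (255, 0, 0),
               "yellow": (0, 255, 255), "white": (255, 255, 255), "black": (0, 0, 0)}

    def classify_color(bgr):
        """Map an averaged BGR value to the nearest named marker color (assumed palette)."""
        return min(PALETTE, key=lambda k: np.linalg.norm(np.asarray(PALETTE[k], float) - bgr))

    def candidate_pairs(image, lookup, step=8):
        """Return the color pairs of adjacent blocks that also exist in the model table."""
        h, w = image.shape[:2]
        # coarse color-block labelling on a grid (stands in for a full segmentation pass)
        grid = [[classify_color(image[y, x].astype(float)) for x in range(0, w, step)]
                for y in range(0, h, step)]
        rows, cols = len(grid), len(grid[0])
        pairs = set()
        # traverse horizontally and vertically adjacent cells and screen them with the table
        for r in range(rows):
            for c in range(cols):
                for rr, cc in ((r, c + 1), (r + 1, c)):
                    if rr < rows and cc < cols and grid[r][c] != grid[rr][cc]:
                        key = (grid[r][c], grid[rr][cc])
                        if key in lookup:      # mismatched pairs such as "red-blue" are dropped here
                            pairs.add(key)
        return pairs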
S204, computing the orientation data of the matching regions and obtaining the position and posture of the three-dimensional object.
Some candidate color-block pairs remain after the screening; for example, "red-white", "red-black" and "black-white" can be obtained in Fig. 4. The orientation data of these faces can be found with the look-up table, so the rough orientation of the camera relative to the marker is known. Moreover, since data from two faces are available at the same time, the approximate rotation angle of the camera (the rotation angle about the axis formed by the line between the camera and the marker) can also be calculated, and the approximate position and posture of the three-dimensional object can then be deduced.
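As a small illustration of how one matched color pair already constrains the orientation, the sketch below looks up the two stored face normals (from the assumed LOOKUP table above) and averages them; the result is only a rough viewing direction, not the patent's exact computation.

    def coarse_view_direction(color_pair, lookup):
        """Rough direction from which the camera sees the marker, in marker coordinates."""
        na, nb = lookup[color_pair]            # the two face normals stored for this color pair
        view = na.astype(float) + nb
        return view / np.linalg.norm(view)     # the camera lies roughly along this bisector

    # Example: coarse_view_direction(("red", "white"), LOOKUP) points between the red and white faces.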
The approximate position and posture of the three-dimensional object calculated in the above embodiment may be inaccurate. In the embodiment of the invention, the position and posture can be further optimized so that the posture and position of the three-dimensional object in space are determined with fine granularity. The specific method is as follows:
After computing the orientation data of the matching regions, the embodiment of the invention further comprises:
S2041, deducing the candidate solution corresponding to each matching region;
S2042, comparing the compatibility of the candidate solutions pairwise and discarding either one of two compatible candidate solutions;
When the marker positions and postures obtained from two candidate solutions are close (their distance and angular difference are below certain thresholds), the two candidate solutions are considered compatible, i.e. they both come from color-block pairs on the same marker. One of the two solutions can then be discarded.
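A minimal Python sketch of this compatibility test is given below; the distance and angle thresholds and the (position, quaternion) pose representation are illustrative assumptions.

    import numpy as np

    def compatible(pose_a, pose_b, dist_thresh=0.02, angle_thresh=np.radians(10)):
        """Two candidate poses are 'compatible' when both position and orientation are close."""
        (pa, qa), (pb, qb) = pose_a, pose_b          # position (3,), unit quaternion (4,)
        dist = np.linalg.norm(pa - pb)
        angle = 2 * np.arccos(np.clip(abs(np.dot(qa, qb)), -1.0, 1.0))   # angle between rotations
        return dist < dist_thresh and angle < angle_thresh

    def prune_candidates(candidates):
        """Keep one representative from every group of mutually compatible candidate solutions."""
        kept = []
        for cand in candidates:
            if not any(compatible(cand, k) for k in kept):
                kept.append(cand)
        return kept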
S2043, identifying the edge pixels between the color-block pairs with an edge-detection algorithm;
S2044, optimizing the position and posture of the three-dimensional object with the optimization formula:
P* = argmin_P Σ_i e(f(P, X_i), x_i, θ_i)
where P denotes the position and attitude parameters of the three-dimensional object, including the position coordinates (x, y, z) and the attitude quaternion (qw, qx, qy, qz); f is the projection function that computes the image position of a point X_i on the surface of the three-dimensional object when the object is in pose P; e is the cost function that measures the difference between the projected position and the observed position; x_i and θ_i describe an edge pixel detected in the image, x_i being the coordinates of the edge point in the image and θ_i the tangent angle at the edge point. After the above formula is iteratively optimized with the Levenberg-Marquardt algorithm, a better value P* is obtained, which gives the optimized position and posture of the marker.
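A minimal refinement sketch in the spirit of the formula above, using SciPy's Levenberg-Marquardt solver. The pinhole camera parameters (fx, fy, cx, cy), the one-to-one pairing of model boundary points with detected edge pixels, and the simplified cost (plain reprojection error, without the tangent-angle term θ_i) are assumptions made for illustration, not the patent's exact implementation.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def project(params, model_points, fx, fy, cx, cy):
        """f(P, X_i): project model boundary points into the image for pose P."""
        t, quat = params[:3], params[3:7]
        R = Rotation.from_quat(quat / np.linalg.norm(quat)).as_matrix()   # (x, y, z, w) order
        cam = model_points @ R.T + t                  # marker frame -> camera frame
        return np.stack([fx * cam[:, 0] / cam[:, 2] + cx,
                         fy * cam[:, 1] / cam[:, 2] + cy], axis=1)

    def refine_pose(p0, model_points, edge_points, fx, fy, cx, cy):
        """Minimize the summed cost e(f(P, X_i), x_i) with the Levenberg-Marquardt solver."""
        def residuals(params):
            return (project(params, model_points, fx, fy, cx, cy) - edge_points).ravel()
        return least_squares(residuals, p0, method="lm").x    # refined position + quaternion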
After the iterative algorithm completes the pose refinement, accurate posture data of the three-dimensional object are obtained.
Furthermore, for video data, Kalman filtering (Kalman filter) can be used to smooth the postures of successive frames, finally yielding a relatively stable posture sequence.
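A toy smoothing sketch for the per-frame pose sequence is shown below; it filters each pose component independently with a constant-position Kalman model, which is a simplification (a complete implementation would at least renormalize the quaternion part after each update).

    import numpy as np

    def kalman_smooth(measurements, process_var=1e-4, measurement_var=1e-2):
        """measurements: (T, 7) array of per-frame pose vectors (position + quaternion)."""
        x = measurements[0].astype(float)             # state estimate
        p = np.ones_like(x)                           # per-component estimate variance
        out = [x.copy()]
        for z in measurements[1:]:
            p = p + process_var                       # predict
            k = p / (p + measurement_var)             # Kalman gain
            x = x + k * (z - x)                       # update with the new frame's pose
            p = (1 - k) * p
            out.append(x.copy())
        return np.array(out)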
S102, displaying a virtual object corresponding to the three-dimensional object;
In the embodiment of the invention, after the posture and position of the three-dimensional object are identified, a virtual object corresponding to the three-dimensional object can be displayed in the virtual scene through AR technology; for example, a hexahedron identical in shape and size to the three-dimensional object is displayed, with the color of each face consistent with the real three-dimensional object.
S103, after detecting that the posture and/or position of the three-dimensional object in three-dimensional space has changed, adjusting the virtual object according to the amount of change in posture and/or position.
In one embodiment, adjusting the virtual object can specifically be:
rotating the virtual object in the virtual space, the rotation angle and angular velocity of the virtual object corresponding to the posture change of the three-dimensional object; or,
moving the virtual object, the displacement of the virtual object in the virtual space corresponding to the position change of the three-dimensional object.
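For illustration, the sketch below applies a measured pose change of the real object to its virtual counterpart; the (w, x, y, z) quaternion convention and the Hamilton-product update are assumptions of the example.

    import numpy as np

    def apply_pose_delta(virtual_pos, virtual_quat, delta_pos, delta_quat):
        """Move and rotate the virtual object by the change measured on the real object."""
        new_pos = virtual_pos + delta_pos                    # same displacement in virtual space
        w1, x1, y1, z1 = delta_quat
        w2, x2, y2, z2 = virtual_quat
        new_quat = np.array([                                # Hamilton product: delta * current
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])
        return new_pos, new_quat / np.linalg.norm(new_quat)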
In one embodiment, after adjusting the virtual object, the method further comprises:
when the camera captures a different combination of differently colored faces of the three-dimensional object, entering a different virtual scene according to a preset instruction; or,
when the rotational angular velocity of the virtual object exceeds a first preset threshold, displaying a first virtual scene; or,
when the rotational angular velocity of the virtual object falls below a second preset threshold, displaying a second virtual scene, the first preset threshold being greater than the second preset threshold.
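The scene-switching rule can be sketched as follows; the scene names are placeholders and no concrete threshold values are prescribed by the disclosure.

    def pick_scene(angular_velocity, first_threshold, second_threshold):
        """Select a virtual scene from the rotational angular velocity of the virtual object."""
        if angular_velocity > first_threshold:       # first_threshold > second_threshold
            return "first virtual scene"
        if angular_velocity < second_threshold:
            return "second virtual scene"
        return None                                  # otherwise keep the current scene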
In one embodiment, adjusting the virtual object according to the amount of change in posture and/or position comprises:
adjusting the size of the virtual object according to the depth-of-field distance between the three-dimensional object and the camera.
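As a sketch of this depth-driven scaling idea: with a single calibrated camera, the marker's known edge length and its apparent size in pixels give a depth estimate by similar triangles, and that depth can drive the on-screen scale of the virtual object. The parameter names and the reference depth are assumptions for the example.

    def estimate_depth(edge_length_m, edge_length_px, focal_length_px):
        """Pinhole similar triangles: depth = focal length * real size / apparent size."""
        return focal_length_px * edge_length_m / edge_length_px

    def virtual_scale(depth_m, reference_depth_m=0.5):
        """The closer the real object is to the camera, the larger the virtual object is drawn."""
        return reference_depth_m / depth_m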
For example, in a game setting, the three-dimensional object can be made into a polyhedral "magic wand". Through the above three-dimensional positioning method, the combination of face colors is identified to determine which specific posture currently faces the camera. While the user operates the magic wand, the virtual magic wand adapts accordingly: when the user rotates the wand, the virtual wand rotates in synchrony, and when the user moves the wand, the virtual wand moves in synchrony. At the same time, the depth of field of the real wand relative to the camera can be measured with the single camera. Because the depth of field differs with distance, the size of the virtual wand changes, and the user can exploit this characteristic for different game experiences, for instance operating the wand in three-dimensional space closer to or farther from the camera so that the produced game effects differ. This improves the user's game experience while reducing manufacturing cost (solutions on the market all measure the depth of field with a TOF camera or multiple cameras, which is expensive).
Meanwhile, different game experiences can also be designed based on the different characteristics of the wand. For example, when the wand is rotated so that a different face (i.e. a different color combination captured by the camera) faces the camera, a new interactive experience can be designed: turning to face A enters one game, turning to face B enters another game, turning from face A to face B generates one interactive virtual scene, and turning again from face B to face C generates another scene. Likewise, different games can be designed for different rotation speeds and angles, i.e. rotating at different angles and speeds makes different faces correspond to different games or different ways of playing.
The embodiments of the invention provide a method and device for positioning a three-dimensional object. The method identifies the differently colored region combinations of the three-dimensional object through a single camera, determines the spatial posture and position of the three-dimensional object, and displays a corresponding virtual object. The spatial posture and position of the three-dimensional object can also be determined with fine granularity, and the depth of field of the three-dimensional object can be measured with the single camera, at low cost and with high precision.
In addition, an embodiment of the invention provides a method for positioning a three-dimensional object in which the positioning algorithm records the information of the marker in advance, including the coordinates of all vertices, the parametric equations of the edges, and the colors of the faces. When the model is recorded:
1. The adjacent faces in the model are traversed, the color pairs of adjacent faces (such as "green-red" or "red-yellow") and the orientations of the faces are recorded, and a table is built for use by subsequent algorithms;
2. The coordinate information of the boundary lines of all adjacent faces is recorded (represented by a series of discrete sampling points) for subsequent precise measurement.
When the camera can see two or more faces at the same time, the position and posture (i.e. the orientation) of the marker can be preliminarily judged from the color combination, and more precise position and attitude data are then obtained through an iterative algorithm.
The method specifically comprises:
S501, acquiring an image of a three-dimensional object with a single camera, wherein the outer surface of the three-dimensional object is divided into multiple regions and adjacent regions differ in color;
S502, recording two or more adjacent faces of the three-dimensional object;
S503, performing color-block segmentation on the image, decomposing the image into regions of different colors;
S504, averaging the color of each region and traversing all adjacent color-block pairs;
Such as " red-white " be exactly match with model, but " red-blue ", " green-purple " these just misfitted with model it is (preceding Person because of red in a model with blue be not it is adjacent, the latter is because of not having purple in model).
S505, using look-up table to the color lump to screening, filter out the region to match with preset model;
For example " red-white ", " red-black ", " black-and-white " etc. can be obtained in Fig. 4, the court in these faces can be found using look-up table To data, to learn general orientation of the camera relative to label;Simultaneously as simultaneously there are two the data in face, it can also be with The substantially rotation angle (using the line of camera and label as the rotation angle of axis) of camera is calculated, and then can calculate bid The approximate location and posture of will object.
S506, computing the orientation data of the matching regions and obtaining the position and posture of the three-dimensional object;
When the marker positions and postures obtained from two candidate solutions are close (their distance and angular difference are below certain thresholds), the two candidate solutions are considered compatible, i.e. they both come from color-block pairs on the same marker. One of the two solutions can then be discarded.
S507, displaying a virtual object corresponding to the three-dimensional object.
In one embodiment, after computing the orientation data of the matching regions, the method further comprises:
S5061, deducing the candidate solution corresponding to each matching region;
S5062, comparing the compatibility of the candidate solutions pairwise and discarding either one of two compatible candidate solutions;
S5063, identifying the edge pixels between the color-block pairs with an edge-detection algorithm;
S5064, optimizing the position and posture of the three-dimensional object with the optimization formula:
P* = argmin_P Σ_i e(f(P, X_i), x_i, θ_i)
where P denotes the position and attitude parameters of the three-dimensional object, including the position coordinates (x, y, z) and the attitude angle (qw, qx, qy, qz), the attitude angle here being represented as a quaternion; f is the projection function that computes where a point X_i on the surface of the marker appears in the image captured by the camera when the marker is in pose P; e is the cost function that measures the difference between the projected position and the observed position, a larger difference meaning a larger cost; x_i and θ_i describe an edge pixel detected in the image, x_i being the coordinates of the edge point in the image and θ_i the tangent angle at the edge point. After the above formula is iteratively optimized with the Levenberg-Marquardt algorithm, a better value P* is obtained, which gives the optimized position and posture of the marker.
In addition, for video data, Kalman filtering (Kalman filter) can be used to smooth the postures of successive frames, finally yielding a relatively stable posture sequence.
Fig. 5 is a schematic diagram of the principle of optimizing the posture with the color-block edges of the three-dimensional object in an embodiment of the invention, where the dotted lines are the color-block edges computed from the currently estimated posture and the solid lines are the color-block edges obtained from the actual photograph. The optimization adjusts the posture so that the dotted lines converge toward the solid lines.
In addition, in the embodiment of the invention, a remote-control module can also be combined to perform human-computer interaction with the virtual object. A traditional remote-control device has a patterned set of operating instructions: pressing a button generates and transmits a command signal so that a terminal such as a television or an air conditioner responds to the instruction. In the embodiment of the invention, a series of operating protocols can be defined that combine remote-control commands with the spatial posture/position of the three-dimensional object, forming a new set of human-computer interaction protocols. For example, a series of instruction sets can be configured in the remote-control module, and the interactions generated at different spatial postures/positions can differ. After a "move right" key is pressed, if no position update of the three-dimensional object is detected, the virtual object moves one cell to the right; if a position update of the three-dimensional object is detected, the same key press moves the virtual object three cells to the right.
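The combined protocol can be sketched as follows, with the step sizes taken from the example in the paragraph above and the key name being a placeholder.

    def handle_move_right(key_pressed, position_updated):
        """Interpret the same key press differently depending on whether the object also moved."""
        if key_pressed != "RIGHT":
            return 0
        return 3 if position_updated else 1           # grid cells to move the virtual object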
Fig. 6 is one schematic diagram of the three-dimensional positioning device in an embodiment of the invention. A user can run the application program on a device such as a mobile phone, tablet or computer. The device may include data-input components such as a camera, a microphone, application props and/or an AR device, and may also include output components such as a display device and a loudspeaker.
An embodiment of the invention also provides a storage medium on which computer instructions are stored; when executed by a processor, the instructions implement the above method for positioning a three-dimensional object.
Fig. 7 is a schematic diagram of a system structure provided by an embodiment of the invention. The system 600 may include one or more central processing units (CPU) 610 (for example, one or more processors), a memory 620, and one or more storage media 630 (such as one or more mass storage devices) storing application programs 632 or data 634. The memory 620 and the storage medium 630 may provide transient or persistent storage. A program stored in the storage medium 630 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the device. Further, the central processing unit 610 may be configured to communicate with the storage medium 630 and to execute, on the system 600, the series of instruction operations in the storage medium 630. The system 600 may also include one or more power supplies 640, one or more wired or wireless network interfaces 650, and one or more input/output interfaces 660. The steps performed by the above embodiments of the three-dimensional positioning method can be based on the system structure shown in Fig. 7.
It should be understood that, in the various embodiments of the application, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic and does not constitute any limitation on the implementation of the embodiments of the application.
A person of ordinary skill in the art may realize that the modules and method steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the application. It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, devices and modules described above can refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
The various parts of this specification are described in a progressive manner; the same or similar parts of the embodiments can be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the device and system embodiments are substantially similar to the method embodiments, they are described relatively simply, and the relevant points can be found in the description of the method embodiments.
Finally, it should be understood that the above are only preferred embodiments of the technical solution and are not intended to limit the protection scope of the application. Obviously, those skilled in the art can make various modifications and variations to the application without departing from its scope. If these modifications and variations fall within the scope of the claims of the application and their technical equivalents, the application is intended to cover any such modification, equivalent replacement, improvement and the like.

Claims (10)

1. A method for positioning a three-dimensional object, characterized by comprising:
acquiring an image of the three-dimensional object with a single camera and identifying the posture and position of the three-dimensional object, wherein the outer surface of the three-dimensional object is divided into multiple regions and adjacent regions differ in color;
displaying a virtual object corresponding to the three-dimensional object;
after detecting that the posture and/or position of the three-dimensional object in three-dimensional space has changed, adjusting the virtual object according to the amount of change in posture and/or position.
2. The method according to claim 1, wherein adjusting the virtual object comprises:
rotating the virtual object in the virtual space, the rotation angle and angular velocity of the virtual object corresponding to the posture change of the three-dimensional object; or,
moving the virtual object, the displacement of the virtual object in the virtual space corresponding to the position change of the three-dimensional object.
3. The method according to claim 2, wherein, after adjusting the virtual object, the method further comprises:
when the camera captures a different combination of differently colored faces of the three-dimensional object, entering a different virtual scene according to a preset instruction; or,
when the rotational angular velocity of the virtual object exceeds a first preset threshold, displaying a first virtual scene; or,
when the rotational angular velocity of the virtual object falls below a second preset threshold, displaying a second virtual scene, the first preset threshold being greater than the second preset threshold.
4. The method according to claim 1, wherein adjusting the virtual object according to the amount of change in posture and/or position comprises:
adjusting the size of the virtual object according to the depth-of-field distance between the three-dimensional object and the camera.
5. The method according to claim 1, wherein identifying the posture and position of the three-dimensional object comprises:
performing color-block segmentation on the image, decomposing the image into regions of different colors;
averaging the color of each region and traversing all adjacent color-block pairs;
screening the color-block pairs with a look-up table and filtering out the regions that match a preset model;
computing the orientation data of the matching regions and obtaining the position and posture of the three-dimensional object.
6. The method according to claim 5, wherein, after computing the orientation data of the matching regions, the method further comprises:
deducing the candidate solution corresponding to each matching region;
comparing the compatibility of the candidate solutions pairwise and discarding either one of two compatible candidate solutions;
identifying the edge pixels between the color-block pairs with an edge-detection algorithm;
optimizing the position and posture of the three-dimensional object with the optimization formula:
P* = argmin_P Σ_i e(f(P, X_i), x_i, θ_i)
where P denotes the position and attitude parameters of the three-dimensional object, including the position coordinates (x, y, z) and the attitude quaternion (qw, qx, qy, qz); f is the projection function that computes the image position of a point X_i on the surface of the three-dimensional object when the object is in pose P; e is the cost function that measures the difference between the projected position and the observed position; x_i and θ_i describe an edge pixel detected in the image, x_i being the coordinates of the edge point in the image and θ_i the tangent angle at the edge point.
7. The method according to any one of claims 1 to 6, characterized in that the method further comprises:
receiving an input instruction and generating an instruction code;
adjusting the virtual object according to the instruction code and the amount of change in posture and/or position.
8. A method for positioning a three-dimensional object, characterized by comprising:
acquiring an image of the three-dimensional object with a single camera, wherein the outer surface of the three-dimensional object is divided into multiple regions and adjacent regions differ in color;
recording two or more adjacent faces of the three-dimensional object;
performing color-block segmentation on the image, decomposing the image into regions of different colors;
averaging the color of each region and traversing all adjacent color-block pairs;
screening the color-block pairs with a look-up table and filtering out the regions that match a preset model;
computing the orientation data of the matching regions and obtaining the position and posture of the three-dimensional object;
displaying a virtual object corresponding to the three-dimensional object.
9. The method according to claim 8, wherein, after computing the orientation data of the matching regions, the method further comprises:
deducing the candidate solution corresponding to each matching region;
comparing the compatibility of the candidate solutions pairwise and discarding either one of two compatible candidate solutions;
identifying the edge pixels between the color-block pairs with an edge-detection algorithm;
optimizing the position and posture of the three-dimensional object with the optimization formula:
P* = argmin_P Σ_i e(f(P, X_i), x_i, θ_i)
where P denotes the position and attitude parameters of the three-dimensional object, including the position coordinates (x, y, z) and the attitude quaternion (qw, qx, qy, qz); f is the projection function that computes the image position of a point X_i on the surface of the three-dimensional object when the object is in pose P; e is the cost function that measures the difference between the projected position and the observed position; x_i and θ_i describe an edge pixel detected in the image, x_i being the coordinates of the edge point in the image and θ_i the tangent angle at the edge point.
10. A device for positioning a three-dimensional object, characterized in that the device comprises a processor and a memory for storing a computer program executable on the processor, wherein the processor is configured, when running the computer program, to perform the method for positioning a three-dimensional object according to any one of claims 1 to 9.
CN201910335679.2A 2019-04-24 2019-04-24 Method and device for positioning three-dimensional object Active CN110134234B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910335679.2A CN110134234B (en) 2019-04-24 2019-04-24 Method and device for positioning three-dimensional object
CN202210261706.8A CN114721511A (en) 2019-04-24 2019-04-24 Method and device for positioning three-dimensional object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910335679.2A CN110134234B (en) 2019-04-24 2019-04-24 Method and device for positioning three-dimensional object

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210261706.8A Division CN114721511A (en) 2019-04-24 2019-04-24 Method and device for positioning three-dimensional object

Publications (2)

Publication Number Publication Date
CN110134234A true CN110134234A (en) 2019-08-16
CN110134234B CN110134234B (en) 2022-05-10

Family

ID=67570956

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210261706.8A Pending CN114721511A (en) 2019-04-24 2019-04-24 Method and device for positioning three-dimensional object
CN201910335679.2A Active CN110134234B (en) 2019-04-24 2019-04-24 Method and device for positioning three-dimensional object

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202210261706.8A Pending CN114721511A (en) 2019-04-24 2019-04-24 Method and device for positioning three-dimensional object

Country Status (1)

Country Link
CN (2) CN114721511A (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110199372A1 (en) * 2010-02-15 2011-08-18 Sony Corporation Method, client device and server
CN102074012A (en) * 2011-01-22 2011-05-25 四川农业大学 Method for three-dimensionally reconstructing tender shoot state of tea by combining image and computation model
CN103085076A (en) * 2011-11-08 2013-05-08 发那科株式会社 Device and method for recognizing three-dimensional position and orientation of article
CN102737245A (en) * 2012-06-06 2012-10-17 清华大学 Three-dimensional scene object boundary detection method and device
CN107636585A (en) * 2014-09-18 2018-01-26 谷歌有限责任公司 By being drawn inside reality environment and the generation of three-dimensional fashion object carried out
CN104680519A (en) * 2015-02-06 2015-06-03 四川长虹电器股份有限公司 Seven-piece puzzle identification method based on contours and colors
CN106447725A (en) * 2016-06-29 2017-02-22 北京航空航天大学 Spatial target attitude estimation method based on contour point mixed feature matching
US20180047208A1 (en) * 2016-08-15 2018-02-15 Aquifi, Inc. System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function
CN107274453A (en) * 2017-06-12 2017-10-20 哈尔滨理工大学 Video camera three-dimensional measuring apparatus, system and method for a kind of combination demarcation with correction
CN108537149A (en) * 2018-03-26 2018-09-14 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN109615655A (en) * 2018-11-16 2019-04-12 深圳市商汤科技有限公司 A kind of method and device, electronic equipment and the computer media of determining gestures of object

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112440882A (en) * 2019-09-04 2021-03-05 戴姆勒股份公司 Automobile luggage compartment article placing system and method and automobile comprising system
CN112440882B (en) * 2019-09-04 2024-03-08 梅赛德斯-奔驰集团股份公司 System and method for placing articles in car trunk and car comprising system
CN110928414A (en) * 2019-11-22 2020-03-27 上海交通大学 Three-dimensional virtual-real fusion experimental system

Also Published As

Publication number Publication date
CN114721511A (en) 2022-07-08
CN110134234B (en) 2022-05-10

Similar Documents

Publication Publication Date Title
CN110517319B (en) Method for determining camera attitude information and related device
US11887246B2 (en) Generating ground truth datasets for virtual reality experiences
US11308655B2 (en) Image synthesis method and apparatus
US20180150148A1 (en) Handheld interactive device and projection interaction method therefor
CN107016704A (en) A kind of virtual reality implementation method based on augmented reality
CN108985220B (en) Face image processing method and device and storage medium
EP3195270B1 (en) Using free-form deformations in surface reconstruction
CN107027015A (en) 3D trends optical projection system based on augmented reality and the projecting method for the system
US20110159957A1 (en) Portable type game device and method for controlling portable type game device
CN109035334A (en) Determination method and apparatus, storage medium and the electronic device of pose
CN104881114B (en) A kind of angular turn real-time matching method based on 3D glasses try-in
US20190340807A1 (en) Image generating device and method of generating image
CN107027014A (en) A kind of intelligent optical projection system of trend and its method
KR102638632B1 (en) Methods, devices, electronic devices, storage media and programs for building point cloud models
WO2020114274A1 (en) Method and device for determining potentially visible set, apparatus, and storage medium
CN108151738B (en) Codified active light marked ball with attitude algorithm
CN107580209A (en) Take pictures imaging method and the device of a kind of mobile terminal
WO2021004412A1 (en) Handheld input device, and method and apparatus for controlling display position of indication icon thereof
CN110648363A (en) Camera posture determining method and device, storage medium and electronic equipment
CN109069920A (en) Hand-held controller, method for tracking and positioning and system
CN110337674A (en) Three-dimensional rebuilding method, device, equipment and storage medium
CN110134234A (en) A kind of method and device of D object localization
CN110533773A (en) A kind of three-dimensional facial reconstruction method, device and relevant device
CN101281590B (en) Operating unit as well as video system containing the same
CN110120076A (en) A kind of pose determines method, system, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220420

Address after: Room 202, No. 81, Huayuan Road, Hongjialou Street, Licheng District, Jinan City, Shandong Province, 250000

Applicant after: Shandong Wenlvyun Intelligent Technology Co., Ltd.

Address before: Room 1104, 1st Floor, Building 1, No. 2 Huayuan Road, Haidian District, Beijing, 100191

Applicant before: PILOSMART TECHNOLOGY BEIJING LLC

GR01 Patent grant