CN101281422A - Apparatus and method for generating three-dimensional information based on object as well as using interactive system - Google Patents


Info

Publication number: CN101281422A
Application number: CN 200710092172
Authority: CN (China)
Prior art keywords: information, image, generation, stereo, stereo information
Legal status: Granted; Expired - Fee Related
Other languages: Chinese (zh)
Other versions: CN101281422B (en)
Inventors: 赵子毅, 陈信嘉, 李泉欣
Current Assignee: Pixart Imaging Inc
Original Assignee: Pixart Imaging Inc
Application filed by Pixart Imaging Inc; application granted and published as CN101281422B.

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an object-based three-dimensional information generation device and method, and to an interactive system using the device and method. The method comprises: obtaining at least two two-dimensional images of the same region at a time point; individually extracting objects with features from the at least two two-dimensional images; establishing a corresponding relationship among the objects; and generating three-dimensional information according to the corresponding objects. The device and the interactive system comprise at least two image capture units for individually capturing the two-dimensional images, and a processing device for generating the three-dimensional information according to the two-dimensional images individually captured by the two image capture units.

Description

Object-based three-dimensional information generation apparatus and method, and interactive system using the same
Technical field
The present invention relates to an object-based apparatus and method for generating three-dimensional (3D) stereo information, and to an interactive system that uses the apparatus and method.
Background art
Various systems currently exist that detect a user's actions and interact with the user, for example shooting games. In such a game, the user holds a pointing device attached to the game unit (for example a simulated gun) and reacts (for example by shooting) according to the picture on the screen; the game unit detects the movement of the pointing device and correspondingly produces a reaction on the screen, such as an enemy being hit, a house exploding, and so on.
The shortcoming of this kind of interactive system is that it can only detect two-dimensional position changes of the hand-held object; it cannot take changes of "depth" along the front-back direction into account. Therefore, although such interactive games are quite immersive, they still fail to fully and truly reflect the user's operating state.
In view of this, U.S. Patent No. 6,795,068 proposes a method that determines the three-dimensional state of an object from its two-dimensional position. As shown in Fig. 1A and Fig. 1B, that patent provides an object (for example a bat) with strong color contrast: the object is divided into an upper portion 301 and a lower portion 303, and the two portions must contrast strongly in color to facilitate image capture. In Fig. 1B, 305 is the two-dimensional image captured by a camera or image sensing device, and image 307 therein is the two-dimensional image of the upper portion 301 of the object. In the two-dimensional image, all information in the x-y directions is known; the ratio between the upper width w2 and the lower width w1 of the object is used to obtain the angle Φ, and thereby the information in the z direction. However, because the object may move among various positions in real three-dimensional space, the apparent dimensions in the two-dimensional image vary, so for accuracy the positions of the upper width w2 and the lower width w1 cannot be chosen arbitrarily; according to that patent, the object must be divided at equal intervals between its two ends, a plurality of width values obtained, and the values averaged.
Although that invention has been successfully commercialized, it still has the following shortcomings. First, the system must predefine the shape of the object; that is, the user cannot use an object of arbitrary shape, but only the preset object. Second, when capturing the image of the object, the color contrast is critical; if the object is partially covered by the user's hand or for other reasons, the system's judgment of the object's shape is affected. Third, the system must continuously compute a plurality of width values for the object, which burdens the processor.
In view of the above shortcomings of the prior art, the present invention proposes a completely different approach that produces 3D stereo information without the above drawbacks. Furthermore, "producing 3D stereo information" in the present invention is not limited to generating a corresponding 3D image on a screen according to that information; for example, further processing the produced 3D information to generate a corresponding reaction, such as representing the intensity with which a bat strikes, also falls within the scope of the present invention.
Summary of the invention
A first objective of the present invention is to provide an object-based 3D stereo information generation apparatus that does not have the shortcomings of the aforementioned prior art.
A second objective of the present invention is to provide an object-based 3D stereo information generation method.
A third objective of the present invention is to provide an interactive system constituted by using the above apparatus or method.
To achieve the above objectives, one embodiment of the present invention provides an object-based 3D stereo information generation method, comprising: obtaining, at a time point, at least two two-dimensional images of the same region; individually extracting objects with features from the at least two two-dimensional images; establishing a corresponding relationship between the objects; and generating 3D stereo information according to the corresponding objects.
In the above method, after the objects are extracted, the two-dimensional objects may further be given marks, or after the 3D information is generated, the objects in the 3D information may be given marks, to simplify subsequent operations.
In addition, according to another embodiment of the present invention, an electronic device for producing 3D stereo information is proposed, comprising: at least two image capture units for individually capturing analog images and producing digital two-dimensional images, wherein the distance between the two image capture units and their respective focal lengths are known; an object extraction device, which receives the digital two-dimensional images individually captured by the two image capture units and extracts object information therefrom; and a processing device, which produces 3D stereo information according to the object information, the distance between the two image capture units, and the respective focal lengths of the two image capture units.
The above electronic device may further include a low-bandwidth communication interface, arranged between the processing device and the at least two image capture units, or integrated within the processing device.
Furthermore, according to yet another embodiment of the present invention, an interactive system for producing 3D stereo information is provided, comprising: at least two image capture units for individually capturing analog images and, through analog-to-digital conversion, producing digital two-dimensional images, wherein the distance between the two image capture units and their respective focal lengths are known; an object extraction device, which receives the digital two-dimensional images individually captured by the two image capture units and extracts object information therefrom; a processing device, which produces 3D stereo information according to the object information, the distance between the two image capture units, and the respective focal lengths of the two image capture units; and an output interface for outputting the 3D stereo information.
The interactive system described in the above embodiment may further comprise a light emitting source, preferably an infrared light emitting source. The light emitting source and the at least two image capture units may be arranged at the two opposite ends of a spatial region, or at the same end of a spatial region; in the latter case, the interactive system additionally includes a reflective element arranged at the other end of the spatial region.
The interactive system described in the above embodiment may further include a low-bandwidth communication interface, arranged between the processing device and the at least two image capture units, or integrated within the processing device.
The objectives, technical contents, features, and effects of the present invention will be easier to understand through the detailed description of specific embodiments below.
Description of drawings
Fig. 1A and Fig. 1B show a prior-art method that determines the three-dimensional state of an object from its two-dimensional position.
Fig. 2 shows one embodiment of the method of the present invention.
Fig. 3 illustrates how, after objects are extracted, spatial and temporal corresponding relationships between objects are established.
Figs. 4A to 4C explain how the two-dimensional information of each group of corresponding objects is converted into 3D stereo information.
Fig. 5A and Fig. 5B show another embodiment of the method of the present invention.
Fig. 6 illustrates how, after objects are extracted, spatial and temporal corresponding relationships between objects are established.
Fig. 7 illustrates a situation that may cause misjudgment of the corresponding relationship.
Fig. 8A and Fig. 8B schematically show hardware circuit embodiments of the present invention.
Figs. 9 to 11 show three embodiments of interactive systems using the method/circuit of the present invention.
Symbol description in the figures
1, 2, 3, 4, 5, 6, 7, 8: objects
11, 12, 13, 14, 15, 16, 17, 18: objects
25, 26, 27, 28: objects
S21-S24: steps
S31-S34: steps
S41-S46: steps
80L, 80R: integrated circuits
81L, 81R: sensors
82L, 82R: object extraction circuits
83L, 83R: processors
84: processor
86: output interface
90: screen
91L, 91R: light emitting sources
92L, 92R: sensors
93: hand-held unit
94: reflective element
Embodiment
The present invention differs from the prior art both in system hardware and in the method of producing 3D stereo information. In the present invention, at least one light emitting source and at least two sensors are provided, and the sensing results of the sensors with respect to the light emitting source are converted into 3D stereo information. According to the present invention, the light emitting source is preferably an infrared light emitting source, for example an infrared light-emitting diode (IR LED); the sensors should correspondingly be infrared sensors. However, adopting other light emitting sources and sensors also falls within the scope of the present invention.
The method flow of the present invention is first described. Referring to Fig. 2, which shows one embodiment of the method of the present invention, suppose two sensors (a left sensor L and a right sensor R) are used to observe the same region; each of the two sensors first converts the analog image it observes into digital two-dimensional image information. The so-called analog image is in fact a plurality of luminous points, containing brightness and chrominance information, captured by the sensor; these luminous points may come from the light source or from light reflected by objects. After the sensor converts the luminous points into digital two-dimensional image information, this information first goes through an "object extraction" step (step S21), in which the two-dimensional image information is analyzed and classified into several "objects".
It must be noted that an "object" here does not refer to an actual object in the real environment. In the present invention, an "object" is in fact a set of pixels in an image that share similar characteristics; it therefore not only need not correspond to an actual object in the real environment, it may not even be a contiguous whole. For example, two unconnected blocks at the upper left and lower right of the screen may be regarded as one object.
There are many ways to classify two-dimensional image information into objects. For example, when a general light emitting source is used, classification may be according to the color of the two-dimensional image, or according to shape, area, the local density of bright/dark spots in the image, image texture (the connectivity of similar pixels), and so on. When an infrared light emitting source is used, classification may be according to the sensed brightness. In the above approaches, not all parts of the two-dimensional image information need to be classified into objects; for example, parts whose brightness is below a certain intensity may be treated as background.
Furthermore, in a preferred practice of the present invention, after an object is extracted, computation need not be performed on the whole object; it suffices to obtain meaningful features of the object. For example, the two-dimensional image may first be binarized according to a certain judgment condition (for example a brightness threshold), the brighter part defined as the object, and features then taken from the object; such features may be, for example, the object's center of gravity, boundary, shape, size, aspect ratio, or particular points (such as endpoints, corner points, and high-curvature points).
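By way of illustration only (this sketch is not part of the disclosure), the binarization-and-feature step just described might look as follows in Python; the threshold value and the particular feature set are invented for the example:

```python
import numpy as np

def extract_object_features(image, threshold=128):
    """Binarize a grayscale frame by a brightness threshold and return
    simple features of the bright region treated as one object:
    center of gravity, bounding box, size, and aspect ratio."""
    mask = image >= threshold          # two-valued (binarized) image
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                    # all background, no object found
    return {
        "centroid": (xs.mean(), ys.mean()),               # center of gravity
        "bbox": (xs.min(), ys.min(), xs.max(), ys.max()),
        "area": int(xs.size),                             # size in pixels
        "aspect": (xs.max() - xs.min() + 1) / (ys.max() - ys.min() + 1),
    }

frame = np.zeros((8, 8), dtype=np.uint8)
frame[2:5, 3:7] = 200                  # a bright 3x4 block stands in for the object
feat = extract_object_features(frame)
print(feat["centroid"], feat["area"])
```

The whole 3x4 block thus reduces to a handful of numbers, which is the point of the feature step: later stages operate on the features rather than on every pixel.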
The purpose of taking features is to simplify complex computation. For example, a large amount of data may be reduced to a simple center-of-gravity vector to speed up subsequent operations; reference may be made to application No. 94147531 filed by the same applicant.
Through the object extraction step S21, objects, or further their features, can be extracted from arbitrary two-dimensional image information. Therefore, unlike the prior art, there is no need to predefine the shape of the object or to highlight it with strong color contrast.
In step S21, objects are individually extracted from the two-dimensional image information obtained by the left and right sensors; then, in step S22, the system gives corresponding relationships to the objects in the left and right images. Referring to Fig. 3, suppose that at time point T1 the objects extracted from the two-dimensional image of the left sensor are 1 and 2, and the objects extracted from the two-dimensional image of the right sensor are 3 and 4. The corresponding relationship may be given according to the center-of-gravity positions of the objects, taking the nearest as corresponding; or according to the overlapping range of their areas, taking the pair with the largest overlap; or according to the number of sides of their shapes (a circle being regarded as a polygon with infinitely many sides), taking the closest; or according to texture, taking the closest texture content; or according to whether the objects meet a preset condition, such as a hollow object enclosed in a dark background, two hollow objects, no hollow object, and so on. In the example shown at the top of Fig. 3, object 1 of the left-sensor image and object 3 of the right-sensor image are regarded as corresponding, and object 2 of the left-sensor image and object 4 of the right-sensor image are regarded as corresponding.
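As an informal sketch (not part of the disclosure), the nearest-centroid variant of this correspondence step could be written as a greedy pairing; the object IDs and centroid coordinates below are invented to echo the 1-3 / 2-4 pairing of Fig. 3, and any of the other criteria (area overlap, side count, texture) could be substituted for the distance function:

```python
from math import dist

def match_by_centroid(left_objs, right_objs):
    """Greedily pair each left-image object with the right-image object
    whose center of gravity is nearest, consuming each right object once."""
    pairs = []
    remaining = list(right_objs.items())
    for lid, lc in left_objs.items():
        rid, rc = min(remaining, key=lambda item: dist(lc, item[1]))
        pairs.append((lid, rid))
        remaining.remove((rid, rc))
    return pairs

# Invented centroids: objects 1, 2 from the left image; 3, 4 from the right
left = {1: (10.0, 40.0), 2: (60.0, 15.0)}
right = {3: (12.0, 41.0), 4: (58.0, 14.0)}
print(match_by_centroid(left, right))
```

With these values the pairing comes out as 1-3 and 2-4, matching the example in the text.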
After the corresponding relationship is established, in step S23 the two-dimensional information of each group of corresponding objects can be converted into 3D stereo information. The conversion is shown in Fig. 4A. Suppose the distance between the centers of the left and right sensors is T, the focal length of each sensor is f, the x coordinate of a certain object in the image obtained by the left sensor is x_l, and its x coordinate in the image obtained by the right sensor is x_r (taking the center of each sensor as the origin, x_r is negative because the object lies to the left of the right sensor's center); let the distance of the object from the sensors be Z. Then, by the similar-triangle theorem:
x_l / f = X / Z, and −x_r / f = (T − X) / Z
so the 3D information of the object can be obtained:
X = (T × x_l) / (x_l − x_r)
Y = (T × y_l) / (x_l − x_r) (where y_l is not shown in the drawing)
Z = f × [T / (x_l − x_r)]
In this way, the 3D information of each point of the object can be produced.
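As an informal numerical check (not part of the disclosure), the similar-triangle relations can be exercised in a few lines of Python; the point coordinates, baseline T, and focal length f below are invented round numbers:

```python
def triangulate(x_l, y_l, x_r, T, f):
    """Recover (X, Y, Z) from matched image coordinates. T is the
    distance between the sensor centers, f the focal length; x_r is
    negative when the object lies left of the right sensor's center."""
    d = x_l - x_r                 # disparity; nonzero for a finite-distance point
    X = T * x_l / d
    Y = T * y_l / d
    Z = f * T / d
    return X, Y, Z

# Project a known point (X=2, Y=1, Z=10) through sensors T=4 apart, f=1,
# then recover it from the two image coordinates.
X, Y, Z = 2.0, 1.0, 10.0
T, f = 4.0, 1.0
x_l = f * X / Z               # projection in the left image
x_r = -f * (T - X) / Z        # projection in the right image (negative)
y_l = f * Y / Z
print(triangulate(x_l, y_l, x_r, T, f))
```

Round-tripping a known point this way confirms that the three formulas are mutually consistent with the two similar-triangle relations they were derived from.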
Fig. 4A depicts the situation where the left and right sensors are parallel and located on the same baseline, but the present invention is not limited thereto. As shown in Fig. 4B, if the left and right sensors are not located on the same baseline and have a relative angle between them, the 3D information of the object can still be calculated from the intrinsic parameters, relative angle, and relative distance of the two sensors. The specific calculation is known to those skilled in the art and is not repeated here; reference may be made, for example, to "Image Processing, Analysis, and Machine Vision", 2nd edition, by Sonka, Hlavac, and Boyle, published by Brooks/Cole Publishing Company, pages 460-469.
If, in the object extraction process, only object features are taken (without preserving the whole object information) in order to speed up computation, then, referring to Fig. 4C, the overall information of the 3D object can still be restored from the features of the object together with preset conversion rules. Of course, this restored information may not fully agree with the original whole-object information, but in practice it is often unnecessary to know the complete 3D object information; for example, an application may only need to know the 3D displacement of the object between two different time points, in which case the exact shape need not be known in detail. As shown in Fig. 4C, suppose the extracted features are two endpoints; after the 3D coordinates of the endpoints are computed, the whole 3D information of the object can be restored according to conversion rules such as a preset aspect ratio or shape.
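A minimal sketch of this restoration idea (not part of the disclosure) is given below; the 0.2 width/length ratio is an invented placeholder for the "preset aspect ratio", and the endpoint coordinates are arbitrary:

```python
from math import dist

def restore_from_endpoints(p0, p1, aspect=0.2):
    """Rebuild a coarse whole-object description from just its two 3D
    endpoint features plus a preset width/length ratio, in the spirit
    of Fig. 4C: center, length, and an assumed width."""
    length = dist(p0, p1)
    center = tuple((a + b) / 2 for a, b in zip(p0, p1))
    return {"center": center, "length": length, "width": aspect * length}

# Two triangulated endpoints of a bat-like object (illustrative values)
obj = restore_from_endpoints((0.0, 0.0, 0.0), (3.0, 4.0, 0.0))
print(obj)
```

The result is only as faithful as the preset rule, which is exactly the trade-off the text describes: coarse but sufficient when all that matters is, say, displacement between time points.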
According to the present invention, after the 3D information of each object is obtained, the 3D object can be given a mark (for example, a feature code or ID code) in the next step S24. The purpose of this step is to simplify the system's memory usage and computation.
Referring again to Fig. 3, suppose that at a time point Tn after T1, the two-dimensional objects obtained by the left and right sensors are as shown at the bottom of the figure. After corresponding relationships are likewise given (5-7 corresponding, 6-8 corresponding), the 3D information of the two objects can be calculated and marks given. At this time, the 3D object information obtained from the 5-7 correspondence is closest to the 3D object information previously obtained from the 1-3 correspondence, so the 3D displacement between the two can be calculated by comparison; the interactive system can then react correspondingly according to this displacement.
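As a rough sketch of this marking-and-comparison step (not part of the disclosure; the IDs "A"/"B" and all coordinates are invented), each newly computed 3D object can be matched to the nearest previously marked object and its displacement reported:

```python
from math import dist

def track_displacement(prev_marked, current):
    """For each marked 3D object from the previous time point, find the
    nearest newly computed 3D position and return the displacement."""
    moves = {}
    for oid, prev_pos in prev_marked.items():
        nearest = min(current, key=lambda p: dist(prev_pos, p))
        moves[oid] = tuple(c - p for c, p in zip(nearest, prev_pos))
    return moves

prev_marked = {"A": (2.0, 1.0, 10.0), "B": (-3.0, 0.0, 8.0)}
current = [(2.5, 1.0, 9.0), (-3.0, 0.5, 8.0)]   # positions at the later time point
print(track_displacement(prev_marked, current))
```

The displacement vectors are what an interactive system would react to, for example to render a swing or a hit of the corresponding strength.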
Another method embodiment of the present invention is described below; see Figs. 5A, 5B, and 6. Referring first to Fig. 5A, which shows the method steps at time point T1: this embodiment differs from the previous one in that, after the object extraction step S31, the two-dimensional objects are marked first (step S32). After marking, the corresponding relationships are established (step S33; at this point the spatial correspondences 11-13 and 12-14 of Fig. 6 have been established). After the correspondences are established, step S34 is performed to produce 3D information according to the two-dimensional information of each group of corresponding objects; however, after the 3D information is produced, no mark is given to the 3D objects.
Referring next to Fig. 5B, which shows the method steps at time point Tn, a certain time point after T1: after the object extraction step S41, the system first compares the extracted two-dimensional objects with the marked objects of the previous time point T1, and establishes the relationship between each object and a previously marked object (step S42); this is illustrated in Fig. 6, where the correspondences 11-15 and 13-17 on the time axis have been established. Since the 11-13 correspondence already exists, the 15-17 correspondence can be derived directly by logical operation, without a comparison process. In this way, the operation time and hardware burden of mutually comparing a plurality of objects can be saved. Afterwards, in step S43, the two-dimensional objects are marked again. After marking, the 3D information can be produced according to each group of corresponding two-dimensional objects (step S45). According to the present invention, before or after step S45, a step S44 or S46 may optionally be performed (either one may be chosen, both may be omitted, or of course both may be adopted), in which the corresponding relationships between the objects are verified once more.
A situation that may require verification is as follows. Suppose the corresponding relationships of objects are established according to shape, and, as shown in Fig. 7, the actual three-dimensional bodies behind the objects are cylinders: object 25 is the top face of a first cylinder and object 28 is the side face of the same cylinder, while object 26 is the side face of a second cylinder and object 27 is the top face of that same cylinder. Because of the viewing angles of the sensors and the movement of the actual objects, at time point Tn the shapes of objects 25 and 27 become very similar, as do the shapes of objects 26 and 28. At this time, the system may misjudge object 25 as corresponding to 27, and object 26 as corresponding to 28. Note that if only one actual object changes angle (for example, only the first cylinder changes while the second cylinder is motionless), no misjudgment occurs, because the system's computation chooses, among the possible combinations of correspondences, the one with the lowest difference; misjudgment can arise only when two actual objects move and happen to coincide in this way.
Although the probability of such a misjudgment is extremely low, according to the present invention the corresponding relationships between the two-dimensional objects can be carefully verified before step S45 performs the computation that produces the 3D information. The verification may, for example, first obtain the center of gravity of each two-dimensional object, then check whether, for each group of corresponding two-dimensional objects, the center-of-gravity distance is indeed the smallest. In the example of Fig. 7, when the two-dimensional images obtained by the left and right sensors are overlaid, the center-of-gravity positions of objects 25 and 28 are closest, and those of objects 26 and 27 are closest, so the system can rebuild the corresponding relationships accordingly. After the correspondences are rebuilt, the marks are given again.
Deciding according to center-of-gravity proximity is only one verification method. Other methods are also feasible; for example, the overlapping area of each group of corresponding two-dimensional objects can be calculated and checked to be the largest among all possible correspondences; or, if the correspondences were not originally established according to shape, all correspondences can at this point be checked for shape agreement.
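An overlap-area verification of this kind might be sketched as follows (not part of the disclosure; the bounding boxes are invented, and real objects would use pixel masks rather than boxes):

```python
def bbox_overlap(a, b):
    """Overlap area of two axis-aligned boxes given as (x0, y0, x1, y1)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def verify_pair(left_box, right_boxes, claimed):
    """A claimed correspondence passes only if, among all candidate
    right-image objects, it has the largest overlap with the left object."""
    best = max(range(len(right_boxes)),
               key=lambda i: bbox_overlap(left_box, right_boxes[i]))
    return best == claimed

left_box = (0, 0, 4, 4)
right_boxes = [(3, 3, 8, 8), (1, 1, 5, 5)]   # candidate matches in the right image
print(verify_pair(left_box, right_boxes, claimed=0))  # the shape-based claim fails here
```

When a claimed pair fails this check, the system would rebuild the correspondence using the better-overlapping candidate, just as the text describes for the two-cylinder case.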
If the verification of step S44 is not performed before step S45, then after step S45 produces the 3D information, it can be checked whether the marks originally given to the two-dimensional objects agree with the 3D objects (step S46). If they do not agree, the corresponding relationships are rebuilt and the 3D information is recalculated. As mentioned above, steps S44 and S46 may be performed alternatively, both omitted, or both performed (in practice, adopting only one should be quite sufficient).
The various implementation methods of the present invention described above can be realized by the hardware circuits shown in Fig. 8A or Fig. 8B. Referring first to Fig. 8A, the images sensed by the left and right sensors 81L and 81R are separately sent to corresponding object extraction circuits 82L and 82R (the object extraction circuit may, for example, be a circuit that analyzes brightness), and the object information produced by the object extraction circuits is sent to corresponding processors 83L and 83R. The processor can be any circuit with a data computation function, for example a CPU, MCU, DSP, or even an ASIC. The sensor, object extraction circuit, and processor may each be a separate integrated circuit, or all three may be integrated into one integrated circuit; or, as shown in the figure, the sensor and object extraction circuit may be integrated into one integrated circuit 80L or 80R, with the processor as a separate integrated circuit. In the arrangement shown in Fig. 8A, one of the two processors (for example 83L) transmits its computed two-dimensional data to the other processor (for example 83R), and the latter performs the calculation on the corresponding two-dimensional object data, produces the 3D information, and outputs it through the output interface 86.
Fig. 8B shows another hardware approach, in which the two-dimensional object data obtained after extraction are transmitted to the same processor 84, where the 3D information is calculated and then output through the output interface 86.
As mentioned above, compared with the prior art, the present invention does not need to predefine the shape of the object, and the use of the object is not restricted by any color contrast condition. In addition, the above hardware circuit architectures reveal another important advantage of the present invention. Between the two processors 83L and 83R of Fig. 8A, or between the circuits 80L and 80R of Fig. 8B and the processor, only a small amount of object information, or an even smaller amount of feature information (rather than whole, complex image information), needs to be transmitted; therefore the bandwidth required of the transmitting and receiving interface (communication interface) is very low, and the processing speed is very high. For example, at an image sampling rate (frame rate) of 200 frames/sec, if the object information of every frame is less than 100 bytes, the required bandwidth is only 20 KByte/sec. In particular, if the best mode of the present invention is adopted, using an infrared light emitting source, extracting objects by brightness, and employing center-of-gravity calculation and object marking, the hardware burden can be reduced still further: not only the transmitting and receiving interface, but also the computation required of the processor, can be reduced to a minimum. Therefore, as mentioned above, the processor need not be a high-end CPU, MCU, or DSP, and can even be made as an ASIC. The communication interface, not shown in the figures, may be independently arranged between the two processors 83L and 83R of Fig. 8A, or integrated within processor 83L or 83R, or arranged between the circuits 80L and 80R of Fig. 8B and the processor 84, or integrated within the processor 84.
Using the method and circuits of the present invention, interactive systems of various forms can be constructed, for example as shown in Figs. 9-11. In Fig. 9, the light emitting sources 91L and 91R are arranged at one end of the screen 90, and the sensors 92L and 92R are arranged at one end of a hand-held unit 93. (The light emitting source is preferably an infrared light emitting source, likewise below; also, two light emitting sources are shown in the figure only as an example, and at least one is needed.) In Fig. 10, the light emitting sources 91L and 91R are arranged at one end of the hand-held unit 93, and the sensors 92L and 92R are arranged at one end of the screen 90. In both of the above arrangements, the light emitting sources and the sensors are placed at the two opposite ends of a spatial region. In Fig. 11, the light emitting sources 91L and 91R and the sensors 92L and 92R are all arranged at one end of the screen 90, and the hand-held unit 93 only needs to be provided with a reflective element 94 made of reflective material. There may be one or more reflective elements 94, of any shape. The arrangement shown in Fig. 11 has a further advantage: the hand-held unit 93 needs no power supply at all, which is superior to the prior art.
The interactive system of the present invention can be applied to various gaming platforms, can serve as a three-dimensional pointing device, for example as an input device for portable devices such as PDAs, mobile phones, and mobile computers, or can be applied to any device that needs to track changes in the position of a three-dimensional body.
The present invention has been described above with reference to preferred embodiments; the description is only intended to make the content of the present invention easy for those skilled in the art to understand, and is not intended to limit the scope of the present invention. Those skilled in the art can immediately conceive various equivalent variations within the spirit of the present invention. For example, each embodiment takes two sensors as an example, but more than two sensors may of course be used; the two sensors shown are configured left and right, but may of course be changed to an up-down configuration. The sensors serve to capture images, so replacing them with other devices capable of capturing images is of course also feasible. Moreover, the circuits shown need not each be a separate integrated circuit; besides the combinations described, other parts may also be integrated into the same integrated circuit, for example integrating the output interface with the processor, or even integrating the sensors, object extraction circuits, processor, and output interface all into one integrated circuit, and so on. Likewise, the screen 90 may be a television screen, a dedicated game screen, or the like; various equivalent variations are possible. Therefore, all equivalent variations or modifications according to the concept and spirit of the present invention should be included in the claims of the present invention.

Claims (42)

1. An interactive system for generating three-dimensional stereo information, comprising:
at least two image capturing units for individually capturing analog images and producing digital two-dimensional images through analog-to-digital conversion, wherein the distance between the two image capturing units and the respective focal lengths of the two image capturing units are known;
an object extraction device, which receives the digital two-dimensional images individually captured by the two image capturing units and extracts object information therefrom;
a processing device, which generates three-dimensional stereo information according to the object information, the distance between the two image capturing units, and the respective focal lengths of the two image capturing units; and
an output interface for outputting the three-dimensional stereo information.
2. The interactive system for generating three-dimensional stereo information as claimed in claim 1, further comprising a light emitting source.
3. The interactive system for generating three-dimensional stereo information as claimed in claim 2, wherein the light emitting source is an infrared light emitting source, and the at least two image capturing units capture infrared brightness images.
4. The interactive system for generating three-dimensional stereo information as claimed in claim 2, wherein the light emitting source and the at least two image capturing units are placed at opposite ends of a spatial region.
5. The interactive system for generating three-dimensional stereo information as claimed in claim 4, wherein the interactive system includes a hand-held unit on which the light emitting source or the at least two image capturing units are provided.
6. The interactive system for generating three-dimensional stereo information as claimed in claim 2, wherein the light emitting source and the at least two image capturing units are located at the same end of a spatial region, and the interactive system further includes a reflective block located at the other end of the spatial region.
7. The interactive system for generating three-dimensional stereo information as claimed in claim 6, wherein the interactive system includes a hand-held unit on which the aforementioned reflective block is provided.
8. The interactive system for generating three-dimensional stereo information as claimed in claim 1, wherein the object extraction device comprises at least two units, each correspondingly paired with one of the at least two image capturing units.
9. The interactive system for generating three-dimensional stereo information as claimed in claim 8, wherein the at least two object extraction device units are individually integrated with their corresponding image capturing units into integrated circuits.
10. The interactive system for generating three-dimensional stereo information as claimed in claim 1, wherein the object extraction device extracts objects according to one of the following: the color of the image, the shape of the image, the area of the image, the local bright/dark spot density of the image, the brightness of the image, the texture of the image, or a combination of two or more of the above.
11. The interactive system for generating three-dimensional stereo information as claimed in claim 1, further comprising a low-bandwidth communication interface arranged between the processing device and the object extraction device, or incorporated within the processing device.
12. The interactive system for generating three-dimensional stereo information as claimed in claim 1, wherein the processing device comprises at least two units, each correspondingly paired with one of the at least two image capturing units.
13. The interactive system for generating three-dimensional stereo information as claimed in claim 12, further comprising a low-bandwidth communication interface arranged between the two units of the processing device, or incorporated within one of the processing device units.
14. The interactive system for generating three-dimensional stereo information as claimed in claim 8, wherein the processing device comprises at least two units, each correspondingly paired with one of the at least two object extraction device units.
15. The interactive system for generating three-dimensional stereo information as claimed in claim 14, further comprising a low-bandwidth communication interface arranged between the two units of the processing device, or incorporated within one of the processing device units.
16. The interactive system for generating three-dimensional stereo information as claimed in claim 1, wherein the interactive system is one of the following: a gaming interactive system, a portable electronic device, or a three-dimensional object tracking device.
17. The interactive system for generating three-dimensional stereo information as claimed in claim 11, 13 or 15, wherein the bandwidth of said low-bandwidth communication interface is 20 KByte/sec or less.
18. An electronic device for generating three-dimensional stereo information, comprising:
at least two image capturing units for individually capturing analog images and producing digital two-dimensional images, wherein the distance between the two image capturing units and their respective focal lengths are known;
an object extraction device, which receives the digital two-dimensional images individually captured by the two image capturing units and extracts object information therefrom; and
a processing device, which generates three-dimensional stereo information according to the object information, the distance between the two image capturing units, and the respective focal lengths of the two image capturing units.
19. The electronic device for generating three-dimensional stereo information as claimed in claim 18, wherein the object extraction device comprises at least two units, each correspondingly paired with one of the at least two image capturing units.
20. The electronic device for generating three-dimensional stereo information as claimed in claim 19, wherein the at least two object extraction device units are individually integrated with their corresponding image capturing units into integrated circuits.
21. The electronic device for generating three-dimensional stereo information as claimed in claim 18, wherein the object extraction device extracts objects according to one of the following: the color of the image, the shape of the image, the area of the image, the local bright/dark spot density of the image, the brightness of the image, the texture of the image, or a combination of two or more of the above.
22. The electronic device for generating three-dimensional stereo information as claimed in claim 18, further comprising a low-bandwidth communication interface arranged between the processing device and the object extraction device, or incorporated within the processing device.
23. The electronic device for generating three-dimensional stereo information as claimed in claim 18, wherein the processing device comprises at least two units, each correspondingly paired with one of the at least two image capturing units.
24. The electronic device for generating three-dimensional stereo information as claimed in claim 23, further comprising a low-bandwidth communication interface arranged between the two units of the processing device, or incorporated within one of the processing device units.
25. The electronic device for generating three-dimensional stereo information as claimed in claim 20, wherein the processing device comprises at least two units, each correspondingly paired with one of the at least two object extraction device units.
26. The electronic device for generating three-dimensional stereo information as claimed in claim 25, further comprising a low-bandwidth communication interface arranged between the two units of the processing device, or incorporated within one of the processing device units.
27. The electronic device for generating three-dimensional stereo information as claimed in claim 22, 24 or 26, wherein the bandwidth of said low-bandwidth communication interface is 20 KByte/sec or less.
28. An object-based method for generating three-dimensional stereo information, comprising:
at a first time point, obtaining at least two two-dimensional images of the same area;
individually extracting object information from the at least two two-dimensional images;
establishing a correspondence between objects; and
generating three-dimensional stereo information according to the corresponding objects.
29. The method for generating three-dimensional stereo information as claimed in claim 28, further comprising:
at a second time point, obtaining at least two further two-dimensional images of the same area;
individually extracting object information from the further at least two two-dimensional images;
establishing a correspondence between objects;
generating three-dimensional stereo information according to the corresponding objects; and
comparing the three-dimensional stereo information of the first time point and the second time point to determine a displacement.
30. The method for generating three-dimensional stereo information as claimed in claim 28, further comprising: after generating the three-dimensional stereo information, assigning labels to the objects in the three-dimensional stereo information.
31. The method for generating three-dimensional stereo information as claimed in claim 29, further comprising: after generating the three-dimensional stereo information, assigning labels to the objects in the three-dimensional stereo information, wherein the step of comparing the three-dimensional stereo information of the first time point and the second time point determines the objects to be compared according to the labels in the three-dimensional stereo information.
32. The method for generating three-dimensional stereo information as claimed in claim 28, further comprising: after extracting the object information, assigning labels to the two-dimensional objects.
33. The method for generating three-dimensional stereo information as claimed in claim 32, further comprising:
at a second time point, obtaining at least two further two-dimensional images of the same area;
individually extracting object information from the further at least two two-dimensional images;
obtaining the correspondence between objects according to the aforementioned object labels; and
generating three-dimensional stereo information according to the corresponding objects.
34. The method for generating three-dimensional stereo information as claimed in claim 33, further comprising: comparing the three-dimensional stereo information of the first time point and the second time point to determine a displacement.
35. The method for generating three-dimensional stereo information as claimed in claim 33, further comprising: after obtaining the object correspondence according to the aforementioned object labels, verifying whether the correspondence is correct.
36. The method for generating three-dimensional stereo information as claimed in claim 35, wherein the correspondence is verified according to one of the following: whether the centroid positions of the corresponding objects are the closest; whether the overlapping area of the corresponding objects is the largest; whether the numbers of shape edges of the corresponding objects are the closest; or a combination of two or more of the above.
37. The method for generating three-dimensional stereo information as claimed in claim 33, further comprising: after generating the three-dimensional stereo information at the second time point, verifying whether the three-dimensional stereo information conforms to the labels.
38. The method for generating three-dimensional stereo information as claimed in claim 28, wherein said step of extracting object information extracts objects according to one of the following: the color of the image, the shape of the image, the area of the image, the local bright/dark spot density of the image, the brightness of the image, the texture of the image, or a combination of two or more of the above.
39. The method for generating three-dimensional stereo information as claimed in claim 28, wherein said step of extracting object information further comprises: extracting characteristic information of the objects.
40. The method for generating three-dimensional stereo information as claimed in claim 39, wherein said object characteristics are one of the following: object centroid, object boundary, object shape, object size, object particular points, or a combination of two or more of the above.
41. The method for generating three-dimensional stereo information as claimed in claim 28, wherein said step of establishing a correspondence establishes the correspondence according to one of the following: taking the objects whose centroid positions are the closest; taking the objects whose overlapping areas are the largest; taking the objects whose numbers of shape edges are the closest; taking the objects whose texture contents are the closest; according to whether an object meets a predetermined condition; or a combination of two or more of the above.
42. The method for generating three-dimensional stereo information as claimed in claim 28, wherein the obtained two-dimensional images are infrared images.
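The correspondence criteria of claims 36 and 41 are recited only descriptively. Purely as an illustration (not part of the claims; the function and all names are hypothetical), the first criterion of claim 41, pairing each object with the object in the other image whose centroid position is the closest, can be sketched in Python as:

```python
import math

def match_by_centroid(objects_left, objects_right):
    """Greedily pair objects from the two two-dimensional images by
    nearest centroid.

    Each object is a (cx, cy) centroid tuple. Returns a list of
    (left_index, right_index) pairs; each right object is used at most
    once, so every pairing is one-to-one.
    """
    pairs, used = [], set()
    for i, (lx, ly) in enumerate(objects_left):
        best_j, best_d = None, float("inf")
        for j, (rx, ry) in enumerate(objects_right):
            if j in used:
                continue
            d = math.hypot(lx - rx, ly - ry)  # Euclidean centroid distance
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            pairs.append((i, best_j))
            used.add(best_j)
    return pairs
```

A production implementation would typically combine this with the other criteria of claims 36 and 41 (overlap area, edge count, texture) to verify each pairing, as claim 35 suggests.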
CN 200710092172 2007-04-02 2007-04-02 Apparatus and method for generating three-dimensional information based on object as well as using interactive system Expired - Fee Related CN101281422B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200710092172 CN101281422B (en) 2007-04-02 2007-04-02 Apparatus and method for generating three-dimensional information based on object as well as using interactive system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200710092172 CN101281422B (en) 2007-04-02 2007-04-02 Apparatus and method for generating three-dimensional information based on object as well as using interactive system

Publications (2)

Publication Number Publication Date
CN101281422A true CN101281422A (en) 2008-10-08
CN101281422B CN101281422B (en) 2012-02-08

Family

ID=40013916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200710092172 Expired - Fee Related CN101281422B (en) 2007-04-02 2007-04-02 Apparatus and method for generating three-dimensional information based on object as well as using interactive system

Country Status (1)

Country Link
CN (1) CN101281422B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102169364A (en) * 2010-02-26 2011-08-31 原相科技股份有限公司 Interaction module applied to stereoscopic interaction system and method of interaction module
CN102340675A (en) * 2010-07-14 2012-02-01 深圳Tcl新技术有限公司 2D/3D conversion device and method for realizing same
CN102681656A (en) * 2011-01-17 2012-09-19 联发科技股份有限公司 Apparatuses and methods for providing 3d man-machine interface (mmi)
CN102799264A (en) * 2012-04-18 2012-11-28 友达光电股份有限公司 Three-dimensional space interaction system
CN102981599A (en) * 2011-09-05 2013-03-20 硕擎科技股份有限公司 Three-dimensional human-computer interface system and method thereof
CN103124985A (en) * 2010-09-29 2013-05-29 阿尔卡特朗讯 Method and arrangement for censoring content in three-dimensional images
CN103389815A (en) * 2012-05-08 2013-11-13 原相科技股份有限公司 Method and system for detecting movement of object and outputting command
CN103400543A (en) * 2013-07-18 2013-11-20 贵州宝森科技有限公司 3D (three-dimensional) interactive display system and display method thereof
CN103916661A (en) * 2013-01-02 2014-07-09 三星电子株式会社 Display method and display apparatus
CN104423563A (en) * 2013-09-10 2015-03-18 智高实业股份有限公司 Non-contact type real-time interaction method and system thereof
US9983685B2 (en) 2011-01-17 2018-05-29 Mediatek Inc. Electronic apparatuses and methods for providing a man-machine interface (MMI)
CN112790582A (en) * 2019-11-14 2021-05-14 原相科技股份有限公司 Electric rice cooker and liquid level determination method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003094114A1 (en) * 2002-05-03 2003-11-13 Koninklijke Philips Electronics N.V. Method of producing and displaying an image of a 3 dimensional volume
CN100338434C (en) * 2003-02-06 2007-09-19 株式会社高永科技 Thrre-dimensional image measuring apparatus
JP2008517368A (en) * 2004-10-15 2008-05-22 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ 3D rendering application system using hands

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102169364B (en) * 2010-02-26 2013-03-27 原相科技股份有限公司 Interaction module applied to stereoscopic interaction system and method of interaction module
CN102169364A (en) * 2010-02-26 2011-08-31 原相科技股份有限公司 Interaction module applied to stereoscopic interaction system and method of interaction module
CN102340675A (en) * 2010-07-14 2012-02-01 深圳Tcl新技术有限公司 2D/3D conversion device and method for realizing same
CN102340675B (en) * 2010-07-14 2013-11-20 深圳Tcl新技术有限公司 2D/3D conversion device and method for realizing same
CN103124985A (en) * 2010-09-29 2013-05-29 阿尔卡特朗讯 Method and arrangement for censoring content in three-dimensional images
US9632626B2 (en) 2011-01-17 2017-04-25 Mediatek Inc Apparatuses and methods for providing a 3D man-machine interface (MMI)
CN102681656A (en) * 2011-01-17 2012-09-19 联发科技股份有限公司 Apparatuses and methods for providing 3d man-machine interface (mmi)
CN105022498B (en) * 2011-01-17 2018-06-19 联发科技股份有限公司 Electronic device and its method
US9983685B2 (en) 2011-01-17 2018-05-29 Mediatek Inc. Electronic apparatuses and methods for providing a man-machine interface (MMI)
CN105022498A (en) * 2011-01-17 2015-11-04 联发科技股份有限公司 Electronic apparatus and method thereof
CN102681656B (en) * 2011-01-17 2015-06-10 联发科技股份有限公司 Apparatuses and methods for providing 3d man-machine interface (mmi)
CN102981599A (en) * 2011-09-05 2013-03-20 硕擎科技股份有限公司 Three-dimensional human-computer interface system and method thereof
CN102799264A (en) * 2012-04-18 2012-11-28 友达光电股份有限公司 Three-dimensional space interaction system
CN103389815B (en) * 2012-05-08 2016-08-03 原相科技股份有限公司 Detecting object moves method and the system thereof of output order
CN103389815A (en) * 2012-05-08 2013-11-13 原相科技股份有限公司 Method and system for detecting movement of object and outputting command
CN103916661A (en) * 2013-01-02 2014-07-09 三星电子株式会社 Display method and display apparatus
CN103400543A (en) * 2013-07-18 2013-11-20 贵州宝森科技有限公司 3D (three-dimensional) interactive display system and display method thereof
CN104423563A (en) * 2013-09-10 2015-03-18 智高实业股份有限公司 Non-contact type real-time interaction method and system thereof
CN112790582A (en) * 2019-11-14 2021-05-14 原相科技股份有限公司 Electric rice cooker and liquid level determination method
CN112790582B (en) * 2019-11-14 2022-02-01 原相科技股份有限公司 Electric rice cooker and liquid level determination method

Also Published As

Publication number Publication date
CN101281422B (en) 2012-02-08

Similar Documents

Publication Publication Date Title
CN101281422B (en) Apparatus and method for generating three-dimensional information based on object as well as using interactive system
US8605987B2 (en) Object-based 3-dimensional stereo information generation apparatus and method, and interactive system using the same
JP6976270B2 (en) Remote determination of the amount stored in a container in a geographic area
Aggarwal et al. Human activity recognition from 3d data: A review
CN102122392B (en) Information processing apparatus, information processing system, and information processing method
CN110246163B (en) Image processing method, image processing device, image processing apparatus, and computer storage medium
JP5783885B2 (en) Information presentation apparatus, method and program thereof
CN103252778B (en) For estimating robot location's Apparatus for () and method therefor
CN108292362A (en) Gesture identification for cursor control
CN112949577B (en) Information association method, device, server and storage medium
CN102640185A (en) Method, computer program, and device for hybrid tracking of real-time representations of objects in image sequence
US20170213396A1 (en) Virtual changes to a real object
CN106524909B (en) Three-dimensional image acquisition method and device
CN109754461A (en) Image processing method and related product
CN104700393B (en) The registration of multiple laser scannings
CN109727275A (en) Object detection method, device, system and computer readable storage medium
CN105934757A (en) Method and apparatus for detecting incorrect associations between keypoints of first image and keypoints of second image
CN107229887A (en) Multi-code scanning device and multi-code scan method
CN107403160A (en) Image detecting method, equipment and its storage device in a kind of intelligent driving scene
US11373329B2 (en) Method of generating 3-dimensional model data
Zeng et al. The equipment detection and localization of large-scale construction jobsite by far-field construction surveillance video based on improving YOLOv3 and grey wolf optimizer improving extreme learning machine
CN106023307A (en) Three-dimensional model rapid reconstruction method and system based on field environment
Gupta et al. Augmented reality system using lidar point cloud data for displaying dimensional information of objects on mobile phones
JP5700221B2 (en) Marker determination device, marker determination detection system, marker determination detection device, marker, marker determination method and program thereof
CN114898044A (en) Method, apparatus, device and medium for imaging detection object

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120208
