CN109803089B - Electronic equipment and mobile platform


Info

Publication number: CN109803089B
Application number: CN201910008293.0A
Authority: CN (China)
Prior art keywords: time, initial depth, light, flight, depth image
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN109803089A
Inventor: 张学勇
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910008293.0A
Publication of CN109803089A
Application granted
Publication of CN109803089B

Landscapes

  • Optical Radar Systems And Details Thereof (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an electronic device and a mobile platform. The electronic device includes a body and a plurality of time-of-flight components disposed at a plurality of different orientations on the body. Each time-of-flight component includes two light emitters, each with a field angle of any value from 80 degrees to 120 degrees, and one light receiver with a field angle of any value from 180 degrees to 200 degrees. The light emitters emit laser pulses toward the outside of the body, and the light receiver receives the laser pulses emitted by its two corresponding light emitters and reflected by the photographed target. The light emitters of the plurality of time-of-flight components emit laser light simultaneously, and the light receivers of the plurality of time-of-flight components are exposed simultaneously to acquire a panoramic depth image. Because the light emitters in the different orientations of the body emit laser light at the same time and the light receivers are exposed at the same time, the panoramic depth image can be acquired in a single capture.

Description

Electronic equipment and mobile platform
Technical Field
The present application relates to the field of image acquisition technologies, and more particularly, to an electronic device and a mobile platform.
Background
In order to diversify the functions of the electronic device, a depth image acquiring device may be provided on the electronic device to acquire a depth image of a subject. However, current depth image acquiring devices acquire only a depth image in a single direction or within a single angular range, so the depth information obtained is limited.
Disclosure of Invention
The embodiment of the application provides electronic equipment and a mobile platform.
The electronic equipment comprises a body and a plurality of time-of-flight components arranged on the body, wherein the time-of-flight components are respectively positioned at a plurality of different orientations of the body, each time-of-flight component comprises two phototransmitters and one photoreceiver, the field angle of each phototransmitter is any value from 80 degrees to 120 degrees, the field angle of each photoreceiver is any value from 180 degrees to 200 degrees, the phototransmitters are used for transmitting laser pulses to the outside of the body, and the photoreceivers are used for receiving the laser pulses transmitted by the corresponding two phototransmitters reflected by a photographed target; the light emitters of the plurality of time-of-flight components emit laser light simultaneously and the light receivers of the plurality of time-of-flight components are exposed simultaneously to acquire a panoramic depth image.
The mobile platform of the embodiment of the application comprises a body and a plurality of time-of-flight components arranged on the body, wherein the plurality of time-of-flight components are respectively positioned at a plurality of different orientations of the body, each time-of-flight component comprises two phototransmitters and one photoreceiver, the field angle of each phototransmitter is any value from 80 degrees to 120 degrees, the field angle of each photoreceiver is any value from 180 degrees to 200 degrees, the phototransmitters are used for transmitting laser pulses to the outside of the body, and the photoreceivers are used for receiving the laser pulses transmitted by the corresponding two phototransmitters reflected by a photographed target; the light emitters of the plurality of time-of-flight components emit laser light simultaneously and the light receivers of the plurality of time-of-flight components are exposed simultaneously to acquire a panoramic depth image.
In the electronic equipment and the mobile platform of the embodiments of the application, the light emitters located in the different orientations of the body emit laser light simultaneously and the light receivers are exposed simultaneously, so that the panoramic depth image can be acquired in a single capture.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic structural diagram of an electronic device according to some embodiments of the present application;
FIG. 2 is a block diagram of an electronic device according to some embodiments of the present application;
FIG. 3 is a schematic diagram of a light emitter of a time-of-flight component according to some embodiments of the present application;
FIG. 4 is a schematic diagram of an application scenario of an electronic device according to some embodiments of the present application;
FIG. 5 is a schematic diagram of a coordinate system for initial depth image stitching according to some embodiments of the present application;
fig. 6 to 10 are schematic views of application scenarios of an electronic device according to some embodiments of the present application;
fig. 11-14 are schematic structural views of a mobile platform according to some embodiments of the present disclosure.
Detailed Description
Embodiments of the present application will be further described below with reference to the accompanying drawings. The same or similar reference numbers in the drawings identify the same or similar elements or elements having the same or similar functionality throughout. The embodiments of the present application described below in conjunction with the drawings are exemplary only and should not be construed as limiting the present application.
Referring to fig. 1 and 2 together, an electronic device 100 according to an embodiment of the present disclosure includes a body 10, a time-of-flight assembly 20, a camera assembly 30, a microprocessor 40, and an application processor 50.
The body 10 includes a plurality of different orientations. For example, in fig. 1, the body 10 has four different orientations arranged in the clockwise direction: a first orientation, a second orientation, a third orientation, and a fourth orientation, wherein the first orientation is opposite to the third orientation and the second orientation is opposite to the fourth orientation. The first orientation corresponds to the upper side of the body 10, the second orientation to the right side of the body 10, the third orientation to the lower side of the body 10, and the fourth orientation to the left side of the body 10.
The time-of-flight assemblies 20 are disposed on the body 10. The number of time-of-flight assemblies 20 may be plural, with the plurality of time-of-flight assemblies 20 located in a plurality of different orientations of the body 10. Specifically, the number of time-of-flight assemblies 20 may be two, namely time-of-flight assemblies 20a and 20b. The time-of-flight assembly 20a is disposed in the first orientation and the time-of-flight assembly 20b is disposed in the third orientation. Of course, the number of time-of-flight assemblies 20 may also be four (or any other number greater than two), with two additional time-of-flight assemblies 20 provided in the second and fourth orientations, respectively. In the embodiments of the present application, two time-of-flight assemblies 20 are used for illustration. It can be understood that two time-of-flight assemblies 20 are enough to obtain a panoramic depth image (a panoramic depth image here means a depth image whose field angle is greater than or equal to 180 degrees; for example, the field angle of the panoramic depth image may be 180 degrees, 240 degrees, 360 degrees, 480 degrees, 720 degrees, and the like), which helps reduce the manufacturing cost, volume, and power consumption of the electronic device 100. The electronic device 100 of the present embodiment may be a portable electronic device provided with a plurality of time-of-flight assemblies 20, such as a mobile phone, a tablet computer, or a notebook computer; in this case, the body 10 may be a mobile phone body, a tablet computer body, a notebook computer body, and the like. For an electronic device 100 with strict thickness requirements, such as a mobile phone whose body must be thin, time-of-flight assemblies 20 cannot be installed on the sides of the body; obtaining the panoramic depth image with only two time-of-flight assemblies 20 solves this problem, with the two time-of-flight assemblies 20 installed on the front and the back of the mobile phone body, respectively. In addition, acquiring the panoramic depth image with only two time-of-flight assemblies 20 also reduces the amount of computation needed to form the panoramic depth image.
Each time-of-flight assembly 20 includes two optical transmitters 22 and one optical receiver 24. The light emitters 22 are used for emitting laser pulses to the outside of the body 10, and the light receiver 24 is used for receiving the laser pulses emitted by the corresponding two light emitters 22 and reflected by the photographed target. Specifically, the time-of-flight assembly 20a includes an optical transmitter 222a, an optical transmitter 224a, and an optical receiver 24a, and the time-of-flight assembly 20b includes an optical transmitter 222b, an optical transmitter 224b, and an optical receiver 24b. The light emitter 222a and the light emitter 224a both emit laser pulses toward the first orientation outside the body 10, the light emitter 222b and the light emitter 224b both emit laser pulses toward the third orientation outside the body 10, the light receiver 24a receives the laser pulses emitted by the light emitter 222a and the light emitter 224a and reflected by the subject in the first orientation, and the light receiver 24b receives the laser pulses emitted by the light emitter 222b and the light emitter 224b and reflected by the subject in the third orientation, so that different areas outside the body 10 can be covered. Compared with an existing approach that must rotate through 360 degrees to obtain depth information, the electronic device 100 of this embodiment can obtain all-around depth information at one time without rotating, which is simple to carry out and fast in response.
The light emitters 22 of the plurality of time-of-flight assemblies 20, e.g., the two time-of-flight assemblies 20, emit laser light simultaneously, and the corresponding light receivers 24 of the plurality of time-of-flight assemblies 20 are exposed simultaneously to acquire the panoramic depth image. Specifically, the optical transmitter 222a, the optical transmitter 224a, the optical transmitter 222b, and the optical transmitter 224b emit laser light simultaneously, and the optical receiver 24a and the optical receiver 24b are exposed simultaneously. Because the plurality of light emitters 22 emit laser light simultaneously and the plurality of light receivers 24 are exposed simultaneously, the initial depth images obtained from the laser pulses received by the plurality of light receivers 24 have the same timeliness: they reflect the scene in every orientation outside the body 10 at the same moment, that is, they together form a panoramic depth image of the same moment.
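For illustration only (this sketch and its names are not part of the patent disclosure), the basic pulsed time-of-flight relation that such components rely on converts the measured round-trip delay of a laser pulse into depth:

```python
# Minimal sketch of pulsed time-of-flight ranging: depth = c * delay / 2,
# because the pulse travels to the subject and back.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def depth_from_round_trip(delay_s: float) -> float:
    """Convert a measured pulse round-trip delay (seconds) into depth (meters)."""
    return SPEED_OF_LIGHT * delay_s / 2.0

# Example: a 20 ns round trip corresponds to roughly 3 m.
print(depth_from_round_trip(20e-9))  # ~2.998 m
```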
The angle of view of each optical transmitter 22 is any value from 80 degrees to 120 degrees, and the angle of view of each optical receiver 24 is any value from 180 degrees to 200 degrees.
In one embodiment, the field angle of each phototransmitter 22 is any value from 80 degrees to 90 degrees; for example, the field angles of phototransmitter 222a, phototransmitter 224a, phototransmitter 222b, and phototransmitter 224b are all 80 degrees, and the field angles of photoreceiver 24a and photoreceiver 24b are both 180 degrees. When the field angle of the light emitter 22 is small, the light emitter 22 is relatively simple to manufacture, its manufacturing cost is low, and the uniformity of the emitted laser light can be improved. When the field angle of the optical receiver 24 is small, the lens distortion is small and the quality of the obtained initial depth image is good, so the quality of the resulting panoramic depth image is also good and more accurate depth information can be obtained.
In one embodiment, the sum of the field angles of phototransmitter 222a, phototransmitter 224a, phototransmitter 222b, and phototransmitter 224b equals 360 degrees, and the sum of the field angles of photoreceiver 24a and photoreceiver 24b equals 360 degrees. Specifically, the field angles of the phototransmitter 222a, the phototransmitter 224a, the phototransmitter 222b and the phototransmitter 224b may all be 90 degrees, the field angles of the photoreceiver 24a and the photoreceiver 24b may all be 180 degrees, and the field angles of the four phototransmitters 22 and the photoreceivers 24 do not overlap each other, so as to achieve the acquisition of a 360-degree or approximately 360-degree panoramic depth image. Alternatively, the field angles of the phototransmitter 222a and the phototransmitter 224a can be both 80 degrees, the field angles of the phototransmitter 222b and the phototransmitter 224b can be both 100 degrees, the field angles of the photoreceiver 24a and the photoreceiver 24b can be both 180 degrees, and the like, and the acquisition of the 360-degree or approximately 360-degree panoramic depth image can be realized by the angular complementation of the four phototransmitters 22 and the angular complementation of the two photoreceivers 24.
In one embodiment, the sum of the field angles of the phototransmitters 222a, 224a, 222b and 224b is greater than 360 degrees, the sum of the field angles of the photoreceivers 24a and 24b is greater than 360 degrees, the field angles of at least two of the four phototransmitters 22 overlap each other, and the field angles of the two photoreceivers 24 overlap each other. Specifically, the field angles of the light emitters 222a, 224a, 222b and 224b may all be 100 degrees, so that the field angles of adjacent light emitters 22 among the four overlap each other, and the field angles of the optical receivers 24a and 24b may both be 200 degrees, so that the field angles of the two optical receivers 24 overlap each other. When the panoramic depth image is formed, the overlapping edge portions of the two initial depth images can be identified, and the two initial depth images are then stitched into a 360-degree panoramic depth image. Since the field angles of the four phototransmitters 22 overlap each other and the field angles of the two photoreceivers 24 overlap each other, the acquired panoramic depth image is ensured to cover 360 degrees of depth information outside the body 10.
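As an informal check of the field-angle arithmetic described above (the helper name is an assumption for illustration, not from the patent), the receivers' field angles only need to sum to at least 360 degrees, with any excess appearing as overlapping edges to be stitched:

```python
# Sketch: verify that the combined receiver field angles cover a full panorama.
def covers_panorama(receiver_fovs_deg, required_deg=360):
    """True when the receivers' field angles sum to at least the required coverage."""
    return sum(receiver_fovs_deg) >= required_deg

print(covers_panorama([180, 180]))  # exactly 360 degrees, no overlap
print(covers_panorama([200, 200]))  # 400 degrees, overlapping edges to be stitched
```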
Of course, the specific values of the field angles of each of the phototransmitters 22 and each of the photoreceivers 24 are not limited to the above examples, and those skilled in the art can set the field angle of the phototransmitter 22 to any value between 80 degrees and 120 degrees and the field angle of the photoreceiver 24 to any value between 180 degrees and 200 degrees as required, for example: the field angle of the optical transmitter 22 is 80 degrees, 82 degrees, 84 degrees, 86 degrees, 90 degrees, 92 degrees, 94 degrees, 96 degrees, 98 degrees, 104 degrees, 120 degrees or any value therebetween, and the field angle of the optical receiver 24 is 180 degrees, 181 degrees, 182 degrees, 187 degrees, 188 degrees, 193.2 degrees, 195 degrees, 200 degrees or any value therebetween, which is not limited herein.
Referring to fig. 3, each light emitter 22 includes a light source 222 and a diffuser 224. The light source 222 is used for emitting laser light (e.g., infrared laser light, in which case the light receiver 24 is an infrared camera), and the diffuser 224 is used for diffusing the laser light emitted by the light source 222.
In general, the laser pulses emitted by adjacent light emitters 22 of two adjacent time-of-flight components 20 are likely to interfere with each other, for example when the field angles of those light emitters 22 overlap. Therefore, to improve the accuracy of the acquired depth information, the wavelengths of the laser pulses emitted by adjacent light emitters 22 of two adjacent time-of-flight components 20 may be made different, so that the pulses can be distinguished when calculating the initial depth images.
Specifically, assume the wavelength of the laser pulses emitted by the light emitter 222a in the first orientation is λ1, the wavelength of the laser pulses emitted by the light emitter 224a in the first orientation is λ2, the wavelength of the laser pulses emitted by the light emitter 222b in the third orientation is λ3, and the wavelength of the laser pulses emitted by the light emitter 224b in the third orientation is λ4; then it is only necessary that λ1 ≠ λ3 and λ2 ≠ λ4. Here λ1 and λ2 may be equal or unequal: since the light emitter 222a and the light emitter 224a are located in the same orientation and belong to the same time-of-flight component 20a, having λ1 equal to λ2 does not greatly affect the acquisition of depth information even where their fields overlap. For the same reason, λ3 and λ4 may be equal or unequal; λ1 and λ4 may be equal or unequal; and λ2 and λ3 may be equal or unequal. Preferably, the wavelengths of the laser pulses emitted by all of the light emitters 22 are different, which further improves the accuracy of the acquired depth information; that is, when λ1 ≠ λ2 ≠ λ3 ≠ λ4, the laser pulses emitted by the plurality of light emitters 22 do not interfere with each other at all, and the calculation of the initial depth images is easiest. In addition, each optical receiver 24 is configured to receive only the laser pulses of the wavelengths emitted by its corresponding optical transmitters 22. For example, the optical receiver 24a receives the laser pulses of the wavelengths emitted by the optical transmitters 222a and 224a, but cannot receive the laser pulses of the wavelengths emitted by the optical transmitters 222b and 224b. Likewise, the optical receiver 24b receives only the laser pulses of the wavelengths emitted by the optical transmitters 222b and 224b.
Taking the example that the laser pulses emitted by the light emitters 22 are infrared light, whose wavelength ranges from 770 nm to 1 mm, λ1 may be any value between 770 nm and 1000 nm, λ2 any value between 1000 nm and 1200 nm, λ3 any value between 1200 nm and 1400 nm, and λ4 any value between 1400 nm and 1600 nm. The optical receiver 24a is configured to receive the laser pulses with wavelengths of 770 nm to 1000 nm emitted by the optical transmitter 222a and the laser pulses with wavelengths of 1000 nm to 1200 nm emitted by the optical transmitter 224a, and the optical receiver 24b is configured to receive the laser pulses with wavelengths of 1200 nm to 1400 nm emitted by the optical transmitter 222b and the laser pulses with wavelengths of 1400 nm to 1600 nm emitted by the optical transmitter 224b.
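A minimal sketch of the wavelength-separation idea above, using the example band assignments given in the text; the data structures and function names are illustrative assumptions, not part of the patent:

```python
# Sketch: each receiver accepts only pulses whose wavelength falls inside the
# bands of its own two emitters, so pulses from the other component are rejected.
EMITTER_BANDS_NM = {
    "222a": (770, 1000),
    "224a": (1000, 1200),
    "222b": (1200, 1400),
    "224b": (1400, 1600),
}
RECEIVER_EMITTERS = {"24a": ("222a", "224a"), "24b": ("222b", "224b")}

def receiver_accepts(receiver: str, wavelength_nm: float) -> bool:
    """A receiver accepts a pulse only if it lies in one of its emitters' bands."""
    return any(
        low <= wavelength_nm < high
        for low, high in (EMITTER_BANDS_NM[e] for e in RECEIVER_EMITTERS[receiver])
    )

print(receiver_accepts("24a", 950))   # True: inside emitter 222a's band
print(receiver_accepts("24a", 1500))  # False: belongs to emitter 224b, filtered out
```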
It should be noted that, in addition to making the wavelengths of the laser pulses emitted by the light emitters 22 different, those skilled in the art may adopt other ways to avoid interference between different time-of-flight components 20 operating simultaneously, which is not limited herein. Alternatively, the interference may simply be ignored when its degree is small and the initial depth images calculated directly, or the influence of the interference may be filtered out by suitable algorithm processing when the initial depth images are calculated.
Referring to fig. 1 and 2, a camera assembly 30 is disposed on the body 10. The number of camera assemblies 30 may be multiple, one time-of-flight assembly 20 for each camera assembly 30. For example, when the number of time-of-flight components 20 is two, the number of camera assemblies 30 is also two, and the two camera assemblies 30 are respectively disposed in the first orientation and the third orientation.
The plurality of camera assemblies 30 are each connected to the application processor 50. Each camera assembly 30 is used to capture a scene image of the subject and output it to the application processor 50. In the present embodiment, the two camera assemblies 30 are respectively used for capturing a scene image of the subject in the first orientation and a scene image of the subject in the third orientation and outputting them to the application processor 50. It will be appreciated that the field angle of each camera assembly 30 is the same as, or approximately the same as, that of the optical receiver 24 of the corresponding time-of-flight assembly 20, so that each scene image better matches the corresponding initial depth image.
The camera assembly 30 may be a visible light camera 32 or an infrared light camera 34. When camera assembly 30 is a visible light camera 32, the scene image is a visible light image; when camera assembly 30 is an infrared camera 34, the scene image is an infrared light image.
Referring to FIG. 2, the microprocessor 40 may be a processing chip. The number of microprocessors 40 may be plural, with one microprocessor 40 for each time-of-flight assembly 20. For example, in the present embodiment, the number of time-of-flight assemblies 20 is two, and the number of microprocessors 40 is also two. Each microprocessor 40 is connected to both the optical transmitters 22 and the optical receiver 24 of the corresponding time-of-flight assembly 20. Each microprocessor 40 can drive the corresponding light emitters 22 to emit laser light through a driving circuit, and under the control of the two microprocessors 40 the four light emitters 22 can emit laser light simultaneously. Each microprocessor 40 is also used to provide the corresponding light receiver 24 with clock information for receiving the laser pulses so as to expose the light receiver 24, and under the control of the two microprocessors 40 the two light receivers 24 can be exposed simultaneously. Each microprocessor 40 is further configured to derive an initial depth image from the laser pulses emitted by the light emitters 22 and received by the light receiver 24 of the corresponding time-of-flight assembly 20. For example, the two microprocessors 40 respectively obtain the initial depth image P1 from the laser pulses emitted by the light emitters of the time-of-flight assembly 20a and received by the light receiver 24a, and the initial depth image P2 from the laser pulses emitted by the light emitters of the time-of-flight assembly 20b and received by the light receiver 24b (as shown in the upper part of fig. 4). Each microprocessor 40 may also perform algorithm processing such as tiling, distortion correction, and self-calibration on the initial depth image to improve its quality.
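The synchronization described above can be pictured as two controllers waiting on a shared trigger. The following self-contained sketch only simulates the timing; the function and names stand in for the driver circuit and exposure hardware and are assumptions, not the patent's implementation:

```python
# Sketch: both "microprocessors" wait for the same trigger time, then fire and
# expose, so the two exposure timestamps coincide (to within scheduling jitter).
import threading
import time

def capture(component_name, trigger_time, results):
    """Wait for the shared trigger, then record when the (simulated) exposure starts."""
    time.sleep(max(0.0, trigger_time - time.monotonic()))
    # In the real device this is where the driver circuit pulses the lasers and
    # the receiver integrates the reflected pulses into an initial depth image.
    results[component_name] = time.monotonic()

results = {}
trigger = time.monotonic() + 0.05  # common trigger, 50 ms from now
threads = [threading.Thread(target=capture, args=(name, trigger, results))
           for name in ("time_of_flight_20a", "time_of_flight_20b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # both exposure timestamps should be nearly identical
```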
It is understood that the number of microprocessors 40 may alternatively be one, in which case the single microprocessor 40 obtains the initial depth images from the laser pulses emitted by the light emitters 22 and received by the light receivers 24 of the time-of-flight assemblies 20 in sequence. Two microprocessors 40 provide faster processing and lower latency than one.
Both microprocessors 40 are connected to the application processor 50 to transmit the initial depth images to the application processor 50. In one embodiment, each microprocessor 40 may be connected to the application processor 50 through a Mobile Industry Processor Interface (MIPI); specifically, the microprocessor 40 is connected to a Trusted Execution Environment (TEE) of the application processor 50 through the Mobile Industry Processor Interface, so that the data (the initial depth image) in the microprocessor 40 is transmitted directly into the TEE, improving the security of information in the electronic device 100. The code and the memory area in the trusted execution environment are both controlled by an access control unit and cannot be accessed by programs in the untrusted execution environment (REE); both the trusted execution environment and the untrusted execution environment may be formed in the application processor 50.
The application processor 50 may function as a system of the electronic device 100. The application processor 50 may reset the microprocessor 40, wake the microprocessor 40, debug the microprocessor 40, and so on. The application processor 50 may also be connected to a plurality of electronic components of the electronic device 100 and control the plurality of electronic components to operate according to a predetermined mode, for example, the application processor 50 is connected to the visible light camera 32 and the infrared light camera 34 to control the visible light camera 32 and the infrared light camera 34 to capture a visible light image and an infrared light image and process the visible light image and the infrared light image; when the electronic apparatus 100 includes a display screen, the application processor 50 may control the display screen to display a predetermined screen; the application processor 50 may also control an antenna of the electronic device 100 to transmit or receive predetermined data or the like.
Referring to fig. 4, in one embodiment, the application processor 50 is configured to combine two initial depth images obtained by the two microprocessors 40 into one panoramic depth image according to the field angle of the optical receiver 24.
Specifically, referring to fig. 1, a rectangular coordinate system XOY is established with the center of the body 10 as the origin O, the transverse axis as the X axis, and the longitudinal axis as the Y axis. In the rectangular coordinate system XOY, the field of view of the light receiver 24a lies between 190 degrees and 350 degrees (measured in the clockwise direction, the same applies below), the field of view of the light emitter 222a lies between 190 degrees and 90 degrees, the field of view of the light emitter 224a lies between 90 degrees and 350 degrees, the field of view of the light receiver 24b lies between 10 degrees and 170 degrees, the field of view of the light emitter 222b lies between 270 degrees and 170 degrees, and the field of view of the light emitter 224b lies between 10 degrees and 270 degrees. The application processor 50 stitches the initial depth image P1 and the initial depth image P2 into one frame of 360-degree panoramic depth image P12 according to these field angles.
Each microprocessor 40 processes the laser pulses emitted by the light emitters 22 and received by the light receiver 24 of the corresponding time-of-flight component 20 to obtain an initial depth image in which the depth information of each pixel is the distance between the subject at the corresponding position and the light receiver 24 in that orientation. That is, the depth information of each pixel in the initial depth image P1 is the distance between the subject in the first orientation and the light receiver 24a, and the depth information of each pixel in the initial depth image P2 is the distance between the subject in the third orientation and the light receiver 24b. In the process of stitching the plurality of initial depth images from the plurality of orientations into one frame of 360-degree panoramic depth image, the depth information of each pixel in each initial depth image is first converted into unified depth information, where the unified depth information represents the distance between each photographed object in each orientation and a common reference position. After the depth information is converted into the unified depth information, the application processor 50 can conveniently stitch the initial depth images according to the unified depth information.
Specifically, a reference coordinate system is selected; the reference coordinate system may be the image coordinate system of the light receiver 24 in a certain orientation, or another coordinate system may be chosen as the reference coordinate system. Taking FIG. 5 as an example, the coordinate system x_o-y_o-z_o is taken as the reference coordinate system. The coordinate system x_a-y_a-z_a shown in fig. 5 is the image coordinate system of the light receiver 24a, and the coordinate system x_b-y_b-z_b is the image coordinate system of the light receiver 24b. The application processor 50 converts the depth information of each pixel in the initial depth image P1 into unified depth information according to the rotation matrix and the translation matrix between the coordinate system x_a-y_a-z_a and the reference coordinate system x_o-y_o-z_o, and converts the depth information of each pixel in the initial depth image P2 into unified depth information according to the rotation matrix and the translation matrix between the coordinate system x_b-y_b-z_b and the reference coordinate system x_o-y_o-z_o.
After the depth information conversion is completed, the plurality of initial depth images are located in the unified reference coordinate system, and each pixel point of each initial depth image corresponds to one coordinate (x_o, y_o, z_o); the stitching of the initial depth images can then be done by coordinate matching. For example, a certain pixel point P_a in the initial depth image P1 has the coordinates (x_o1, y_o1, z_o1), and a certain pixel point P_b in the initial depth image P2 also has the coordinates (x_o1, y_o1, z_o1). Since P_a and P_b have the same coordinate values in the current reference coordinate system, the pixel point P_a and the pixel point P_b are the same point, and when the initial depth image P1 and the initial depth image P2 are stitched, the pixel point P_a needs to coincide with the pixel point P_b. Thus, the application processor 50 can stitch the plurality of initial depth images through the matching relationship of the coordinates and obtain a 360-degree panoramic depth image.
It should be noted that stitching the initial depth images based on the matching relationship of the coordinates requires the resolution of the initial depth images to be greater than a preset resolution. It can be appreciated that if the resolution of the initial depth images is low, the precision of the coordinates (x_o, y_o, z_o) will also be relatively low; in this case, when matching directly by coordinates, the point P_a and the point P_b may not actually coincide but differ by an offset whose value exceeds the error limit. If the resolution of the images is high, the precision of the coordinates (x_o, y_o, z_o) will be relatively high; in this case, when matching directly by coordinates, even if the point P_a and the point P_b do not actually coincide and differ by an offset, the value of the offset is smaller than the error limit, that is, within the allowable error range, and the stitching of the initial depth images is not greatly affected.
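A minimal sketch of this stitching-by-coordinate-matching step, assuming a known rotation matrix and translation vector between each receiver's image coordinate system and the reference coordinate system; all names and the example pose are illustrative assumptions, not values from the patent:

```python
# Sketch: transform each receiver's points into the reference frame x_o-y_o-z_o
# (p_o = R @ p + t), then merge points that land on the same reference coordinates.
import numpy as np

def to_reference_frame(points_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Transform Nx3 camera-frame points into the reference frame."""
    return points_cam @ R.T + t

def merge_point_sets(points_a: np.ndarray, points_b: np.ndarray, tol: float = 1e-3) -> np.ndarray:
    """Concatenate two point sets, dropping points of B that coincide with A within tol."""
    keep = [p for p in points_b
            if not np.any(np.linalg.norm(points_a - p, axis=1) < tol)]
    return np.vstack([points_a, np.array(keep)]) if keep else points_a

# Example with an assumed 180-degree relative pose between the two receivers.
R_ab = np.diag([-1.0, 1.0, -1.0])     # receiver 24b faces the opposite direction
t_ab = np.zeros(3)
pts_a = np.array([[0.0, 0.0, 1.0]])   # a point 1 m in front of receiver 24a
pts_b = np.array([[0.0, 0.0, 1.0]])   # a point 1 m in front of receiver 24b
pts_a_ref = to_reference_frame(pts_a, np.eye(3), np.zeros(3))
pts_b_ref = to_reference_frame(pts_b, R_ab, t_ab)
print(merge_point_sets(pts_a_ref, pts_b_ref))  # two distinct points on opposite sides
```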
It is to be understood that the following embodiments may adopt the above-mentioned manner to splice or synthesize two or more initial depth images, and are not described one by one.
The application processor 50 may also synthesize the two initial depth images and the corresponding two visible light images into a three-dimensional scene image for display and viewing by a user. For example, the two visible light images are a visible light image V1 and a visible light image V2, respectively. The application processor 50 may synthesize the initial depth image P1 with the visible light image V1 and the initial depth image P2 with the visible light image V2, and then stitch the two synthesized images to obtain one frame of 360-degree three-dimensional scene image. Alternatively, the application processor 50 may first stitch the initial depth image P1 and the initial depth image P2 into one frame of 360-degree panoramic depth image and stitch the visible light image V1 and the visible light image V2 into one frame of 360-degree panoramic visible light image, and then synthesize the panoramic depth image and the panoramic visible light image into a 360-degree three-dimensional scene image.
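One simple way to picture this depth-plus-visible-light synthesis is to back-project each depth pixel into a colored point cloud. The sketch below assumes a pinhole camera model with placeholder intrinsics; none of these values or names come from the patent:

```python
# Sketch: pair each depth pixel with its color by back-projecting through an
# assumed pinhole model, giving one ingredient of a three-dimensional scene image.
import numpy as np

def depth_to_colored_points(depth_m: np.ndarray, rgb: np.ndarray, fx, fy, cx, cy):
    """Back-project each depth pixel and attach the color of the same pixel."""
    h, w = depth_m.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0          # drop pixels with no depth return
    return points[valid], colors[valid]

# Tiny synthetic example: a 2x2 depth map paired with a 2x2 RGB image.
depth = np.array([[1.0, 1.0], [0.0, 2.0]])
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
pts, cols = depth_to_colored_points(depth, rgb, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(pts.shape, cols.shape)  # (3, 3) (3, 3): three pixels had valid depth
```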
Referring to FIG. 6, in one embodiment, application processor 50 is configured to identify a subject based on two initial depth images acquired by two microprocessors 40 and two scene images captured by two camera assemblies 30.
Specifically, when the scene image is an infrared light image, the two infrared light images may be an infrared light image I1 and an infrared light image I2, respectively. The application processor 50 identifies a photographic subject in a first orientation from the initial depth image P1 and the infrared light image I1, and a photographic subject in a third orientation from the initial depth image P2 and the infrared light image I2, respectively. When the scene image is a visible light image, the two visible light images are a visible light image V1 and a visible light image V2, respectively. The application processor 50 identifies a photographic subject in a first orientation from the initial depth image P1 and the visible light image V1, and a photographic subject in a third orientation from the initial depth image P2 and the visible light image V2, respectively.
When the identification of the photographed target is face recognition, the application processor 50 achieves higher recognition accuracy by using the infrared light image as the scene image. The process by which the application processor 50 performs face recognition from the initial depth image and the infrared light image may be as follows:
First, face detection is carried out on the infrared light image to determine a target face region. Because the infrared light image contains the detail information of the scene, face detection can be performed on it to determine whether it contains a human face. If the infrared light image contains a human face, the target face region where the face is located in the infrared light image is extracted.
Then, living body detection is performed on the target face region according to the initial depth image. Because each initial depth image corresponds to an infrared light image and contains the depth information of that infrared light image, the depth information corresponding to the target face region can be acquired from the initial depth image. Further, since a living face is stereoscopic while a face displayed on, for example, a picture or a screen is planar, it can be determined from the acquired depth information whether the target face region is stereoscopic or planar, thereby performing living body detection on the target face region.
If the living body detection is successful, target face attribute parameters corresponding to the target face region are acquired, and face matching processing is performed on the target face region in the infrared light image according to the target face attribute parameters to obtain a face matching result. The target face attribute parameters are parameters that can characterize the attributes of the target face, and the target face can be identified and matched according to them. The target face attribute parameters include, but are not limited to, face deflection angle, face brightness parameters, facial feature parameters, skin quality parameters, geometric feature parameters, and the like. The electronic apparatus 100 may store face attribute parameters for matching in advance. After the target face attribute parameters are acquired, they are compared with the pre-stored face attribute parameters; if they match, the face recognition passes.
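The three steps above (face detection, liveness from depth, attribute matching) can be outlined as follows. The detector, attribute extractor, thresholds, and region format are assumptions standing in for real implementations, not the patent's algorithm:

```python
# Sketch of the three-stage check: detect a face region in the infrared image,
# reject flat (photo/screen) faces using the depth relief, then match attributes.
import numpy as np

def is_live_face(face_depth_patch: np.ndarray, min_relief_m: float = 0.005) -> bool:
    """A printed photo or a screen is nearly planar; a real face shows depth relief."""
    valid = face_depth_patch[face_depth_patch > 0]
    return valid.size > 0 and (valid.max() - valid.min()) > min_relief_m

def recognize(infrared_image, depth_image, detect_face, extract_attributes,
              stored_attributes, match_threshold: float = 0.8) -> bool:
    """True only if a face is found, passes liveness, and matches the stored attributes."""
    region = detect_face(infrared_image)              # step 1: target face region
    if region is None:
        return False
    y0, y1, x0, x1 = region
    if not is_live_face(depth_image[y0:y1, x0:x1]):   # step 2: liveness from depth
        return False
    score = float(np.dot(extract_attributes(infrared_image, region), stored_attributes))
    return score >= match_threshold                   # step 3: attribute matching
```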
It should be noted that the specific process of the application processor 50 performing face recognition according to the initial depth image and the infrared light image is not limited to this, for example, the application processor 50 may also assist in detecting a face contour according to the initial depth image to improve face recognition accuracy, and the like. The process of the application processor 50 performing face recognition based on the initial depth image and the visible light image is similar to the process of the application processor 50 performing face recognition based on the initial depth image and the infrared light image, and will not be further described herein.
Referring to fig. 6 and 7, when identifying the photographed target from the two initial depth images and the two scene images fails, the application processor 50 is further configured to synthesize the two initial depth images acquired by the two microprocessors 40 into one frame of merged depth image according to the field angle of the optical receiver 24, synthesize the two scene images captured by the two camera assemblies 30 into one frame of merged scene image, and identify the photographed target from the merged depth image and the merged scene image.
Specifically, in the embodiment shown in fig. 6 and 7, since the field angle of the light receiver 24 of each time-of-flight component 20 is limited, there may be a case where half of a human face is located in the initial depth image P1 and the other half is located in the initial depth image P2. The application processor 50 therefore synthesizes the initial depth image P1 and the initial depth image P2 into one frame of merged depth image P12, correspondingly synthesizes the infrared light image I1 and the infrared light image I2 (or the visible light image V1 and the visible light image V2) into one frame of merged scene image I12 (or V12), and then re-identifies the photographed target from the merged depth image P12 and the merged scene image I12 (or V12).
Referring to fig. 8 and 9, in an embodiment, the application processor 50 is configured to determine a distance variation between the subject and the electronic device 100 according to a plurality of initial depth images.
Specifically, each light emitter 22 may emit laser pulses multiple times, and correspondingly each light receiver 24 may be exposed multiple times. For example, at a first time, the light emitters of the time-of-flight component 20a and the light emitters of the time-of-flight component 20b emit laser pulses, the light receiver 24a and the light receiver 24b receive the reflected laser pulses, and the two microprocessors 40 correspondingly obtain the initial depth image P11 and the initial depth image P21; at a second time, the light emitters of the time-of-flight component 20a and the light emitters of the time-of-flight component 20b emit laser pulses again, the light receiver 24a and the light receiver 24b receive the reflected laser pulses, and the two microprocessors 40 correspondingly obtain the initial depth image P12 and the initial depth image P22. The application processor 50 then judges the distance change between the subject in the first orientation and the electronic device 100 from the initial depth image P11 and the initial depth image P12, and judges the distance change between the subject in the third orientation and the electronic device 100 from the initial depth image P21 and the initial depth image P22.
It is understood that, since the depth information of the subject is included in the initial depth image, the application processor 50 may determine a distance change between the subject corresponding to the orientation and the electronic apparatus 100 from a depth information change at a plurality of consecutive times.
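A minimal sketch of judging the distance change from the depth information of two consecutive initial depth images; the use of a median over a region and the synthetic values are illustrative assumptions, not the patent's method:

```python
# Sketch: compare a robust depth statistic of the same region at two times;
# a negative difference means the subject has moved closer to the device.
import numpy as np

def distance_change(depth_t1: np.ndarray, depth_t2: np.ndarray, region=None) -> float:
    """Positive result: the subject moved away; negative: it moved closer."""
    if region is not None:
        y0, y1, x0, x1 = region
        depth_t1, depth_t2 = depth_t1[y0:y1, x0:x1], depth_t2[y0:y1, x0:x1]
    d1 = np.median(depth_t1[depth_t1 > 0])
    d2 = np.median(depth_t2[depth_t2 > 0])
    return float(d2 - d1)

d11 = np.full((4, 4), 2.0)   # stands in for initial depth image P11
d12 = np.full((4, 4), 1.6)   # stands in for initial depth image P12
print(distance_change(d11, d12))  # ~ -0.4: the subject in the first orientation is closer
```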
Referring to fig. 10, when judging the distance change from the plurality of initial depth images fails, the application processor 50 is further configured to synthesize the two initial depth images acquired by the two microprocessors 40 into one frame of merged depth image according to the field angle of the optical receiver 24; the application processor 50 performs this synthesis step continuously to obtain multiple frames of consecutive merged depth images, and judges the distance change from the multiple frames of merged depth images.
Specifically, in the embodiment shown in fig. 10, since the field angle of the optical receiver 24 of each time-of-flight component 20 is limited, there may be a case where half of a human face is located in the initial depth image P11 and the other half is located in the initial depth image P21. The application processor 50 therefore synthesizes the initial depth image P11 and the initial depth image P21 at the first time into one frame of merged depth image P121, correspondingly synthesizes the initial depth image P12 and the initial depth image P22 at the second time into one frame of merged depth image P122, and then re-judges the distance change from the two merged depth images P121 and P122.
Referring to fig. 9, when it is determined from the plurality of initial depth images, or from the multiple frames of merged depth images, that the distance is decreasing, the application processor 50 increases the frame rate at which the initial depth images used for judging the distance change are collected from the plurality of initial depth images transmitted by at least one microprocessor 40.
It is understood that when the distance between the subject and the electronic apparatus 100 decreases, the electronic apparatus 100 cannot predict how the distance will change next. Therefore, the application processor 50 may increase the frame rate at which the initial depth images used for judging the distance change are collected from the plurality of initial depth images transmitted by at least one microprocessor 40, in order to follow the distance change more closely. Specifically, when it is determined that the distance corresponding to a certain orientation decreases, the application processor 50 may increase the frame rate at which the initial depth images used for judging the distance change in that orientation are collected from the plurality of initial depth images transmitted by the corresponding microprocessor 40.
For example, at a first instant, the two microprocessors 40 obtain an initial depth image P11, an initial depth image P21, respectively; at a second moment, the two microprocessors 40 respectively obtain an initial depth image P12 and an initial depth image P22; at the third moment, the two microprocessors 40 respectively obtain the initial depth image P13 and the initial depth image P23; at the fourth time, the two microprocessors 40 obtain the initial depth image P14 and the initial depth image P24, respectively.
Under normal circumstances, the application processor 50 selects the initial depth image P11 and the initial depth image P14 to judge the distance change between the subject in the first orientation and the electronic device 100, and selects the initial depth image P21 and the initial depth image P24 to judge the distance change between the subject in the third orientation and the electronic device 100. That is, for each orientation the application processor 50 acquires an initial depth image after skipping two frames, i.e., one frame is selected out of every three.
When it is determined from the initial depth image P11 and the initial depth image P14 that the distance corresponding to the first orientation decreases, the application processor 50 instead selects the initial depth image P11 and the initial depth image P13 to judge the distance change between the subject in the first orientation and the electronic device 100. The frame rate at which the application processor 50 acquires the initial depth image of the first orientation thus changes to one frame every other frame, that is, one frame is selected out of every two, while the frame rates of the other orientations remain unchanged, i.e., the application processor 50 still selects the initial depth image P21 and the initial depth image P24 to judge the distance change.
When it is determined from the initial depth image P11 and the initial depth image P14 that the distance corresponding to the first orientation decreases, and it is determined from the initial depth image P21 and the initial depth image P24 that the distance corresponding to the third orientation decreases, the application processor 50 selects the initial depth image P11 and the initial depth image P13 to judge the distance change between the subject in the first orientation and the electronic device 100, and selects the initial depth image P21 and the initial depth image P23 to judge the distance change between the subject in the third orientation and the electronic device 100. The frame rate at which the application processor 50 acquires the initial depth images of the first orientation and the third orientation thus changes to one frame every other frame, that is, one frame is selected out of every two.
Of course, the application processor 50 may also increase the frame rate of the initial depth images collected for judging the distance change from the plurality of initial depth images transmitted by every microprocessor 40 whenever it determines that the distance corresponding to any one orientation decreases. Namely: when it is determined from the initial depth image P11 and the initial depth image P14 that the distance between the subject in the first orientation and the electronic device 100 decreases, the application processor 50 selects the initial depth image P11 and the initial depth image P13 to judge the distance change between the subject in the first orientation and the electronic device 100, and also selects the initial depth image P21 and the initial depth image P23 to judge the distance change between the subject in the third orientation and the electronic device 100.
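The frame-selection policy described in the preceding paragraphs (one of every three frames normally, one of every two once the distance decreases) can be sketched as a simple stride over the frame indices; the function name is illustrative:

```python
# Sketch: indices of the initial depth images used for judging distance change.
def select_frames(num_frames: int, stride: int):
    """Pick every `stride`-th frame index out of num_frames captured frames."""
    return list(range(0, num_frames, stride))

print(select_frames(7, 3))  # [0, 3, 6] -> e.g. P11 and P14 under normal conditions
print(select_frames(7, 2))  # [0, 2, 4, 6] -> e.g. P11 and P13 once the distance decreases
```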
The application processor 50 may also combine the visible light image or the infrared light image when judging the distance change. Specifically, the application processor 50 first identifies the photographed target from the visible light image or the infrared light image, and then judges the distance change from the initial depth images at multiple times, so as to control the electronic apparatus 100 to perform different operations for different photographed targets and different distance changes. Alternatively, when the distance decreases, the microprocessor 40 may increase the frequency at which the corresponding light emitter 22 emits laser light and at which the corresponding light receiver 24 is exposed.
It should be noted that the electronic device 100 of the present embodiment may also be used as an external terminal: it may be fixedly or detachably mounted on a portable electronic device such as a mobile phone, a tablet computer, or a notebook computer, or fixedly mounted on a movable object such as a vehicle body (as shown in fig. 7 and 8), an unmanned aerial vehicle body, a robot body, or a ship body. In use, when the electronic device 100 synthesizes one frame of panoramic depth image from the plurality of initial depth images as described above, the panoramic depth image may be used for three-dimensional modeling, simultaneous localization and mapping (SLAM), and augmented reality display. When the electronic device 100 identifies the photographed target as described above, it may be applied to face recognition unlocking and payment on a portable electronic device, or to obstacle avoidance for a robot, a vehicle, an unmanned aerial vehicle, a ship, and the like. When the electronic apparatus 100 judges the change of the distance between the photographed target and the electronic apparatus 100 as described above, it can be applied to automatic travel, object tracking, and the like of robots, vehicles, unmanned aerial vehicles, ships, and the like.
Referring to fig. 2 and 11, the present application further provides a mobile platform 300. The mobile platform 300 includes a body 10 and a plurality of time-of-flight assemblies 20 disposed on the body 10. The plurality of time-of-flight assemblies 20 are respectively located at a plurality of different orientations of the body 10. Each time-of-flight assembly 20 includes two optical transmitters 22 and one optical receiver 24. The field angle of each optical transmitter 22 is any value from 80 degrees to 120 degrees, and the field angle of each optical receiver 24 is any value from 180 degrees to 200 degrees. The light emitters 22 are used for emitting laser pulses to the outside of the body 10, and the light receivers 24 are used for receiving the laser pulses emitted by the corresponding two light emitters 22 and reflected by the photographed target. The optical transmitters 22 of the multiple time-of-flight assemblies 20 emit laser light simultaneously and the optical receivers 24 of the multiple time-of-flight assemblies 20 are exposed simultaneously to acquire a panoramic depth image.
Specifically, the body 10 may be a vehicle body, an unmanned aerial vehicle fuselage, a robot body, or a ship body.
Referring to fig. 11, when the body 10 is a vehicle body, the number of time-of-flight assemblies 20 is two, and the two time-of-flight assemblies 20 are respectively installed on two sides of the vehicle body, for example the front end and the rear end, or the left side and the right side. The vehicle body can carry the two time-of-flight assemblies 20 as it moves on the road, constructing a 360-degree panoramic depth image along the travel route for use as a reference map and the like, or acquiring initial depth images in two different orientations to identify the photographed target and judge the distance change between the photographed target and the mobile platform 300, so as to control the vehicle body to accelerate, decelerate, stop, detour, and the like, thereby implementing unmanned obstacle avoidance. In this way, different operations can be performed for different photographed targets when the distance decreases, making the vehicle more intelligent.
Referring to fig. 12, when the main body 10 is an unmanned aerial vehicle body, the number of time-of-flight assemblies 20 is two, and the two time-of-flight assemblies 20 are respectively installed on two opposite sides of the unmanned aerial vehicle body, such as the front and rear sides or the left and right sides, or on two opposite sides of a gimbal carried on the unmanned aerial vehicle body. The unmanned aerial vehicle body can carry the plurality of time-of-flight assemblies 20 in flight for aerial photography, inspection, and the like; the unmanned aerial vehicle can return the obtained panoramic depth image to a ground control terminal, or can directly perform SLAM. The plurality of time-of-flight assemblies 20 enable the unmanned aerial vehicle to accelerate, decelerate, stop, avoid obstacles, and track objects.
Referring to fig. 13, when the main body 10 is a robot body, such as that of a sweeping robot, the number of time-of-flight assemblies 20 is two, and the two time-of-flight assemblies 20 are respectively installed on two opposite sides of the robot body. The robot body can carry the plurality of time-of-flight assemblies 20 while moving around a home, acquiring initial depth images in a plurality of different orientations to identify the photographed target and judge the distance change between the photographed target and the mobile platform 300, so as to control the movement of the robot body and enable the robot to remove garbage, avoid obstacles, and the like.
Referring to fig. 14, when the body 10 is a ship body, the number of time-of-flight assemblies 20 is two, and the two time-of-flight assemblies 20 are respectively installed on two opposite sides of the ship body. The ship body can carry the time-of-flight assemblies 20 as it moves, acquiring initial depth images in a plurality of different orientations so that the photographed target can be accurately identified in a severe environment (for example, in fog) and the distance change between the photographed target and the mobile platform 300 can be judged, thereby improving the safety of marine navigation.
The mobile platform 300 according to the embodiment of the present application is a platform capable of moving independently, and the plurality of time-of-flight components 20 are mounted on the body 10 of the mobile platform 300 to obtain a panoramic depth image. However, the electronic device 100 of the embodiment of the present application is generally not independently movable, and the electronic device 100 may be further mounted on a movable apparatus such as the mobile platform 300, thereby assisting the apparatus in acquiring the panoramic depth image.
It should be noted that the above explanations of the body 10, the time-of-flight assembly 20, the camera assembly 30, the microprocessor 40, and the application processor 50 of the electronic device 100 are also applicable to the mobile platform 300 according to the embodiment of the present application, and the descriptions thereof are not repeated here.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations of the above embodiments may be made by those of ordinary skill in the art within the scope of the present application, which is defined by the claims and their equivalents.

Claims (9)

1. An electronic device, characterized in that the electronic device comprises:
a body; and
a plurality of time-of-flight components disposed on the body, the plurality of time-of-flight components being respectively located at a plurality of different orientations of the body, each time-of-flight component comprising two light emitters and one light receiver, each of the light emitters having a field angle of any value from 80 degrees to 120 degrees and each of the light receivers having a field angle of any value from 180 degrees to 200 degrees, the light emitters being configured to emit laser pulses to the outside of the body, and the light receiver being configured to receive the laser pulses emitted by the corresponding two light emitters and reflected by a subject;
wherein the electronic device comprises two time-of-flight components and two microprocessors, each microprocessor corresponding to one time-of-flight component and being connected with the light emitters and the light receiver of the corresponding time-of-flight component, each microprocessor being configured to drive the connected light emitters through a driving circuit so that the light emitters of the two time-of-flight components emit laser pulses simultaneously, and each microprocessor being further configured to provide, to the connected light receiver, clock information for receiving the laser pulses so that the light receivers of the two time-of-flight components are exposed simultaneously to obtain a panoramic depth image;
wherein the electronic device further comprises an application processor, the two microprocessors being connected with the application processor, and each microprocessor being further configured to obtain a plurality of initial depth images from the laser pulses emitted a plurality of times by the light emitters of the corresponding time-of-flight component and the laser pulses received a plurality of times by the light receiver, and to transmit the plurality of initial depth images to the application processor; and
wherein the application processor is configured to determine a distance change between the subject and the electronic device according to the plurality of initial depth images, and, when it is determined that the distance change is a decrease in distance, to increase a frame rate at which initial depth images used for determining the distance change are acquired from the plurality of initial depth images transmitted by at least one of the microprocessors.
2. The electronic device of claim 1, wherein the light emitters of two adjacent time-of-flight components emit the laser pulses at different wavelengths.
3. The electronic device of claim 2, wherein the wavelengths of the laser pulses emitted by the light emitters are different from one another.
4. The electronic device of claim 1, wherein the application processor is further configured to combine the two initial depth images obtained by the two microprocessors into one frame of the panoramic depth image according to the field angles of the light receivers.
5. The electronic device of claim 1, further comprising two camera assemblies disposed on the body, each camera assembly corresponding to one of the time-of-flight components, both camera assemblies being connected to the application processor, and each camera assembly being configured to capture a scene image of the subject and output the scene image to the application processor;
wherein the application processor is further configured to identify the subject according to the two initial depth images acquired by the two microprocessors and the two scene images acquired by the two camera assemblies.
6. The electronic device according to claim 5, wherein the application processor is further configured to, when identification of the subject from the two initial depth images and the two scene images fails, combine the two initial depth images acquired by the two microprocessors into one frame of merged depth image according to the field angles of the light receivers, combine the two scene images acquired by the two camera assemblies into one frame of merged scene image, and identify the subject from the merged depth image and the merged scene image.
7. The electronic device according to claim 1, wherein the application processor is further configured to, when determination of the distance change from the plurality of initial depth images fails, combine the two initial depth images acquired by the two microprocessors into one frame of merged depth image according to the field angles of the light receivers, the application processor continuously performing the combining step to obtain multiple frames of merged depth images and determining the distance change from the multiple frames of merged depth images.
8. A mobile platform, comprising:
a body; and
a plurality of time-of-flight components disposed on the body, the plurality of time-of-flight components being respectively located at a plurality of different orientations of the body, each time-of-flight component comprising two light emitters and one light receiver, each of the light emitters having a field angle of any value from 80 degrees to 120 degrees and each of the light receivers having a field angle of any value from 180 degrees to 200 degrees, the light emitters being configured to emit laser pulses to the outside of the body, and the light receiver being configured to receive the laser pulses emitted by the corresponding two light emitters and reflected by a subject;
wherein the mobile platform comprises two time-of-flight components and two microprocessors, each microprocessor corresponding to one time-of-flight component and being connected with the light emitters and the light receiver of the corresponding time-of-flight component, each microprocessor being configured to drive the connected light emitters through a driving circuit so that the light emitters of the two time-of-flight components emit laser pulses simultaneously, and each microprocessor being further configured to provide, to the connected light receiver, clock information for receiving the laser pulses so that the light receivers of the two time-of-flight components are exposed simultaneously to obtain a panoramic depth image;
wherein the mobile platform further comprises an application processor, the two microprocessors being connected with the application processor, and each microprocessor being further configured to obtain a plurality of initial depth images from the laser pulses emitted a plurality of times by the light emitters of the corresponding time-of-flight component and the laser pulses received a plurality of times by the light receiver, and to transmit the plurality of initial depth images to the application processor; and
wherein the application processor is configured to determine a distance change between the subject and the mobile platform according to the plurality of initial depth images, and, when it is determined that the distance change is a decrease in distance, to increase a frame rate at which initial depth images used for determining the distance change are acquired from the plurality of initial depth images transmitted by at least one of the microprocessors.
9. The mobile platform of claim 8, wherein the body is a vehicle body, an unmanned aerial vehicle fuselage, a robot body, or a ship body.
CN201910008293.0A 2019-01-04 2019-01-04 Electronic equipment and mobile platform Active CN109803089B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910008293.0A CN109803089B (en) 2019-01-04 2019-01-04 Electronic equipment and mobile platform

Publications (2)

Publication Number Publication Date
CN109803089A (en) 2019-05-24
CN109803089B (en) 2021-05-18

Family

ID=66558483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910008293.0A Active CN109803089B (en) 2019-01-04 2019-01-04 Electronic equipment and mobile platform

Country Status (1)

Country Link
CN (1) CN109803089B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113126111B (en) * 2019-12-30 2024-02-09 Oppo广东移动通信有限公司 Time-of-flight module and electronic device
CN114095713A (en) * 2021-11-23 2022-02-25 京东方科技集团股份有限公司 Imaging module, processing method, system, device and medium thereof

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9653874B1 (en) * 2011-04-14 2017-05-16 William J. Asprey Trichel pulse energy devices
US10848731B2 (en) * 2012-02-24 2020-11-24 Matterport, Inc. Capturing and aligning panoramic image and depth data
US9525863B2 (en) * 2015-04-29 2016-12-20 Apple Inc. Time-of-flight depth mapping with flexible scan pattern
CN106371281A (en) * 2016-11-02 2017-02-01 辽宁中蓝电子科技有限公司 Multi-module 360-degree space scanning and positioning 3D camera based on structured light
CN108616703A (en) * 2018-04-23 2018-10-02 Oppo广东移动通信有限公司 Electronic device and its control method, computer equipment and readable storage medium storing program for executing

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108471487A (en) * 2017-02-23 2018-08-31 钰立微电子股份有限公司 Generate the image device and associated picture device of panoramic range image
CN107263480A (en) * 2017-07-21 2017-10-20 深圳市萨斯智能科技有限公司 A kind of robot manipulation's method and robot
CN107742296A (en) * 2017-09-11 2018-02-27 广东欧珀移动通信有限公司 Dynamic image generation method and electronic installation



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant