WO2019051688A1 - Method, apparatus and electronic device for detecting an optical module - Google Patents

Method, apparatus and electronic device for detecting an optical module

Info

Publication number
WO2019051688A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
optical module
collection area
determining
location
Prior art date
Application number
PCT/CN2017/101638
Other languages
English (en)
French (fr)
Inventor
彭泓予
李彦青
Original Assignee
深圳市汇顶科技股份有限公司 (Shenzhen Goodix Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市汇顶科技股份有限公司 (Shenzhen Goodix Technology Co., Ltd.)
Priority to PCT/CN2017/101638 priority Critical patent/WO2019051688A1/zh
Priority to CN201780001068.3A priority patent/CN107690656B/zh
Publication of WO2019051688A1 publication Critical patent/WO2019051688A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/13Sensors therefor
    • G06V40/1324Sensors therefor by using geometrical optics, e.g. using prisms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1347Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1365Matching; Classification
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/24Arrangements for testing

Definitions

  • Embodiments of the present invention relate to the field of optical modules, and, more particularly, to a method, apparatus, and electronic device for detecting an optical module.
  • fingerprint recognition is standard on smartphones.
  • the traditional scheme implements the fingerprint recognition button as an independent physical button or a virtual button.
  • some smart phones integrate optical modules into the front home button of the mobile phone, and use the Home button as a fingerprint recognition button.
  • the optical module includes a fingerprint sensor.
  • a feasible technical solution is a fingerprint recognition technology in the display screen.
  • the fingerprint recognition function is completely integrated into the display screen, and the user can directly touch the designated area in the display screen of the smart terminal to implement fingerprint recognition.
  • an optical module is mounted on the back of the display.
  • in-display fingerprint recognition can not only meet the design requirements of current mobile phone manufacturers but also make the smartphone design more compact; applying in-display fingerprint recognition to intelligent terminals is therefore a future development trend.
  • a smartphone using in-display fingerprint recognition requires the application to obtain the installation information of the optical module by itself, for example the installation location and the direction of rotation.
  • a method, device and electronic device for detecting an optical module are provided, which can effectively obtain installation information of the optical module.
  • a method of detecting an optical module for use in an electronic device including a display screen and an optical module;
  • the method includes:
  • displaying a first image on the display screen, the first image comprising a plurality of image units, each of the plurality of image units being provided with a pattern, the pattern on each image unit being used to indicate the position, on the first image, of a reference point in that image unit; acquiring a second image by the optical module, the second image being an image of the first image acquired by the optical module in a collection area of the display screen; determining a pattern on a target image unit in the second image; determining, according to the pattern on the target image unit, a first position of a target reference point in the target image unit, the first position being the position of the target reference point on the first image; and determining, according to the first position, a second position, the second position being the position of the center point of the collection area on the display screen.
  • the pre-designed image (the display image) can be displayed on the display screen; the optical module then captures the image displayed on the display screen to obtain a captured image; analysis of the captured image determines the position of the center point of the collection area on the display screen, finally realizing the positioning operation of the optical module.
  • the solution uses the pattern set on each image unit to indicate the position of that unit's reference point on the first image, and then determines the position of the center point of the collection area on the display screen based on the reference point's position on the first image. Because the center point is located from the position of the image unit on the first image, the positioning accuracy is higher.
  • the determining the second location according to the first location includes:
  • the pattern on each image unit includes a horizontal line segment and/or a vertical line segment; each of the image units includes a left half area and a right half area, the line segments in the left half area being used to indicate the column to which the image unit belongs on the first image, and the line segments in the right half area being used to indicate the row to which the image unit belongs on the first image.
  • the pattern on each image unit includes a horizontal line segment and/or a vertical line segment; each of the image units includes an upper half area and a lower half area, wherein a horizontal line segment in the lower half area represents a first number, a vertical line segment in the lower half area represents a second number, and a horizontal line segment in the upper half area represents a third number.
  • for example, a horizontal line segment in the lower half area represents the number 1, a vertical line segment in the lower half area represents the number 4, and a horizontal line segment in the upper half area represents the number 16.
  • the first image is an array image, each of the plurality of image units is a square image unit, and the collection area is a square area; wherein the method further includes:
  • the determining a side length of the collection area includes:
  • displaying a third image, wherein the brightness value of each pixel in the third image is greater than a first brightness value; acquiring, by the optical module, a fourth image, the fourth image being an image of the third image acquired by the optical module in the collection area; binarizing the fourth image to obtain a binary image; and determining the side length of the collection area according to a horizontal gradient response value and a vertical gradient response value of the binary image.
  • determining the side length of the collection area according to the horizontal gradient response value and the vertical gradient response value of the binary image including:
  • the scaling is the ratio between the image of the acquisition area and the image acquired by the optical module within the acquisition area.
  • the method further includes:
  • Determining a first offset, the first offset being used to modify the second location; wherein determining the second location according to the third location comprises:
  • the second position is determined based on the first offset and the third position.
  • the technical solution of the embodiment of the invention can further improve the positioning accuracy.
  • the determining the first offset includes:
  • determining a second offset, the second offset being an offset vector of the target reference point in the second image relative to the center point of the second image; and determining the first offset according to a scaling ratio and the second offset, the scaling ratio being the ratio between the image of the collection area and the image acquired by the optical module in the collection area.
  • the method further includes:
  • before the determining the second location according to the first location, the method further includes:
  • the scaling ratio is a ratio between an image of the collection area and an image collected by the optical module in the collection area; and the obtaining a scaling ratio includes:
  • displaying a fifth image, the fifth image including k third line segments parallel to each other, at least two adjacent ones of the k third line segments being overlaid on the collection area, and k ≥ 2; acquiring, by the optical module, a sixth image, the sixth image being an image of the fifth image collected by the optical module in the collection area; and determining the scaling ratio according to a first distance and a second distance, the first distance being the vertical distance between two adjacent third line segments in the fifth image, and the second distance being the vertical distance between the two adjacent third line segments in the sixth image.
  • the method further includes:
  • obtaining a rotation angle, the rotation angle being the angle of the image collected by the optical module in the collection area relative to the image of the collection area; wherein the obtaining the rotation angle includes:
  • analyzing the sixth image by a Hough transform and obtaining a first angle; acquiring a second processed image, the second processed image being an image obtained by rotating the sixth image according to the first angle; and determining the rotation angle according to the second processed image and the fifth image.
  • the method further includes:
  • the side length of the acquisition area is 2.5 times the side length of each image unit. This scheme can effectively ensure that there is a target image unit in the effective area.
  • the first brightness value is 128.
  • the resolution of the image displayed on the display screen is the same as the resolution of the display screen.
  • the image displayed on the display screen covers the collection area.
  • an apparatus for detecting an optical module is provided, the apparatus comprising means for performing the method of the first aspect or any possible implementation of the first aspect.
  • an apparatus for detecting an optical module including:
  • a display module configured to display a first image on the display screen, the first image includes a plurality of image units, and each of the plurality of image units is provided with a pattern, each of the image units The upper pattern is used to indicate the position of the reference point in each of the image units on the first image.
  • processing module is configured to:
  • a fourth aspect provides an apparatus for detecting an optical module, including:
  • a computer readable medium for storing a computer program, the computer program comprising instructions for performing the method of the first aspect or any possible implementation of the first aspect.
  • an electronic device comprising the apparatus for detecting an optical module of the second aspect.
  • the display screen is an organic light-emitting diode (OLED) display screen, and the optical module performs its detection function by using at least a portion of the OLED pixel units of the OLED display screen as a light source.
  • FIG. 1 is an illustration of an electronic device of an embodiment of the present invention.
  • FIG. 2 is an exemplary flow chart of a method of detecting an optical module according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a fourth image of an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a fifth image of an embodiment of the present invention.
  • Figure 5 is a schematic illustration of a sixth image of an embodiment of the present invention.
  • Figure 6 is a schematic illustration of a first image of an embodiment of the present invention.
  • Figure 7 is a schematic illustration of a second image of an embodiment of the present invention.
  • Figure 8 is a schematic illustration of a seventh image of an embodiment of the present invention.
  • Figure 9 is a schematic illustration of an eighth image of an embodiment of the present invention.
  • FIG. 10 is another exemplary flowchart of a method of detecting an optical module according to an embodiment of the present invention.
  • FIG. 11 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 12 is another schematic block diagram of an electronic device according to an embodiment of the present invention.
  • embodiments of the present invention are applicable to any device and apparatus configured with an optical module.
  • for example, smart mobile phones and small personal portable devices such as personal digital assistants (PDAs), electronic books (e-books), and the like.
  • a smartphone is exemplified below.
  • FIG. 1 is a side cross-sectional view of an electronic device 100 in accordance with an embodiment of the present invention.
  • the electronic device 100 includes a display screen 110 and an optical module 120 mounted on the back side of the display screen 110.
  • the acquisition area 130 (ie, the collection area) is located on the display screen 110, and the optical module 120 performs image acquisition in the acquisition area 130; that is, the optical module 120 can capture images displayed on the display screen 110 within the acquisition area 130.
  • the optical module 120 can be an optical fingerprint module or other type of optical biometric detection module, which can also be used for fingerprint recognition or other biometric identification.
  • the optical module 120 may be disposed in a partial area below the display screen 110 (ie, an under-display fingerprint structure) or integrated into a local area inside the display screen 110 (ie, an in-display fingerprint structure), and can be used to acquire a fingerprint image of a finger touching the collection area 130.
  • the collection area 130 is a fingerprint detection effective area of the optical fingerprint module, which is located in at least part of the display area of the display screen 110 to implement in-display fingerprint detection.
  • the display screen 110 may be an organic light emitting diode (OLED) display screen, which uses a self-illuminating OLED pixel unit as a display unit.
  • the OLED pixel units located in the collection area 130 can simultaneously serve as a light source for the optical fingerprint module.
  • the fingerprint detection effective area of the optical fingerprint module may also cover the entire display area of the display screen 110, thereby implementing full-screen fingerprint detection.
  • the application may be required to obtain the installation information of the optical module 120 by itself.
  • a fingerprint prompt pattern needs to be displayed on the display screen 110 to instruct the user to input a fingerprint in a suitable area.
  • the position of the fingerprint prompt pattern is generally determined according to the assembly position of the optical module 120 and is generally fixed. Due to inaccurate assembly processes or design modifications, the assembly positions of different batches of optical modules 120 may not be completely consistent. In this case, if the fingerprint prompt pattern is still displayed at a fixed position on the display screen, the prompt pattern may be positioned inaccurately, the effective fingerprint area that the optical module 120 can collect may shrink, the efficiency of fingerprint recognition is reduced, and the user experience is affected.
  • a method for detecting an optical module is provided in an embodiment of the present invention.
  • a complete algorithm is provided: by analyzing the image displayed on the display screen, not only the position of the center point of the collection area on the display screen but also the physical size of the optical module, its angle of rotation, and the scaling of its output image can be obtained, realizing an adaptive positioning detection function for the under-screen optical module (sensor) of a mobile phone.
  • a method of detecting an optical module according to an embodiment of the present invention will be described below. It should be understood that the method of detecting an optical module according to an embodiment of the present invention can be applied to any electronic device including a display screen and an optical module.
  • FIG. 2 is an exemplary flow chart of a method 200 of detecting an optical module in accordance with an embodiment of the present invention.
  • the method 200 includes:
  • the first image includes a plurality of image units, each of the plurality of image units is provided with a pattern, and the pattern on each image unit is used to indicate the position, on the first image, of the reference point in that image unit.
  • the electronic device can display a pre-designed image (the display image) on the display screen; the optical module then captures the image displayed on the display screen to obtain a captured image; analysis of the captured image determines the position of the center point of the collection area on the display screen, finally realizing the positioning operation of the optical module.
  • the electronic device may first determine, by analyzing the second image, the position of the target reference point in the second image on the display image during the positioning operation on the optical module; and further, the electronic device may Based on the position of the target reference point on the display image, the position of the center point of the collection area on the display screen is determined.
  • the electronic device may first determine, according to the first position and mapping relationship information, a third position, the third position being the position of the target reference point on the display screen, and the mapping relationship information including the correspondence between the first position and the third position; and then determine the second position based on the third position.
  • the electronic device may determine the second location by using the third location; the electronic device may also directly determine the first location as the second location.
  • this is not specifically limited in the embodiments of the present invention. For example, when the first position is the coordinate position of the target reference point on the first image and the first image and the display screen share the same coordinate system, the electronic device may directly determine the first position as the second position.
  • the position, on the first image, of the target reference point in the target image unit in the second image is referred to as the first position.
  • the position of the target reference point on the display screen is referred to as a third position.
  • the position of the center of the collection area on the display screen is referred to as the second position.
  • when the electronic device positions the optical module, the image displayed on the display screen is referred to as the first image, and the image obtained after the optical module acquires the first image in the collection area of the display screen is referred to as the second image.
  • the image output by the optical module actually includes, but is not limited to, an image acquired by the optical module in the collection area.
  • for example, when a white image is displayed on the display screen, the image output by the optical module includes a black area and a white area, where the white area corresponds to the image of the collection area on the display screen.
  • the image in the collection area of the display screen and the corresponding image outputted by the optical module are referred to as images in the effective area (for example, the white area shown in FIG. 3).
  • the pattern on each image unit on the first image of the embodiment of the present invention may include a horizontal line segment and/or a vertical line segment.
  • each of the image units may include a left half area and a right half area, the line segments in the left half area being used to indicate the column to which the image unit belongs on the first image, and the line segments in the right half area being used to indicate the row to which the image unit belongs on the first image.
  • each of the image units may further include an upper half area and a lower half area, wherein a horizontal line segment in the lower half area represents the number 1, and a vertical line segment in the lower half area represents the number 4. A horizontal line segment in the upper half area represents the number 16.
  • each image unit of the first image may be divided into left and right halves, where the value encoded on the left represents the column number of the image unit and the value on the right represents its row number. Each half is further divided into an upper part and a lower part: each horizontal line in the lower part represents the value 1 and each vertical line represents the value 4; when the value of the lower part would reach 16, the lower part is cleared and the upper part gains one line, each upper line representing 16. The lower part thus carries into the upper part in units of 16. Thereby, the electronic device can determine the position, on the first image, of the reference point in the target image unit from the line segments on the target image unit.
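  • the decoding rule above can be sketched as follows (a minimal illustration assuming the counts of horizontal and vertical line segments in each part have already been extracted from the target image unit; the function name is ours, not the patent's):

```python
def decode_half(lower_horizontal, lower_vertical, upper_horizontal):
    """Decode one half of an image unit into its row or column number.

    Per the rule above: each horizontal line in the lower part is worth 1,
    each vertical line is worth 4, and each horizontal line in the upper
    part is worth 16; the lower part carries into the upper part at 16.
    """
    lower = lower_horizontal * 1 + lower_vertical * 4
    assert lower < 16, "a lower-part value of 16 should have carried upward"
    return upper_horizontal * 16 + lower


# two horizontal and one vertical line below, one line above: 16 + 2 + 4
print(decode_half(2, 1, 1))  # → 22
```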
  • for example, the first image is as shown in FIG. 6, and the second image is as shown in FIG. 7.
  • since the electronic device determines the first position by analyzing the pattern on the target image unit, when designing the first image it is necessary to ensure not only that the first image covers the collection area, but also that there is at least one complete image unit in the second image.
  • the electronic device may further generate the first image before displaying the first image on the display screen.
  • each of the plurality of image units is a square image unit, and the collection area is a square area; the electronic device can determine a side length of the collection area; according to the collection area The side length determines the side length of each of the image units, and the side length of the collection area is larger than the side length of each of the image units; and the first image is generated according to the side length of each of the image units.
  • the side length of the collection area is 2.5 times the side length of each of the image units.
  • the electronic device may binarize the image acquired by the optical module in the collection area by threshold segmentation, and then determine the side length of the collection area by projecting the luminance values in the vertical and horizontal directions.
  • the electronic device may display a third image on the display screen, where the brightness value of each pixel in the third image is greater than the first brightness value; and the fourth image is obtained by the optical module.
  • the first brightness value is 128.
  • for example, the third image is an all-white image, and the fourth image is as shown in FIG. 3.
  • the electronic device can set the gray value of each point on the fourth image to 0 or 255, that is, present the entire image with a distinct black-and-white effect: an appropriate threshold is chosen so that the 256-level grayscale image yields a binary image that still reflects the overall and local features of the image, after which binary image features can be extracted. This grayscale transformation is called binarization of the image.
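  • a minimal sketch of this binarization step (numpy-based and illustrative only; the threshold of 128 follows the first brightness value given above):

```python
import numpy as np

def binarize(gray, threshold=128):
    """Set every pixel to 0 or 255 by comparison with a threshold,
    giving the distinct black-and-white image described above."""
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

print(binarize(np.array([[10, 127, 128, 250]])).tolist())  # → [[0, 0, 255, 255]]
```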
  • the binary image feature may be a feature vector consisting of a string of 0s and 1s.
  • binary images occupy a very important position in digital image processing; in practical image processing in particular, many systems are built on binary image processing. Therefore, binarization is not only widely applicable but also beneficial for further processing of the image.
  • the set properties of a binary image are related only to the positions of the pixels whose value is 0 or 255; pixels no longer take multiple gray levels, which makes processing simple and reduces the amount of data to be processed and compressed.
  • to obtain an ideal binary image, a closed and connected boundary may be adopted in the embodiment of the present invention to define a non-overlapping object area. It can be found that if a particular object has uniform gray values inside and lies on a uniform background with a different gray level, the thresholding method can be used to obtain a comparatively good segmentation effect.
  • for example, the electronic device can display a white image (the third image) on the display screen, and the optical module outputs the fourth image shown in FIG. 3. Most of the image is highlighted, with brightness values of at least 128, and this highlighted area must be a rectangle; this rectangular area is the effective area of the fourth image output by the optical module.
  • the collection area of the display screen corresponds to the effective area of the output image of the optical module.
  • the electronic device can binarize the fourth image by threshold segmentation, project the luminance values in the vertical and horizontal directions to determine the side length of the binary image, and then determine the side length of the collection area according to the side length of the binary image.
  • the electronic device may determine the side length of the binary image according to the horizontal gradient response value and the vertical gradient response value of the binary image; and then determine the side length of the binary image as the side length of the collection region.
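  • the projection step can be sketched as follows (assuming the binary image from the previous step; a real implementation would combine this with the gradient-response values described above):

```python
import numpy as np

def effective_area_size(binary):
    """Estimate the height and width (in sensor pixels) of the bright
    rectangular effective area by projecting the binary image onto the
    vertical and horizontal axes and counting non-empty rows and columns."""
    height = int(np.count_nonzero(binary.any(axis=1)))
    width = int(np.count_nonzero(binary.any(axis=0)))
    return height, width

img = np.zeros((100, 100), dtype=np.uint8)
img[20:60, 30:80] = 255  # a 40 x 50 bright effective area
print(effective_area_size(img))  # → (40, 50)
```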
  • the collection area of the display screen corresponds to an effective area of the output image of the optical module. That is to say, the size of the image in the acquisition area and the size of the image in the effective area may be in a scaling relationship.
  • the electronic device may first determine the side length of the binary image according to the horizontal gradient response value and the vertical gradient response value of the binary image, and then determine the side length of the collection area according to the scaling ratio and the side length of the binary image, the scaling ratio being the ratio between the image of the collection area and the image acquired by the optical module in the collection area.
  • the electronic device may generate the first image before displaying the first image on the display screen; or directly call the existing first image.
  • the embodiment of the invention is not specifically limited.
  • the first image can be placed in the electronic device in a pre-configured manner.
  • the electronic device can directly determine the third location as the second location.
  • the third position is a position of the target reference point on the display screen
  • the second position is a position of a center point of the collection area on the display screen.
  • the electronic device can directly determine the position of the target reference point on the display screen as the position of the center point of the collection area on the display screen; that is, the target reference point is determined as the center point of the collection area.
  • the target reference point in the embodiment of the present invention must be at the center of the effective area, otherwise there will be an error.
  • the electronic device may determine, according to the third location, the first offset before determining the second location, where the first offset is an offset vector used to modify the second location;
  • the first offset and the third location determine the second location.
  • the second offset is an offset vector of the target reference point in the second image with respect to a center point of the second image; the first offset is determined according to the second offset.
  • the first offset is determined according to the scaling ratio and the second offset, where the scaling is a ratio between an image of the collection area and an image acquired by the optical module in the collection area.
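  • under one plausible reading of the two paragraphs above, the correction can be sketched as follows (the subtraction sign convention and the function name are assumptions, not fixed by the text):

```python
import numpy as np

def collection_area_center(third_position, second_offset, scaling_ratio):
    """Correct the target reference point's on-screen position (the third
    position) by its offset from the center point of the second image.

    second_offset is in sensor pixels; multiplying by scaling_ratio
    converts it into display pixels (the first offset), which is then
    subtracted from the third position (assumed convention)."""
    first_offset = scaling_ratio * np.asarray(second_offset, dtype=float)
    return np.asarray(third_position, dtype=float) - first_offset

print(collection_area_center((100.0, 100.0), (10.0, -5.0), 0.5).tolist())
# → [95.0, 102.5]
```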
  • the electronic device may need to acquire the rotation angle before determining the second position.
  • the electronic device may acquire a rotation angle before determining the pattern on the target image unit in the second image, the rotation angle being the angle of the image collected by the optical module in the collection area relative to the image of the collection area.
  • the rotation angle of the optical module may be 0.
  • in the actual positioning of the optical module, the pattern on the target image unit may be determined first, without acquiring the rotation angle.
  • a method for an electronic device to acquire the zoom ratio and the rotation angle is further provided.
  • the electronic device may display a fifth image on the display screen, the fifth image including k third line segments parallel to each other, at least two adjacent ones of the k third line segments being overlaid on the collection area, and k ≥ 2; obtain a sixth image by using the optical module, the sixth image being an image of the fifth image collected by the optical module in the collection area; and determine the scaling ratio according to the first distance and the second distance, the first distance being the vertical distance between two adjacent third line segments in the fifth image, and the second distance being the vertical distance between the two adjacent third line segments in the sixth image.
  • the first distance is 50 pixels of the display screen.
  • each third line segment of the k third line segments may be vertically intersected by j fourth line segments, and at least one of the j fourth line segments is overlaid on the collection area.
  • the j fourth line segments are mutually parallel line segments, and the vertical distance between two adjacent fourth line segments of the j fourth line segments is 25 pixels of the display screen.
  • the fifth image is as shown in FIG. 4, and the sixth image is as shown in FIG. 5.
  • the electronic device may analyze the sixth image by a Hough transform and obtain a first angle; acquire a second processed image, where the second processed image is an image obtained by rotating the sixth image according to the first angle; and determine the rotation angle by comparing the second processed image with the fifth image.
  • the electronic device may analyze the rotation angle of the sixth image by using a Hough transform, rotate the sixth image according to the calculated first angle, and perform Sobel contour extraction and threshold segmentation; it then determines from the vertical projection whether 180 degrees needs to be added to or subtracted from the rotation angle, finally determining the rotation angle.
  • the Hough Transform is introduced below.
  • Hough transform is a feature extraction technique in image processing, which detects an object with a specific shape through a voting algorithm.
  • the classical Hough transform is used to detect straight lines in an image, and then the Hough transform is extended to the recognition of objects of any shape, mostly circles and ellipses.
  • the Hough transform process involves a transformation between two coordinate spaces. Specifically, a curve or a straight line having the same shape in one coordinate space is mapped to a point of another coordinate space to form a peak, thereby converting a problem of detecting an arbitrary shape into a statistical peak problem.
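As an illustration only (not the patent's implementation), the voting process just described can be sketched in pure Python using the normal-form parameterization rho = x·cos(theta) + y·sin(theta); all names here are invented for the sketch:

```python
import math
from collections import Counter

def hough_peak(points, theta_steps=180):
    """Vote every point into (theta, rho) accumulator cells; the cell with
    the most votes corresponds to the dominant straight line, turning line
    detection into a statistical peak-finding problem."""
    votes = Counter()
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(t, round(rho))] += 1     # quantize rho to integer cells
    (t, rho), _ = votes.most_common(1)[0]
    return math.pi * t / theta_steps, rho

# Ten collinear points on the vertical line x = 3: the peak cell recovers
# an angle near 0 and rho = 3.
angle, rho = hough_peak([(3, y) for y in range(10)])
```

All points on the line fall into the same (theta, rho) cell, so the line shows up as a peak in the accumulator even when the points are mixed with noise.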
  • the first angle of the sixth image may be determined using a Hough transform.
  • the Sobel operator is one of the operators in image processing and is mainly used for edge detection. Technically, it is a discrete difference operator that is used to calculate the approximation of the gradient of the image brightness function. Using this operator at any point in the image will produce the corresponding gradient vector or its normal vector.
  • Edge: a sudden change in information such as grayscale or structure, which can be used to segment an image.
  • the edges of an object appear as discontinuities in the local characteristics of the image, for example, abrupt changes in gray value, color, or texture. In essence, an edge marks the end of one region and the beginning of another.
  • the edge information of the image is very important in image analysis and human vision, and is an important attribute for extracting image features in image recognition.
  • the edges of an image have two characteristics: direction and amplitude.
  • the pixels along the edge change gently, while the pixels perpendicular to the edge change sharply. This variation may appear as a step-like, roof-like, or ramp-like change.
  • step-like changes often correspond to depth or reflectance boundaries in the image, while the latter two often reflect discontinuities in the surface normal direction.
  • the images to be analyzed are often more complex and need to be analyzed according to the actual situation.
  • Edge point: a point in the image with coordinates [x, y], located where the intensity changes significantly.
  • Edge segment: an edge point coordinate [x, y] together with its orientation; the orientation of an edge may be the gradient angle.
  • the Sobel operator calculates the gradient value G(x, y) at every pixel in the image and selects a threshold T; if G(x, y) > T at (x, y), the point is considered an edge point or part of an edge segment.
  • since the Sobel operator only needs two projections of the luminance values, namely the horizontal gradient response and the vertical gradient response, its edge detection is simple and fast.
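A minimal sketch of the thresholded Sobel edge detection described above, assuming the common 3x3 kernels and the G = |Gx| + |Gy| magnitude approximation; this is illustrative, not the patent's code:

```python
# Horizontal (Gx) and vertical (Gy) Sobel kernels.
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_edges(img, threshold):
    """Return the set of interior (x, y) points whose gradient magnitude
    G(x, y) = |Gx| + |Gy| exceeds the threshold T."""
    h, w = len(img), len(img[0])
    edges = set()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(KX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(KY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if abs(gx) + abs(gy) > threshold:
                edges.add((x, y))
    return edges

# A vertical step edge between columns 2 and 3 of a 6x6 image: only the
# two columns adjacent to the step respond.
img = [[0, 0, 0, 255, 255, 255] for _ in range(6)]
edge_points = sobel_edges(img, threshold=128)
```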
  • Sobel operator is only an exemplary description of the embodiment of the present invention, and the embodiment of the present invention is not limited thereto.
  • it may also be a Roberts operator, a Prewitt operator, a Laplacian of Gaussian (LoG) operator, etc.
  • the original image may be "binarized" by threshold segmentation before the image is detected. That is, the grayscale image is binarized to obtain a binary image, and image detection is performed on the basis of the binary image.
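The binarization step can be sketched as a simple global threshold; the value 128 mirrors the first brightness value mentioned in the embodiments, while the function and its name are illustrative assumptions:

```python
def binarize(gray, threshold=128):
    """Global threshold segmentation: map each gray level to 0 or 1,
    producing the binary image on which detection is then performed."""
    return [[1 if v > threshold else 0 for v in row] for row in gray]

binary = binarize([[10, 200], [130, 90]])
```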
  • a method for verifying the second location is also provided.
  • the electronic device may display a seventh image on the display screen, where the seventh image includes two straight lines intersecting at the center point of the collection area; acquire an eighth image by using the optical module, the eighth image being an image of the seventh image acquired by the optical module in the collection area; and verify the second location by analyzing the eighth image.
  • the seventh image is as shown in FIG. 8.
  • the eighth image is as shown in FIG. 9.
  • the electronic device may rotate the acquired image according to the previously calculated rotation angle, and then detect whether the two straight lines intersect perpendicularly. If not, the upper-layer interface is notified that positioning failed; if yes, positioning succeeded, the coordinates of the intersection are calculated, and the offset between the intersection and the coordinates of the center point of the effective area is used to further adjust the center coordinates of the collection area.
  • the resolution of the image displayed on the display screen may be the same as the resolution of the display screen, or may be different from the resolution of the display screen.
  • the image acquired when the display screen is completely black may be used as a background base image for removing a large amount of image noise by subtracting the background base map.
  • FIG. 10 is another exemplary flowchart of a method of detecting an optical module according to an embodiment of the present invention.
  • the algorithm process of the embodiment of the present invention includes:
  • the mobile phone screen is displayed in full black, and the optical module captures an image as a background base map.
  • the optical module collects an image once, obtaining an image as shown in FIG. 3; the collected image is used to analyze the effective area information, including the coordinates of the upper left corner of the effective area and the width and height of the effective area (for example, the effective area has a width of 190).
  • the line test chart covers the screen of the mobile phone and captures the image.
  • the mobile phone can construct a test chart consisting of a long line and a short line, and the size can cover the screen of the mobile phone, so that the mobile phone displays the picture in full screen and performs image acquisition, and the collected image is used to analyze the rotation angle and the scaling ratio of the optical module;
  • the physical size of the optical module is calculated according to the previously obtained scaling and effective area information.
  • the mobile phone can create an image of the same size as the screen resolution of the mobile phone.
  • a vertical line is drawn every 50 pixels from left to right, with line length equal to the height of the screen resolution; on each line, from top to bottom, a short horizontal line of length 25 pixels is drawn on the right side of the line every 25 pixels. Such an image is displayed on the screen of the mobile phone, the optical module performs an image acquisition, and the output image is as shown in FIG. 5.
  • the mobile phone uses a Hough transform to analyze the rotation angle of the output image, rotates the image according to the calculated angle, and performs Sobel contour extraction and threshold segmentation; then, according to the vertical projection of the small horizontal lines, it analyzes whether 180 degrees needs to be added to or subtracted from the rotation angle; meanwhile, by detecting the distance between two vertical lines (for example, 75 pixels), the scaling ratio of the acquired image is calculated (i.e., 75/50 = 1.5).
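The arithmetic in this step can be made concrete with the example numbers above (lines drawn 50 pixels apart appear 75 pixels apart in the capture, and the effective area is 190 acquired pixels wide). The function names are invented for the sketch, and taking the ratio as acquired over displayed is an assumption consistent with the 75/50 example:

```python
def scaling_ratio(acquired_gap, displayed_gap):
    # Ratio between the acquired image and the displayed image: lines drawn
    # 50 px apart on screen appear 75 px apart in the capture.
    return acquired_gap / displayed_gap

def physical_width(effective_width_acquired, scale):
    # Convert the effective-area width measured in acquired-image pixels
    # back into display pixels (the module's physical size on screen).
    return effective_width_acquired / scale

scale = scaling_ratio(75, 50)
width = physical_width(190, scale)
```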
  • the grid array image covers the screen of the mobile phone, and captures the image.
  • image detection is performed, square information is extracted, and the center coordinates of the optical module are calculated.
  • the mobile phone can construct an array image consisting of squares and covering the screen of the mobile phone, each square containing a series of horizontal or vertical lines; the mobile phone displays the square array image in full screen and performs an image acquisition once.
  • the acquired image is used to calculate the coordinates of the optical module relative to the upper left corner of the phone screen.
  • the optimal square width is 127/2.5 ≈ 51.
  • the design principle of the line segments inside each small square can be as shown in FIG. 6: the square is divided into left and right halves, the value in the left half representing the column number of the square and the value in the right half representing the row number; each half is further divided into upper and lower parts.
  • each horizontal line in the lower part represents the value 1, and each vertical line represents the value 4, so the lower part behaves like a hexadecimal digit; when the value represented by the lower part reaches 16, the lower part is cleared and one line is added to the upper part, where each line represents the value 16.
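The line-count encoding just described (lower horizontal line = 1, lower vertical line = 4, upper horizontal line = 16) can be sketched as an encode/decode pair; the roll-over at 16 follows the description, while the function names are invented for this sketch:

```python
def encode(n):
    """Split a row or column index into line counts for one half-square:
    (upper horizontal lines, lower vertical lines, lower horizontal lines),
    worth 16, 4, and 1 each."""
    upper_h, rem = divmod(n, 16)        # lower part rolls over at 16
    lower_v, lower_h = divmod(rem, 4)
    return upper_h, lower_v, lower_h

def decode(upper_h, lower_v, lower_h):
    """Recover the index from the counted lines."""
    return 16 * upper_h + 4 * lower_v + lower_h

# Column 27: one upper horizontal line, two vertical lines and three
# horizontal lines in the lower part.
lines = encode(27)
```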
  • the image of FIG. 7 acquired by the optical module is rotated according to the previously calculated rotation angle, and then the square information in the image is detected.
  • from the decoded values and the coordinates of the upper left corner of the detected square, the coordinates of the upper left corner of the current square on the entire mobile phone screen can be calculated.
  • the offset of that coordinate relative to the center coordinate of the effective area of the captured image is calculated and converted by the scaling ratio to obtain the coordinates of the center point of the image acquisition effective area relative to the entire mobile phone screen, thereby realizing the preliminary positioning of the optical module.
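A sketch of the preliminary positioning arithmetic, under two assumptions stated here explicitly: the scaling ratio is acquired pixels over display pixels (so the acquired-image offset is divided by it), and the square's screen corner is its decoded (column, row) index times the square side. All names are illustrative:

```python
def collection_center_on_screen(col, row, square_side,
                                square_corner_acq, image_center_acq, scale):
    """Preliminary positioning: the square's on-screen corner is known from
    its decoded (col, row); the acquired-image offset from that corner to
    the image center, divided by the scaling ratio, converts to screen
    pixels and is added to the corner."""
    corner_x, corner_y = col * square_side, row * square_side   # on screen
    dx = (image_center_acq[0] - square_corner_acq[0]) / scale
    dy = (image_center_acq[1] - square_corner_acq[1]) / scale
    return corner_x + dx, corner_y + dy

# Square (col 4, row 7) of side 51, corner seen at (20, 35) in the capture,
# capture center at (95, 95), scale 1.5:
cx, cy = collection_center_on_screen(4, 7, 51, (20, 35), (95, 95), 1.5)
```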
  • the verification image covers the screen of the mobile phone, and the image is captured.
  • the mobile phone can design a verification image covering the screen of the mobile phone according to the calculated coordinates of the optical module, let the mobile phone display the image and perform an image acquisition, and perform coordinate verification and fine adjustment processing according to the captured image.
  • the mobile phone can construct a full-black image at the screen resolution of the mobile phone, in which two white straight lines (length 50) intersect each other perpendicularly, as shown in FIG. 8. The phone displays this image and performs an image acquisition to obtain the image shown in FIG. 9.
  • the acquired image is rotated according to the previously calculated angle, and it is then detected whether the two straight lines intersect perpendicularly. If not, the upper-layer interface is notified that positioning failed; if yes, positioning succeeded, and the offset between the coordinates of the intersection point and the coordinates of the effective area center point is calculated to further precisely adjust the center coordinates of the optical module.
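The verification step can be sketched as a perpendicularity check plus a 2x2 line-intersection solve; representing each detected line as a point and a direction vector is an assumption of this sketch, not the patent's representation:

```python
def line_intersection(p1, d1, p2, d2, tol=1e-6):
    """Verification step: the two detected lines must be perpendicular;
    returns their intersection, or None to signal positioning failure.
    Each line is given as a point p and a direction vector d."""
    if abs(d1[0] * d2[0] + d1[1] * d2[1]) > tol:      # dot product != 0
        return None                                    # not perpendicular
    # Solve p1 + t*d1 = p2 + s*d2 for t via the 2x2 determinant.
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    det = d2[0] * d1[1] - d1[0] * d2[1]
    t = (d2[0] * by - d2[1] * bx) / det
    return p1[0] + t * d1[0], p1[1] + t * d1[1]

# A horizontal and a vertical line crossing at (120, 80):
hit = line_intersection((0, 80), (1, 0), (120, 0), (0, 1))
skew = line_intersection((0, 0), (1, 0), (0, 0), (1, 1))   # 45 degrees: fails
```

The returned intersection, compared against the effective-area center, gives the fine correction applied to the module's center coordinates.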
  • the mobile phone can output information such as the center coordinates, physical size, rotation angle, zoom ratio, and image acquisition effective area of the optical module.
  • FIG. 11 is a schematic block diagram of an apparatus 400 for detecting an optical module according to an embodiment of the present invention.
  • the apparatus 400 includes:
  • a display module 410 configured to display a first image on the display screen 420, the first image including a plurality of image units, each of the plurality of image units being provided with a pattern, the pattern on each image unit being used to indicate the position of a reference point in that image unit on the first image.
  • the processing module 440 is configured to:
  • obtain a second image by the optical module 430, where the second image is an image of the first image captured by the optical module 430 in the collection area of the display screen 420; determine a pattern on a target image unit in the second image; determine, according to the pattern on the target image unit, a first position of a target reference point in the target image unit, the first position being the position of the target reference point on the first image; and determine, according to the first position, a second position, the second position being the position of the center point of the collection area on the display screen 420.
  • processing module 440 is specifically configured to:
  • the pattern on each of the image units described above includes a horizontal line segment and/or a vertical line segment.
  • each of the image units includes a left half area and a right half area; a line segment in the left half area is used to indicate the column to which the image unit belongs on the first image, and a line segment in the right half area is used to indicate the row to which the image unit belongs on the first image.
  • each of the image units includes an upper half area and a lower half area, a horizontal line segment in the lower half area represents the number 1, and a vertical line segment in the lower half area represents the number 4, the upper A horizontal line segment in the half-sided area represents the number 16.
  • processing module 440 is further configured to:
  • generate the first image before the display module 410 displays the first image on the display screen 420.
  • the first image is an array image
  • each of the plurality of image units is a square image unit
  • the collection area is a square area
  • the processing module 440 is specifically configured to:
  • determine a side length of the collection area; determine a side length of each image unit according to the side length of the collection area, the side length of the collection area being greater than the side length of each image unit; and generate the first image according to the side length of each image unit.
  • the side length of the collection area is 2.5 times the side length of each of the image units.
  • the display module 410 is further configured to:
  • a third image is displayed on the display screen 420.
  • the brightness value of each pixel in the third image is greater than the first brightness value.
  • the processing module 440 is more specifically configured to:
  • the fourth image is obtained by the optical module 430, and the fourth image is an image of the third image collected by the optical module 430 in the collection area; the fourth image is binarized to obtain a binary image; The side length of the acquisition region is determined according to the horizontal gradient response value and the vertical gradient response value of the binary image.
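One way to read this step: the bright square's extent can be recovered from where the rows and columns of the binary image contain lit pixels. This is a deliberately simplified stand-in for the gradient-response analysis, with invented names:

```python
def bright_extent(binary):
    """Estimate the height and width of the bright (effective) region of a
    binary image by counting the rows and columns that contain any lit
    pixel; for a square region both counts equal the side length."""
    rows = sum(1 for row in binary if any(row))
    cols = sum(1 for j in range(len(binary[0]))
               if any(row[j] for row in binary))
    return rows, cols

# A 3x3 lit square inside a 6x6 dark image:
img = [[1 if 1 <= i <= 3 and 2 <= j <= 4 else 0 for j in range(6)]
       for i in range(6)]
extent = bright_extent(img)
```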
  • the first brightness value is 128.
  • processing module 440 is more specifically configured to:
  • the side length of the acquisition area is determined according to the scaling ratio and the side length of the binary image, and the scaling ratio is a ratio between an image of the collection area and an image acquired by the optical module 430 in the collection area.
  • processing module 440 is specifically configured to:
  • processing module 440 is more specifically configured to:
  • determine a second offset, the second offset being an offset vector of the target reference point in the second image with respect to the center point of the second image; and determine the first offset according to the second offset.
  • processing module 440 is more specifically configured to:
  • the first offset is determined according to the scaling and the second offset, and the scaling is a ratio between an image of the acquisition area and an image acquired by the optical module 430 in the collection area.
  • processing module 440 is specifically configured to:
  • acquire a rotation angle, where the rotation angle is the angle by which the image acquired by the optical module 430 in the collection area is rotated relative to the image of the collection area; acquire a first processed image, the first processed image being an image obtained by rotating the second image according to the rotation angle; and determine a pattern on the target image unit by analyzing the first processed image.
  • processing module 440 is further configured to:
  • a scaling ratio is obtained, where the scaling ratio is a ratio between an image of the collection area and an image acquired by the optical module 430 in the collection area.
  • the display module 410 is further configured to:
  • the fifth image includes k third line segments parallel to each other, and at least two adjacent third line segments of the k third line segments are overlaid on the collection area, k ⁇ 2; wherein the processing module 440 is specifically configured to:
  • obtain a sixth image by the optical module 430, where the sixth image is an image of the fifth image collected by the optical module 430 in the collection area; and determine the scaling ratio according to a first distance and a second distance, the first distance being the vertical distance between two adjacent third line segments in the fifth image, and the second distance being the vertical distance between two adjacent third line segments in the sixth image.
  • the first distance is 50 pixels of the display screen 420.
  • each third line segment of the k third line segments is vertically disposed with j fourth line segments, and at least one fourth line segment of the j fourth line segments covers the collection area.
  • the j fourth line segments are mutually parallel line segments, and a vertical distance between two adjacent fourth line segments of the j fourth line segments is 25 pixel points of the display screen 420.
  • processing module 440 is further configured to:
  • Obtaining a rotation angle which is an angle of rotation of an image acquired by the optical module 430 in the collection area with respect to an image of the collection area.
  • processing module 440 is specifically configured to:
  • parse the sixth image by a Hough transform and obtain a first angle; acquire a second processed image, where the second processed image is an image obtained by rotating the sixth image according to the first angle; and determine the rotation angle by comparing the second processed image with the fifth image.
  • the display module 410 is further configured to:
  • a seventh image is displayed on the display screen 420.
  • the seventh image includes two straight lines that cross at the center point of the collection area.
  • the processing module 440 is further configured to:
  • An eighth image is obtained by the optical module 430, where the eighth image is an image of the seventh image acquired by the optical module 430 in the collection area; and the second position is verified by analyzing the eighth image.
  • the resolution of the image displayed on the display is the same as the resolution of the display.
  • a module may include at least one of the following components: an application-specific integrated circuit (ASIC), an electronic circuit, a processor for executing one or more software or firmware programs (e.g., a shared processor, a dedicated processor, or a group processor) and memory, merged logic circuitry, and other components that support the described functionality.
  • the apparatus 400 herein is embodied in the form of a functional unit.
  • the device 400 may be specifically the electronic device mentioned in the foregoing method embodiment, and may also be the electronic device 100 shown in FIG. 1.
  • the device 400 can be used to perform various processes and/or steps in the foregoing method embodiments. To avoid repetition, details are not described herein.
  • each step of the method embodiment in the embodiment of the present invention may be completed by an integrated logic circuit of hardware in a processor or an instruction in a form of software. More specifically, the steps of the method disclosed in the embodiments of the present invention may be directly implemented as a hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software modules can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like. The storage medium is located in the memory, and the processor reads the information in the memory and combines the hardware to complete the steps of the above method.
  • the method embodiments of the present invention may be applied to a processor or implemented by a processor.
  • FIG. 12 shows another apparatus 500 for detecting an optical module according to an embodiment of the present invention, comprising: a processor 510, a memory 520, a display screen 530, and an optical module 540.
  • the collection area of the optical module 540 is located in at least part of the display area of the display screen 530.
  • the memory 520 is configured to store an instruction
  • the processor 510 is configured to execute the instructions stored by the memory 520, and execution of the instructions causes the processor 510 to perform the following operations:
  • display a first image on the display screen 530, the first image comprising a plurality of image units, each of the plurality of image units being provided with a pattern, the pattern on each image unit being used to indicate the position of a reference point in that image unit on the first image.
  • obtain a second image, where the second image is an image of the first image captured by the optical module 540 in the collection area of the display screen 530; determine a pattern on a target image unit in the second image; determine, according to the pattern on the target image unit, a first position of a target reference point in the target image unit, the first position being the position of the target reference point on the first image; and determine, according to the first position, a second position, the second position being the position of the center point of the collection area on the display screen 530.
  • the electronic device can display a pre-designed image (display image) on the display screen; the image displayed on the display screen is then captured through the optical module to obtain an acquired image; and the acquired image is analyzed to determine the position of the center point of the collection area on the display screen, finally realizing the positioning of the optical module.
  • the apparatus 500 shown in FIG. 12 can implement various processes implemented by the terminal device in the foregoing method embodiments. To avoid repetition, details are not described herein again.
  • the device 500 may be specifically the device 400 in the above embodiment, and may also correspond to the electronic device in the above method embodiment.
  • the apparatus 500 can be used to perform various processes and/or steps in the foregoing method embodiments. To avoid repetition, details are not described herein again.
  • an electronic device is further provided, and the electronic device includes the device 400 or the device 500 in the above embodiment.
  • the display screen may be an Organic Light-Emitting Diode (OLED) display screen
  • the optical module performs its detection function by using at least some of the OLED pixel units of the OLED display screen as a light source.
  • the processor mentioned in the embodiments of the present invention may be an integrated circuit chip having signal processing capability; the methods, steps, and logical blocks disclosed in the embodiments of the present invention may be implemented or executed by it.
  • the above processor may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or Other programmable logic devices, transistor logic devices, discrete hardware components, and the like.
  • the general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory referred to in the embodiments of the present invention may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory can be a random access memory (RAM) that acts as an external cache.
  • the memory in the embodiments of the present invention may also be a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDR SDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synchlink dynamic random access memory (SLDRAM), or a direct rambus random access memory (DR RAM).
  • the terms "first image" and "second image" may be employed in embodiments of the invention, but the images are not limited by these terms, which are only used to distinguish images from each other.
  • the disclosed systems, devices, and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative: the division into units is only a logical function division, and in actual implementation multiple units or components may be combined.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiments of the present invention.
  • each functional unit in the embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the technical solution of the embodiments of the present invention may be embodied in the form of a software product stored in a storage medium.
  • the software product includes a plurality of instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program codes, such as a USB flash drive, a mobile hard disk, a read only memory, a random access memory, a magnetic disk, or an optical disk.


Abstract

A method, an apparatus, and an electronic device for detecting an optical module. The method includes: displaying a first image on a display screen, the first image including a plurality of image units, each of which is provided with a pattern used to indicate the position, on the first image, of a reference point in that image unit (210); acquiring a second image, the second image being an image of the first image captured by the optical module in a collection area of the display screen (220); determining a pattern on a target image unit in the second image (230); and determining, according to the pattern on the target image unit, the position of the center point of the collection area on the display screen (240, 250). By displaying a pre-designed image on the display screen, performing image acquisition through the optical module, and then analyzing the acquired image, the installation information of the optical module can be effectively obtained.

Description

Method, apparatus and electronic device for detecting an optical module
Technical field
Embodiments of the present invention relate to the field of optical modules, and more specifically, to a method, an apparatus, and an electronic device for detecting an optical module.
Background
At present, fingerprint recognition is a standard feature of smartphones. The traditional solution designs the fingerprint recognition key as an independent physical key or a virtual key; for example, some smartphones integrate the optical module into the front Home key of the phone and use the Home key as the fingerprint recognition key. The optical module includes a fingerprint sensor.
Further, to reduce the size of smartphones, more and more manufacturers design displays with extremely narrow bezels. However, there is no longer enough space below such a display to place a fingerprint recognition key. To solve this problem, one feasible technical solution is in-display fingerprint recognition. Specifically, the fingerprint recognition function is fully integrated into the display screen, and the user can perform fingerprint recognition by directly touching a designated area of the display of the smart terminal. For example, the optical module is assembled on the back of the display screen.
In-display fingerprint recognition not only meets the current design requirements of phone manufacturers, but also makes smartphone design more concise. Therefore, applying in-display fingerprint recognition to smart terminals will be a future development trend.
However, in some application scenarios, a smartphone using in-display fingerprint recognition requires applications to be able to obtain the installation information of the optical module by themselves, for example, the installation position, rotation direction, and other information.
At present, there is no solution to the above problem.
Summary of the invention
A method, an apparatus, and an electronic device for detecting an optical module are provided, which can effectively obtain the installation information of the optical module.
In a first aspect, a method for detecting an optical module is provided, applied to an electronic device including a display screen and an optical module;
The method includes:
displaying a first image on the display screen, the first image including a plurality of image units, each of the plurality of image units being provided with a pattern, the pattern on each image unit being used to indicate the position of a reference point in that image unit on the first image; acquiring a second image through the optical module, the second image being an image of the first image captured by the optical module in a collection area of the display screen; determining a pattern on a target image unit in the second image; determining, according to the pattern on the target image unit, a first position of a target reference point in the target image unit, the first position being the position of the target reference point on the first image; and determining a second position according to the first position, the second position being the position of the center point of the collection area on the display screen.
Therefore, in the solution of the embodiments of the present invention, a pre-designed image (display image) can be displayed on the display screen; the image displayed on the display screen is then captured by the optical module to obtain an acquired image; and the acquired image is analyzed to determine the position of the center point of the collection area on the display screen, finally realizing the positioning of the optical module.
In addition, this solution uses the pattern set on each image unit to indicate the position of the reference point in the image unit on the first image, and then determines the position of the center point of the collection area on the display screen based on the position of the reference point on the first image. Compared with determining that position from the position of the image unit on the first image, this solution achieves higher positioning accuracy.
In some possible implementations, determining the second position according to the first position includes:
determining a third position according to the first position and mapping relationship information, the third position being the position of the target reference point on the display screen, and the mapping relationship information including the first position and the third position corresponding to the first position; and determining the second position according to the third position.
In some possible implementations, the pattern on each image unit includes horizontal line segments and/or vertical line segments; each image unit includes a left half area and a right half area, the line segments in the left half area being used to indicate the column to which the image unit belongs on the first image, and the right half area being used to indicate the row to which the image unit belongs on the first image.
In some possible implementations, the pattern on each image unit includes horizontal line segments and/or vertical line segments; each image unit includes an upper half area and a lower half area, a horizontal line segment in the lower half area representing a first number, a vertical line segment in the lower half area representing a second number, and a horizontal line segment in the upper half area representing a third number.
In some possible implementations, a horizontal line segment in the lower half area represents the number 1, a vertical line segment in the lower half area represents the number 4, and a horizontal line segment in the upper half area represents the number 16.
The first image is an array image, each of the plurality of image units is a square image unit, and the collection area is a square area; the method further includes:
before the first image is displayed on the display screen, determining the side length of the collection area; determining the side length of each image unit according to the side length of the collection area, the side length of the collection area being greater than the side length of each image unit; and generating the first image according to the side length of each image unit.
In some possible implementations, determining the side length of the collection area includes:
displaying a third image on the display screen, the brightness value of each pixel in the third image being greater than a first brightness value; acquiring a fourth image through the optical module, the fourth image being an image of the third image captured by the optical module in the collection area; binarizing the fourth image to obtain a binary image; and determining the side length of the collection area according to the horizontal gradient response values and the vertical gradient response values of the binary image.
In some possible implementations, determining the side length of the collection area according to the horizontal gradient response values and the vertical gradient response values of the binary image includes:
determining the side length of the binary image according to the horizontal gradient response values and the vertical gradient response values of the binary image; and determining the side length of the collection area according to a scaling ratio and the side length of the binary image, the scaling ratio being the ratio between the image of the collection area and the image captured by the optical module in the collection area.
In some possible implementations, the method further includes:
determining a first offset, the first offset being usable to correct the second position; wherein determining the second position according to the third position includes:
determining the second position according to the first offset and the third position.
The technical solution of the embodiments of the present invention can further improve positioning accuracy.
In some possible implementations, determining the first offset includes:
determining a second offset, the second offset being the offset vector of the target reference point in the second image relative to the center point of the second image; and determining the first offset according to a scaling ratio and the second offset, the scaling ratio being the ratio between the image of the collection area and the image captured by the optical module in the collection area.
In some possible implementations, the method further includes:
acquiring a rotation angle, the rotation angle being the angle by which the image captured by the optical module in the collection area is rotated relative to the image of the collection area; wherein determining the pattern on the target image unit in the second image includes:
acquiring a first processed image, the first processed image being the image obtained by rotating the second image by the rotation angle; and determining the pattern on the target image unit by analyzing the first processed image.
In some possible implementations, before the second position is determined according to the first position, the method further includes:
acquiring a scaling ratio, the scaling ratio being the ratio between the image of the collection area and the image captured by the optical module in the collection area; acquiring the scaling ratio includes:
displaying a fifth image on the display screen, the fifth image including k mutually parallel third line segments, at least two adjacent third line segments of the k third line segments covering the collection area, and k ≥ 2; acquiring a sixth image through the optical module, the sixth image being an image of the fifth image captured by the optical module in the collection area; and determining the scaling ratio according to a first distance and a second distance, the first distance being the vertical distance between two adjacent third line segments in the fifth image, and the second distance being the vertical distance between two adjacent third line segments in the sixth image.
In some possible implementations, the method further includes:
acquiring a rotation angle, the rotation angle being the angle by which the image captured by the optical module in the collection area is rotated relative to the image of the collection area; acquiring the rotation angle includes:
analyzing the sixth image by a Hough transform to obtain a first angle; acquiring a second processed image, the second processed image being the image obtained by rotating the fourth image by the first angle; and determining the rotation angle by comparing the second processed image with the fifth image.
In some possible implementations, the method further includes:
displaying a seventh image on the display screen, the seventh image including two straight lines crossing at the center point of the collection area; acquiring an eighth image through the optical module, the eighth image being an image of the seventh image captured by the optical module in the collection area; and verifying the second position by analyzing the eighth image.
In some possible implementations, the side length of the collection area is 2.5 times the side length of each image unit. This solution effectively ensures that a target image unit exists within the effective area.
In some possible implementations, the first brightness value is 128.
In some possible implementations, the resolution of the image displayed on the display screen is the same as the resolution of the display screen.
In some possible implementations, the image displayed on the display screen covers the collection area.
第二方面,提供了一种检测光学模组的装置,包括;用于执行上述第一方面或第一方面的任意可能的实现方式中的方法。
具体地,该装置包括用于执行上述第一方面或第一方面的任意可能的实现方式中的方法的单元。
第三方面，提供了一种检测光学模组的装置，包括：
显示模块,用于在所述显示屏上显示第一图像,所述第一图像包括多个图像单元,所述多个图像单元中的每个图像单元上设置有图案,所述每个图像单元上的图案用于指示所述每个图像单元中的参考点在所述第一图像上的位置。
处理模块,所述处理模块用于:
通过所述光学模组获取第二图像,所述第二图像为所述光学模组在所述显示屏的采集区域内采集的所述第一图像的图像;确定所述第二图像中目标图像单元上的图案;根据所述目标图像单元上的图案,确定所述目标图像单元中目标参考点的第一位置,所述第一位置为所述目标参考点在所述第一图像上的位置;根据所述第一位置,确定第二位置,所述第二位置为所述采集区域的中心点在所述显示屏上的位置。
第四方面，提供了一种检测光学模组的装置，包括：
存储器和处理器,所述存储器用于存储指令,所述处理器用于执行所述存储器存储的指令,并且当所述处理器执行所述存储器存储的指令时,执行上述第一方面或第一方面的任意可能的实现方式中的方法。
第五方面,提供一种计算机可读介质,用于存储计算机程序,该计算机程序包括用于执行上述第一方面或第一方面的任意可能的实现方式中的方法的指令。
第六方面,提供了一种电子设备,包括第二方面中所述的检测光学模组的装置。
在一些可能的实现方式中，所述显示屏为有机发光二极管OLED显示屏，所述光学模组利用所述OLED显示屏的至少部分OLED像素单元作为光源来执行检测功能。
附图说明
图1是本发明实施例的电子设备的示例。
图2是本发明实施例的检测光学模组的方法的示例性流程图。
图3是本发明实施例的第四图像的示意图。
图4是本发明实施例的第五图像的示意图。
图5是本发明实施例的第六图像的示意图。
图6是本发明实施例的第一图像的示意图。
图7是本发明实施例的第二图像的示意图。
图8是本发明实施例的第七图像的示意图。
图9是本发明实施例的第八图像的示意图。
图10是本发明实施例的检测光学模组的方法的另一示例性流程图。
图11是本发明实施例的电子设备的示意性框图。
图12是本发明实施例的电子设备的另一示意性框图。
具体实施方式
下面将结合附图,对本发明实施例中的技术方案进行描述。
应理解，本发明实施例适用于任何配置有光学模组的装置以及设备。例如，智能移动电话；小型个人携带型设备，如掌上电脑（Personal Digital Assistant，PDA）、电子书（electronic book，E-book）等。为了便于理解，作为示例而非限定性地，下文中以智能手机为例进行说明。
图1为本发明实施例的电子设备100的侧视截面示意图。
具体地，该电子设备100包括显示屏110与装配在显示屏110的背面的光学模组120。光学模组120的采集区域130位于显示屏110中。实际工作中，光学模组120在采集区域130中进行图像的采集。
例如，光学模组120可以在采集区域130中采集显示屏110上显示的图像。
又例如，光学模组120可以为光学指纹模组或者其他类型的光学生物检测模组，其也可以用于进行指纹识别或者其他生物特征识别。具体地，所述光学模组120采用光学指纹模组时，其可以设置在所述显示屏110下方的局部区域（即，显示屏下指纹结构）或者集成到显示屏110内部的局部区域（即，显示屏内指纹结构），可以用于采集触摸在所述采集区域130上的手指的指纹图像。在这种情况下，所述采集区域130即是所述光学指纹模组的指纹检测有效区域，其位于所述显示屏110的至少部分显示区域，以实现屏内（In-Display）指纹检测。其中，作为一种优选的实现方案，所述显示屏110可以为有机发光二极管（OLED）显示屏，其采用自发光的OLED像素单元作为显示单元；在所述显示屏110中，位于所述采集区域130的OLED像素单元可以同时作为所述光学指纹模组的光源。可替代地，在其他实施例中，所述光学指纹模组的指纹检测有效区域也可以覆盖所述显示屏110的整个显示区域，从而实现全屏指纹检测。
但是,实际应用场景中,可能需要应用程序能自行获得光学模组120的安装信息。
例如,光学模组120用于进行指纹识别时,需要在显示屏110上显示指纹提示图案,以指示用户在合适的区域输入指纹。但是,指纹提示图案的位置一般是根据光学模组120的装配位置确定的,通常是固定不变的。由于装配工艺不精确或设计修改的原因,可能导致不同批次的光学模组120的装配位置不完全一致。这种情形下,如果依然在显示屏的固定不变的该特定位置上显示指纹提示图案,可能会出现指纹提示图案的定位不准确的问题,导致光学模组120能够采集到的有效指纹面积变少,降低指纹识别的效率,影响用户体验。
为了解决上述问题,本发明实施例中提供了一种检测光学模组的方法。
具体地，提供了一套完整的算法，通过设计在显示屏上显示的图像，不仅能够确定出采集区域的中心点在显示屏上的位置，还可以获取光学模组的物理大小、安装时的旋转角度以及输出图像的缩放比例等，进而实现手机屏幕下光学模组（Sensor）的自适应定位检测功能。
下面对本发明实施例的检测光学模组的方法进行说明。应理解,本发明实施例的检测光学模组的方法可以应用于任何包括显示屏和光学模组的电子设备。
图2是本发明实施例的检测光学模组的方法200的示例性流程图。
如图2所示,该方法200包括:
210,在该显示屏上显示第一图像,该第一图像包括多个图像单元,该多个图像单元中的每个图像单元上设置有图案,每个图像单元上的图案用于指示上述每个图像单元中的参考点在该第一图像上的位置。
220,通过该光学模组获取第二图像,该第二图像为该光学模组在该显示屏的采集区域内采集的该第一图像的图像。
230,确定该第二图像中目标图像单元上的图案。
240,根据该目标图像单元上的图案,确定该目标图像单元中目标参考点的第一位置,该第一位置为该目标参考点在该第一图像上的位置。
250,根据该第一位置,确定第二位置,该第二位置为该采集区域的中心点在该显示屏上的位置。
简而言之,电子设备可以在显示屏上显示预先设计好的图像(显示图像);然后通过光学模组对显示屏上显示的图像进行图像采集,并得到采集图像;再通过对采集图像的分析,确定出采集区域的中心点在该显示屏上的位置,最终实现光学模组的定位操作。
本发明实施例中，电子设备在对光学模组进行定位操作时，可以先通过对第二图像的分析，确定出第二图像中的目标参考点在显示图像上的位置；进而，该电子设备可以根据该目标参考点在显示图像上的位置，确定出采集区域的中心点在该显示屏上的位置。
例如,电子设备可以首先根据该第一位置和映射关系信息,确定第三位置,该第三位置为该目标参考点在该显示屏上的位置,该映射关系信息包括该第一位置和该第一位置对应的该第三位置;然后根据该第三位置,确定该第二位置。
应注意,本发明实施例中,电子设备可以通过第三位置确定第二位置;电子设备还可以直接将第一位置,确定为第二位置。本发明实施例不做具体限定。例如,该第一位置为目标参考点在该第一图像上的坐标位置,且第一图像和显示屏的坐标系相同时,电子设备可以直接将该第一位置,确定为该第二位置。
为便于描述，本发明实施例中，将第二图像中目标图像单元上的目标参考点在该第一图像上的位置，称为第一位置。将该目标参考点在该显示屏上的位置称为第三位置。将该采集区域的中心点在该显示屏上的位置，称为第二位置。同样地，为便于描述，将电子设备在对光学模组进行定位时在显示屏上显示的图像称为第一图像，将光学模组在该显示屏的采集区域内采集该第一图像后获得的图像称为第二图像。类似地，还有第三图像、第四图像、第五图像、第六图像等等，但不应限于这些术语。这些术语仅用来将这些图像彼此区分开。
应理解,本发明实施例中,光学模组输出的图像实际包括但不限于光学模组在采集区域采集到的图像。
例如，在显示屏上显示一张白色图像，光学模组采集后输出的图像包括黑色区域和白色区域，白色区域的图像与显示屏上采集区域的图像对应。本发明实施例中，将光学模组输出的图像中与该显示屏的采集区域对应的区域，称为有效区域（例如，如图3所示的白色区域）。
下面对本发明实施例的第一图像进行说明。
可选地,本发明实施例的第一图像上的每个图像单元上的图案可以包括横线段和/或竖线段。
例如，上述每个图像单元可以包括左半侧区域和右半侧区域，左半侧区域内的线段用于指示图像单元在该第一图像上所属的列，右半侧区域内的线段用于指示图像单元在该第一图像上所属的行。
进一步地,上述每个图像单元还可以包括上半侧区域和下半侧区域,该下半侧区域内的一条横线段表示数字1,该下半侧区域内的一条竖线段表示数字4,该上半侧区域内的一条横线段表示数字16。
更具体地,第一图像的每一个图像单元可以分为左右两侧,左侧信息值代表图像单元所在列序号,右侧信息值代表图像单元所在行序号;每一侧区域再均分为上下两部分,下部分每根横线代表数值1,每根竖线代表数值4,即下部分类似于4进制;当下部分直线代表的数值满16时,清空下方直线,同时上方直线加1;故而规律为:区域内部4进制,上下区域之间为16进制。由此,电子设备可以通过目标图像单元上的线段,可以确定出目标图像单元中的参考点在该第一图像的位置。
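上述编码规律可以用如下示意性代码表达（仅为便于理解的草图，函数名与数值均为本文假设，且只覆盖文中明确给出的三类线段，并非本发明实施例的实际实现）：

```python
def encode(value):
    """把一个行/列序号编码为 (上半区横线数, 下半区竖线数, 下半区横线数)。
    规律：下半区每根横线计 1、每根竖线计 4；每满 16 进位为上半区一根横线。"""
    upper_h = value // 16      # 上半区横线，每根代表 16
    rem = value % 16
    lower_v = rem // 4         # 下半区竖线，每根代表 4
    lower_h = rem % 4          # 下半区横线，每根代表 1
    return upper_h, lower_v, lower_h

def decode(upper_h, lower_v, lower_h):
    """由检测到的线段计数反推序号。"""
    return upper_h * 16 + lower_v * 4 + lower_h

# 例如序号 23 = 1*16 + 1*4 + 3*1：上半区 1 根横线，下半区 1 根竖线、3 根横线
assert encode(23) == (1, 1, 3)
assert decode(*encode(23)) == 23
```

电子设备在第二图像中数出目标图像单元左、右半侧各区域内的线段数，分别按上述规律反推，即得该图像单元所在的列序号与行序号。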
例如,该第一图像如图6所示,该第二图像如图7所示。
需要注意的是，本发明实施例中，由于电子设备是通过分析目标图像单元上的图案确定第一位置的，因此，在设计第一图像时，不仅需要确保第一图像覆盖采集区域，还需要确保第二图像中至少包含一个完整的图像单元。
因此,本发明实施例中还提供了一种生成第一图像的方法。
可选地,该电子设备在该显示屏上显示第一图像之前,还可以生成该第一图像。
例如,假设该第一图像为阵列图像,该多个图像单元中的每个图像单元为正方形图像单元,该采集区域为正方形区域;电子设备可以确定该采集区域的边长;根据该采集区域的边长,确定上述每个图像单元的边长,该采集区域的边长大于上述每个图像单元的边长;根据上述每个图像单元的边长,生成该第一图像。
可选地,该采集区域的边长为上述每个图像单元的边长的2.5倍。
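之所以取2.5倍，直观上是因为一个长度为2.5倍方格边长的窗口，无论落在方格阵列的什么位置，都至少覆盖一个完整方格。下面用一段示意代码验证这一点（`complete_cells` 是本文为说明而假设的函数，方格边长 51 取自下文示例，非本发明实施例的实现）：

```python
import math

def complete_cells(window, cell, offset):
    """一维情形：窗口 [offset, offset + window) 内完整方格的个数，
    方格边界位于 0, cell, 2*cell, ... 处。"""
    first = math.ceil(offset / cell)              # 窗口内第一条完整方格的左边界格线
    last = math.floor((offset + window) / cell)   # 窗口内最后一条格线
    return max(0, last - first)

cell = 51               # 方格边长（文中示例值）
window = 2.5 * cell     # 采集区域边长
# 无论窗口落在何处（偏移遍历 0 ~ cell 间的各个位置），都至少含 1 个完整方格
assert all(complete_cells(window, cell, off / 10) >= 1
           for off in range(0, 10 * cell))
```

在水平、垂直两个方向上分别成立，故二维的有效区域内至少存在一个完整的图像单元。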
本发明实施例中，电子设备可以通过阈值分割先将光学模组在采集区域采集的图像二值化，然后通过垂直方向和水平方向的亮度值投影，确定该采集区域的边长。
具体而言,电子设备可以在该显示屏上显示第三图像,该第三图像中每个像素点的亮度值均大于第一亮度值;通过该光学模组获取第四图像,该第四图像为该光学模组在该采集区域内采集的该第三图像的图像;对该第四图像进行二值化,获取二值图像;根据该二值图像的水平梯度响应值和垂直梯度响应值,确定该采集区域的边长。
可选地,该第一亮度值为128。
例如，该第三图像为一幅全白图像，该第四图像如图3所示。
更具体地，电子设备可以将第四图像上的点的灰度值设置为0或255，使整个图像呈现出明显的黑白效果。即，将256个亮度等级的灰度图像通过适当的阈值选取，获得仍然可以反映图像整体和局部特征的二值图像，进而提取二值图像特征，这是一种特殊的灰度变换方法，称为图像的二值化。该二值图像特征可以是由一串0、1组成的特征向量。
本领域技术人员可以理解，在数字图像处理中，二值图像占有非常重要的地位，特别是在实用的图像处理中，以二值图像处理实现而构成的系统是很多的。因此，采用“二值化”处理图像，不仅应用性强，而且有利于在对图像做进一步处理时，图像的集合性质只与像素点的值为0或255的点的位置有关，不再涉及像素的多级值，使处理变得简单，而且数据的处理和压缩量小。
为了得到理想的二值图像，本发明实施例中可以采用封闭、连通的边界定义不交叠的区域。具体地，将所有灰度大于或等于阈值的像素判定为属于特定物体，其灰度值用255表示；否则这些像素点被排除在物体区域以外，灰度值为0，表示背景或者例外的物体区域。可以发现，如果某特定物体在内部有均匀一致的灰度值，并且其处在一个具有其他等级灰度值的均匀背景下，使用阈值法就可以得到比较好的分割效果。
总而言之，电子设备可以在显示屏上显示一张白色图像（第三图像），光学模组输出如图3所示的第四图像，这个图像的大部分区域是高亮的，其亮度值至少在128以上，并且这个高亮区域必然是一个矩形，而这个矩形区域就是第四图像的有效区域。实际上，本发明实施例中，显示屏的采集区域对应的就是光学模组的输出图像的有效区域。
也就是说,电子设备可以通过阈值分割,将第四图像二值化,并通过垂直方向和水平方向的亮度值投影,确定出该二值图像的边长,进而根据该二值图像的边长确定该采集区域的边长。
例如,电子设备可以根据该二值图像的水平梯度响应值和垂直梯度响应值,确定该二值图像的边长;然后将该二值图像的边长确定为该采集区域的边长。
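上述“二值化 + 投影”的测边思路可以用如下 NumPy 草图说明（这里以统计投影中非零行/列的个数代替梯度响应，仅为示意；函数名与图像尺寸均为本文假设）：

```python
import numpy as np

def effective_region_side(img, threshold=128):
    """二值化后按行/列投影，统计有亮点的行数与列数，估计有效区域边长。"""
    binary = img >= threshold                 # 阈值分割：亮度 >= 128 视为高亮
    rows = binary.sum(axis=1)                 # 垂直方向投影（每行高亮像素数）
    cols = binary.sum(axis=0)                 # 水平方向投影（每列高亮像素数）
    return max(int((rows > 0).sum()), int((cols > 0).sum()))

# 构造一幅 200x200 的输出图：大部分为黑，中间为 190x190 的高亮矩形（有效区域）
img = np.zeros((200, 200), dtype=np.uint8)
img[5:195, 5:195] = 200
assert effective_region_side(img) == 190
```

得到的 190 即文中示例里“有效区域的宽高为190”这一数值的由来。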
此外，本发明实施例中，由于显示屏的采集区域对应的就是光学模组的输出图像的有效区域，采集区域内图像的尺寸和有效区域内图像的尺寸有可能成缩放关系。
因此,电子设备也可以先根据该二值图像的水平梯度响应值和垂直梯度响应值,确定该二值图像的边长;然后根据缩放比例和该二值图像的边长,确定该采集区域的边长,该缩放比例为该采集区域的图像与该光学模组在该采集区域内采集的图像之间的比例。
可以理解,在本发明实施例中,该电子设备可以在该显示屏上显示第一图像之前,生成该第一图像;也可以直接调用已有的第一图像。本发明实施例不做具体限定。例如,该第一图像可以通过预先配置的方式放置在电子设备中。
在本发明实施例中,电子设备可以将第三位置直接确定为第二位置。其中,该第三位置为该目标参考点在该显示屏上的位置,该第二位置为该采集区域的中心点在该显示屏上的位置。换句话说,电子设备可以直接将该目标参考点在该显示屏上的位置确定为该采集区域的中心点在该显示屏上的位 置。即,将目标参考点确定为采集区域的中心点。
也就是说,本发明实施例中的目标参考点必须在有效区域的中心位置,否则,必然会存在误差。
因此,本发明实施例中还提供了一种修正第二位置的方法。
可选地,该电子设备可以根据该第三位置,确定该第二位置之前,先确定第一偏移量,该第一偏移量为用于修正该第二位置的偏移向量;然后根据该第一偏移量和第三位置,确定该第二位置。
例如，电子设备可以先确定第二偏移量，该第二偏移量为该第二图像中该目标参考点相对该第二图像的中心点的偏移向量；再根据该第二偏移量确定该第一偏移量。
具体地,根据缩放比例和该第二偏移量,确定该第一偏移量,该缩放比例为该采集区域的图像与该光学模组在该采集区域内采集的图像之间的比例。
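这一换算可以用几行代码示意（其中的坐标与 50/75 的缩放数值均为本文假设的示例）：

```python
# 假设：目标参考点在第二图像中相对中心点的第二偏移量为 (30, -15)（传感器像素），
# 显示间距为 50 像素的直线在采集图中的间距为 75 像素（即采集图被放大 1.5 倍）
dx2, dy2 = 30, -15
dx1 = dx2 * 50 / 75     # 换算为屏幕像素：30*50/75 = 20
dy1 = dy2 * 50 / 75     # -15*50/75 = -10
assert (dx1, dy1) == (20.0, -10.0)

x3, y3 = 400, 900                    # 假设的第三位置（目标参考点的屏幕坐标）
x2, y2 = x3 + dx1, y3 + dy1          # 用第一偏移量修正，得到第二位置
assert (x2, y2) == (420.0, 890.0)
```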
此外,由于光学模组的装配工艺不精确或设计修改的原因,可能导致不同批次的光学模组的装配位置相对显示屏会出现一个旋转角度。也就是说,电子设备在确定该第二位置之前,有可能需要获取该旋转角度。
本发明实施例中,作为示例而非限定性地,该电子设备可以在确定该第二图像中目标图像单元上的图案之前,获取旋转角度,该旋转角度为该光学模组在该采集区域内采集的图像相对该采集区域的图像旋转的角度。由此,该电子设备在确定该第二图像中目标图像单元上的图案时,可以首先获取第一处理图像,该第一处理图像为该第二图像按照该旋转角度旋转后的图像;然后通过分析该第一处理图像,确定该目标图像单元上的图案。
应理解，如果光学模组的装配工艺足够精确，光学模组的旋转角度可能为0，这种情况下，实际定位光学模组的过程中可以在不获取旋转角度的情况下，直接确定该目标图像单元上的图案。
本发明实施例中,还提供了一种电子设备获取该缩放比例和旋转角度的方法。
下面对该电子设备获取该缩放比例和旋转角度的实现方式进行说明。
可选地,电子设备可以在该显示屏上显示第五图像,该第五图像包括相互平行的k条第三线段,该k条第三线段中至少两条相邻的第三线段覆盖在该采集区域上,k≥2;通过该光学模组获取第六图像,该第六图像为该光学模组在该采集区域内采集的该第五图像的图像;根据第一距离和第二距离, 确定该缩放比例,该第一距离为该第五图像中相邻的两条第三线段之间的垂直距离,该第二距离为该第六图像中相邻的两条第三线段之间的垂直距离。
例如,该第一距离为该显示屏的50个像素点。
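由直线间距求缩放比例的过程可以用几行代码验算（数值沿用文中 50 与 75 的示例，仅为示意）：

```python
# 假设：第五图像中相邻第三线段的垂直距离 d1 = 50（屏幕像素），
# 第六图像中对应线段的垂直距离 d2 = 75（传感器像素）
d1, d2 = 50, 75
scale = d2 / d1              # 采集图相对屏幕图像被放大的倍数，即文中的 75/50
assert scale == 1.5
# 把采集图中的任一长度乘以 d1/d2 即换算回屏幕像素，
# 例如有效区域边长 190 对应的采集区域边长：
captured_len = 190
screen_len = captured_len * d1 / d2
assert round(screen_len) == 127
```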
进一步地,本发明实施例中的电子设备在需要获取光学模组的旋转角度时,可以使得该k条第三线段中每条第三线段的一侧垂直设置有j条第四线段,该j条第四线段中至少一条第四线段覆盖在该采集区域上。
例如,该j条第四线段为相互平行的线段,该j条第四线段中相邻的两条第四线段之间的垂直距离为该显示屏的25个像素点。
例如,该第五图像如图4所示,该第六图像如图5所示。
具体地,电子设备可以通过霍夫变换分析该第六图像,并得到第一角度;获取第二处理图像,该第二处理图像为该第四图像按照该第一角度旋转后的图像;通过对比该第二处理图像和该第五图像,确定该旋转角度。
更具体地,电子设备可以对第六图像采用霍夫变换(Hough Transform)分析其旋转角度,将第六图像按照计算得到的第一角度旋转,并进行索贝尔算子(Sobel operator)轮廓提取与阈值分割,然后再根据垂直投影,分析旋转角度是否需要加减180度,最终确定出该旋转角度。
下面对霍夫变换(Hough Transform)进行介绍。
Hough变换是图像处理中的一种特征提取技术，它通过一种投票算法检测具有特定形状的物体。经典霍夫变换用来检测图像中的直线，后来霍夫变换扩展到任意形状物体的识别，多为圆和椭圆。在实现过程中，Hough变换涉及两个坐标空间之间的变换。具体地，将在一个坐标空间中具有相同形状的曲线或直线映射到另一个坐标空间的一个点上形成峰值，进而把检测任意形状的问题转化为统计峰值问题。本发明实施例中，可以利用霍夫变换确定第六图像的第一角度。
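作为示意，下面用纯 NumPy 实现一个极简的 Hough 投票过程，估计一组共线点的直线法线方向（实际工程中通常直接使用 OpenCV 等库提供的 Hough 变换，本段代码只是帮助理解的草图，函数名与数据均为本文假设）：

```python
import numpy as np

def hough_angle(points, n_theta=180):
    """极简 Hough 投票：返回得票最多的直线法线方向（0~179 度）。
    直线用法线式 rho = x*cos(theta) + y*sin(theta) 表示。"""
    thetas = np.deg2rad(np.arange(n_theta))
    votes = {}
    for x, y in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        for t, r in enumerate(rhos):
            votes[(t, int(r))] = votes.get((t, int(r)), 0) + 1
    (t_best, _), _ = max(votes.items(), key=lambda kv: kv[1])
    return t_best

# 一条 45 度方向的点列 y = x，其法线方向应为 135 度
pts = [(i, i) for i in range(50)]
assert hough_angle(pts) == 135
```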
下面对索贝尔算子(Sobel operator)进行介绍。
Sobel算子是图像处理中的算子之一，主要用作边缘检测。在技术上，它是一种离散性差分算子，用来运算图像亮度函数的梯度的近似值。在图像的任何一点使用此算子，将会产生对应的梯度矢量或是其法矢量。
Sobel算子涉及的术语有:
边缘:灰度或结构等信息的突变处,利用该特征可以分割图像。
本领域技术人员可以理解，物体的边缘是以图像局部特性的不连续性的形式出现的。例如，灰度值的突变，颜色的突变，纹理结构的突变等。从本质上说，边缘就意味着一个区域的终结和另外一个区域的开始。图像的边缘信息在图像分析和人的视觉中十分重要，是图像识别中提取图像特征的一个重要属性。
此外,图像的边缘有方向和幅度两个特性,沿边缘走向的像素变化平缓,而垂直于边缘走向的像素变化剧烈。这种变化可能呈现为跳跃型、房顶型和凸缘型。这些变化分别对应景物中不同的物理状态。例如,跳跃型变化常常对应图像的深度或者是反射边界,而后两者则常常反映图像的表面法线方向不连续。要注意的是,实际要分析的图像往往是比较复杂的,需要根据实际情况进行具体分析。
边缘点:图像中具有坐标[x,y],且处在强度显著变化的位置上的点。
边缘段:对应于边缘点坐标[x,y]及其方位,边缘的方位可能是梯度角。
在实现过程中,Sobel算子计算完图像中所有的像素点处的梯度值G(x,y)后,选择一个阈值T,如果(x,y)处的G(x,y)>T,则认为该点是边缘点或边缘段。另外,由于Sobel算子只需要采用2个方向的亮度值投影,即水平梯度响应及垂直梯度响应,使得边缘检测的计算简单,速度快。
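Sobel 算子的计算过程可以用如下 NumPy 草图说明（逐点卷积、边界一圈不处理，仅为示意，非本发明实施例的实际实现）：

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # 水平梯度核
SOBEL_Y = SOBEL_X.T                                        # 垂直梯度核

def sobel_magnitude(img):
    """逐点计算 Sobel 梯度幅值 G(x, y)（边界一圈置零，仅为示意）。"""
    h, w = img.shape
    g = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = (patch * SOBEL_X).sum()   # 水平梯度响应
            gy = (patch * SOBEL_Y).sum()   # 垂直梯度响应
            g[y, x] = np.hypot(gx, gy)
    return g

# 左黑右白的图像：梯度峰值出现在明暗交界附近的列上
img = np.zeros((8, 8)); img[:, 4:] = 255
g = sobel_magnitude(img)
edges = g > 500                  # 阈值分割：G(x, y) > T 判为边缘点
assert edges[:, 3:5].any() and not edges[:, 6:].any()
```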
本发明实施例中,通过对第二处理图像进行Sobel轮廓提取与阈值分割,能够分析出本发明实施例的第一角度是否需要加减180度,进而确定出光学模组的旋转角度。
应理解,Sobel算子仅是本发明实施例的示例性说明,本发明实施例不限于此,例如,还可以是罗伯特(Robert)算子、普鲁伊特(Prewitt)算子、高斯拉普拉斯(Laplacian of Gaussian,LOG)算子等。
还应理解,为了进一步提高图像处理的准确度,在本发明实施例中,还可以在检测图像之前,通过阈值分割的方式对原始图像进行“二值化”处理。即,将灰度图像二值化,得到二值图像,并在二值图像的基础上进行图像检测。
本发明实施例中,还提供了一种验证第二位置的方法。
下面对本发明实施例的验证第二位置的方法进行说明。
可选地,电子设备可以在该显示屏上显示第七图像,该第七图像包括在该采集区域的中心点十字交叉的两条直线;通过该光学模组获取第八图像,该第八图像为该光学模组在该采集区域内采集的该第七图像的图像;通过分 析该第八图像验证该第二位置。
例如,该第七图像如图8所示,该第八图像如图9所示。
具体地,电子设备可以将采集得到的图像按照前面计算的旋转角度旋转,再检测是否有两根直线垂直交叉,如果没有则通知上层接口定位失败;如果有则定位成功,并且通过计算交叉点坐标与有效区域中心点坐标的偏移,进一步精确调整采集区域的中心坐标。
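该验证步骤的核心是找到十字交叉点并计算其相对有效区域中心的偏移，可以用如下草图示意（假设十字已经旋转校正为水平/垂直，`cross_center` 等名称与数值均为本文假设）：

```python
import numpy as np

def cross_center(binary):
    """十字经旋转校正后由一条水平亮线和一条垂直亮线组成，
    行/列投影的最大值位置即交叉点坐标。"""
    row = int(np.argmax(binary.sum(axis=1)))   # 横线所在行
    col = int(np.argmax(binary.sum(axis=0)))   # 竖线所在列
    return row, col

img = np.zeros((9, 9), dtype=np.uint8)
img[3, :] = 1      # 横线
img[:, 5] = 1      # 竖线
assert cross_center(img) == (3, 5)

center = (4, 4)                             # 有效区域中心点
offset = (3 - center[0], 5 - center[1])     # 交叉点相对中心的偏移
assert offset == (-1, 1)                    # 换算为屏幕像素后即可精调第二位置
```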
应理解,本发明实施例中,该显示屏上显示的图像的分辨率可以与该显示屏的分辨率相同,也可以与该显示屏的分辨率不同。
还应理解,电子设备在采集图像时,还可以先以显示屏全黑时采集的图像作为背景基图,用于通过减去背景基图的方法去除大量图像噪声。
图10是本发明实施例的检测光学模组的方法的另一示例性流程图。
如图10所示,以电子设备为手机为例,本发明实施例的算法流程包括:
301,手机初始化算法环境。
302,输入全黑背景图。
具体地,使得手机全屏变黑,让光学模组采集一次图像,作为背景基图。
303,输入全白背景图。
具体地,让手机全屏变白,让光学模组采集一次图像,得到如图3所示的图像,采集到的图像用于分析有效区域信息,包括有效区域左上角的坐标和有效区域的宽高(例如,有效区域的宽高为190)。
304,直线测试图覆盖手机屏幕,抓取图像。
具体地,该手机可以构建一张由长线短线构成,大小可以覆盖手机屏幕的测试图,让手机全屏显示该图并进行一次图像采集,采集图像用于分析光学模组的旋转角度和缩放比例;根据之前获取的缩放比例、有效区域信息计算得到光学模组的物理尺寸。
例如，该手机可以创建一张跟手机屏幕分辨率一样大小的图像，具体地，如图4所示，从左往右，每隔50像素设计一条垂直直线，直线长度等于分辨率高度；在每根直线上，从上往下，每隔25像素在直线右侧画一根短横线，长度为25像素。将这样一张图像显示于手机屏幕上，让光学模组进行一次采图，并输出如图5所示的图像。
305,计算光学模组的旋转角度,缩放比例。
具体地，该手机对输出的图像采用Hough变换分析其旋转角度，将图像按照计算得到的角度旋转，并进行Sobel轮廓提取与阈值分割；再根据小段横线的垂直投影，分析旋转角度是否需要加减180度；同时，通过检测两根垂直直线的距离（例如，75），计算得到采集图像的缩放比例（即75/50）。
306,根据该缩放比例和屏幕分辨率,构建方格阵列图。
307,方格阵列图覆盖手机屏幕,抓取图像。
308,图像检测,提取方格信息,计算光学模组的中心坐标。
具体地，该手机可以根据该结果构建一张由方格组成、大小可以覆盖手机屏幕的阵列图，每个方格内部有一系列横线或者竖线；让手机全屏显示该方格阵列图并进行一次图像采集，采集的图像用于分析计算光学模组相对于手机屏幕左上角的坐标。
更具体地,该手机可以根据缩放比例以及之前计算的有效区域宽高,得到光学模组的实际大小(采集区域的边长)为:190*50/75=127,根据该参数计算方格阵列中最佳方格宽度:127/2.5=51。根据最佳方格宽度,设计一张可以覆盖手机屏幕的方格阵列图,再让光学模组采集一次图像。
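步骤308中的这一换算可以直接验算（数值取自文中示例，取整方式为本文假设）：

```python
collect_side = 127                       # 采集区域边长（屏幕像素），由 190*50/75 取整而来
best_cell = round(collect_side / 2.5)    # 最佳方格宽度：127/2.5 = 50.8，取整为 51
assert best_cell == 51
```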
例如,小方格内部线段的设计原理可以如图6所示:方格分为左右两侧,左侧信息值代表方格所在列序号,右侧信息值代表方格所在行序号;每一侧区域再均分为上下两部分,下部分每根横线代表数值1,每根竖线代表数值4,即下部分类似于4进制;当下部分直线代表的数值满16时,清空下方直线,同时上方直线加1;故而规律为:区域内部4进制,上下区域之间为16进制。将光学模组采集的如图7所示的图像按照之前计算得到的旋转角度进行旋转,然后检测图像中的方格信息。通过获取方格内部所有的直线信息,就可以计算出该方格在整个方格阵列中的位置。再参考方格左上角的坐标值,即可计算得到当前方格左上角在整个手机屏幕中的坐标。最后,计算该坐标相对于采集图像有效区域中心坐标的偏移,乘以缩放比例,即可得到图像采集有效区域中心点相对于整个手机屏幕的坐标,从而实现了光学模组的初步定位。
309,根据缩放比例、屏幕分辨率以及光学模组坐标,构建验证图。
310,验证图覆盖手机屏幕,抓取图像。
具体地,该手机可以根据计算出的光学模组坐标设计一个大小覆盖手机屏幕的验证图,让手机显示该图并进行一次图像采集,根据抓图进行坐标验证与精细调整处理。
例如,该手机可以构建一幅大小为手机屏幕分辨率的全黑图像,在上一步得到的坐标值位置,画两根白色的直线(长度为50)相互垂直交叉,具体地,如图8所示。让手机显示这张图像,并进行一次图像采集,得到如图9所示的图像。
311,验证是否通过。
312,如果没有通过验证,则通报错误,并进入人工查找。
313,如果通过验证,则调整光学模组的坐标。
具体地,将采集得到的图像按照前面计算的角度旋转,再检测是否有两根直线垂直交叉,如果没有则通知上层接口定位失败;如果有则定位成功,并且通过计算交叉点坐标与有效区域中心点坐标的偏移,进一步精确调整光学模组的中心坐标。
314,注销算法环境。
由此,手机可以输出光学模组的中心坐标,物理大小,旋转角度,缩放比例,图像采集有效区域等信息。
下面对本发明实施例的检测光学模组的装置进行说明。
图11是本发明实施例的检测光学模组的装置400的示意性框图。
如图11所示,该装置400包括:
显示模块410,用于在该显示屏420上显示第一图像,该第一图像包括多个图像单元,该多个图像单元中的每个图像单元上设置有图案,上述每个图像单元上的图案用于指示上述每个图像单元中的参考点在该第一图像上的位置。
处理模块440,该处理模块440用于:
通过该光学模组430获取第二图像,该第二图像为该光学模组430在该显示屏420的采集区域内采集的该第一图像的图像;确定该第二图像中目标图像单元上的图案;根据该目标图像单元上的图案,确定该目标图像单元中目标参考点的第一位置,该第一位置为该目标参考点在该第一图像上的位置;根据该第一位置,确定第二位置,该第二位置为该采集区域的中心点在该显示屏420上的位置。
可选地,该处理模块440具体用于:
根据该第一位置和映射关系信息，确定第三位置，该第三位置为该目标参考点在该显示屏420上的位置，该映射关系信息包括该第一位置和该第一位置对应的该第三位置；根据该第三位置，确定该第二位置。
可选地,上述每个图像单元上的图案包括横线段和/或竖线段。
可选地，上述每个图像单元包括左半侧区域和右半侧区域，其左半侧区域内的线段用于指示图像单元在该第一图像上所属的列，其右半侧区域内的线段用于指示图像单元在该第一图像上所属的行。
可选地,上述每个图像单元包括上半侧区域和下半侧区域,该下半侧区域内的一条横线段表示数字1,该下半侧区域内的一条竖线段表示数字4,该上半侧区域内的一条横线段表示数字16。
可选地,该处理模块440还用于:
该显示模块410在该显示屏420上显示第一图像之前,生成该第一图像。
可选地,该第一图像为阵列图像,该多个图像单元中的每个图像单元为正方形图像单元,该采集区域为正方形区域;其中,该处理模块440具体用于:
确定该采集区域的边长;根据该采集区域的边长,确定上述每个图像单元的边长,该采集区域的边长大于上述每个图像单元的边长;根据上述每个图像单元的边长,生成该第一图像。
可选地,该采集区域的边长为上述每个图像单元的边长的2.5倍。
可选地,该显示模块410还用于:
在该显示屏420上显示第三图像,该第三图像中每个像素点的亮度值均大于第一亮度值;其中,该处理模块440更具体用于:
通过该光学模组430获取第四图像,该第四图像为该光学模组430在该采集区域内采集的该第三图像的图像;对该第四图像进行二值化,获取二值图像;根据该二值图像的水平梯度响应值和垂直梯度响应值,确定该采集区域的边长。
可选地,该第一亮度值为128。
可选地,该处理模块440更具体用于:
根据该二值图像的水平梯度响应值和垂直梯度响应值,确定该二值图像的边长;
根据缩放比例和该二值图像的边长,确定该采集区域的边长,该缩放比例为该采集区域的图像与该光学模组430在该采集区域内采集的图像之间的比例。
可选地,该处理模块440具体用于:
根据该第三位置，确定该第二位置之前，确定第一偏移量，该第一偏移量用于修正该第二位置；根据该第一偏移量和第三位置，确定该第二位置。
可选地,该处理模块440更具体用于:
确定第二偏移量,该第二偏移量为该第二图像中该目标参考点相对该第二图像的中心点的偏移向量;根据该第二偏移量确定该第一偏移量。
可选地,该处理模块440更具体用于:
根据缩放比例和该第二偏移量,确定该第一偏移量,该缩放比例为该采集区域的图像与该光学模组430在该采集区域内采集的图像之间的比例。
可选地,该处理模块440具体用于:
确定该第二图像中目标图像单元上的图案之前,获取旋转角度,该旋转角度为该光学模组430在该采集区域内采集的图像相对该采集区域的图像旋转的角度;获取第一处理图像,该第一处理图像为该第二图像按照该旋转角度旋转后的图像;通过分析该第一处理图像,确定该目标图像单元上的图案。
可选地,该处理模块440还用于:
根据该第一位置,确定该第二位置之前,获取缩放比例,该缩放比例为该采集区域的图像与该光学模组430在该采集区域内采集的图像之间的比例。
可选地,该显示模块410还用于:
在该显示屏420上显示第五图像,该第五图像包括相互平行的k条第三线段,该k条第三线段中至少两条相邻的第三线段覆盖在该采集区域上,k≥2;其中,该处理模块440具体用于:
通过该光学模组430获取第六图像,该第六图像为该光学模组430在该采集区域内采集的该第五图像的图像;根据第一距离和第二距离,确定该缩放比例,该第一距离为该第五图像中相邻的两条第三线段之间的垂直距离,该第二距离为该第六图像中相邻的两条第三线段之间的垂直距离。
可选地,该第一距离为该显示屏420的50个像素点。
可选地,该k条第三线段中每条第三线段的一侧垂直设置有j条第四线段,该j条第四线段中至少一条第四线段覆盖在该采集区域上。
可选地,该j条第四线段为相互平行的线段,该j条第四线段中相邻的两条第四线段之间的垂直距离为该显示屏420的25个像素点。
可选地,该处理模块440还用于:
获取旋转角度,该旋转角度为该光学模组430在该采集区域内采集的图像相对该采集区域的图像旋转的角度。
可选地,该处理模块440具体用于:
通过霍夫变换分析该第六图像,并得到第一角度;获取第二处理图像,该第二处理图像为该第四图像按照该第一角度旋转后的图像;通过对比该第二处理图像和该第五图像,确定该旋转角度。
可选地,该显示模块410还用于:
在该显示屏420上显示第七图像,该第七图像包括在该采集区域的中心点十字交叉的两条直线;其中,该处理模块440还用于:
通过该光学模组430获取第八图像,该第八图像为该光学模组430在该采集区域内采集的该第七图像的图像;通过分析该第八图像验证该第二位置。
可选地,该显示屏上显示的图像的分辨率与该显示屏的分辨率相同。
本发明实施例中,术语“模块”可以包括以下部件中的至少一种:
专用集成电路(Application Specific Integrated Circuit,ASIC)、电子电路、用于执行一个或多个软件或固件程序的处理器(例如共享处理器、专有处理器或组处理器等)和存储器、合并逻辑电路和其它支持所描述的功能的组件。
应理解,这里的装置400以功能单元的形式体现。在一个可选例子中,本领域技术人员可以理解,装置400可以具体为上述方法实施例中提及的电子设备,还可以为图1所示的电子设备100。装置400可以用于执行上述方法实施例中各个流程和/或步骤,为避免重复,在此不再赘述。
在实现过程中，本发明实施例中的方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。更具体地，结合本发明实施例公开的方法的步骤可以直接体现为硬件译码处理器执行完成，或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器、闪存、只读存储器、可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域的成熟的存储介质中。该存储介质位于存储器，处理器读取存储器中的信息，结合其硬件完成上述方法的步骤。
也就是说，本发明实施例中的方法实施例可以应用于处理器中，或者由处理器实现。
图12示出了本发明实施例的另一检测光学模组的装置500,包括:处理器510、存储器520、显示屏530与光学模组540。其中,光学模组540的采集区域位于显示屏530的至少部分显示区域,该存储器520用于存储指令,该处理器510用于执行该存储器520存储的指令,其中,对该指令的执行使得该处理器510执行以下操作:
在该显示屏530上显示第一图像,该第一图像包括多个图像单元,该多个图像单元中的每个图像单元上设置有图案,上述每个图像单元上的图案用于指示上述每个图像单元中的参考点在该第一图像上的位置。
通过该光学模组540获取第二图像,该第二图像为该光学模组540在该显示屏530的采集区域内采集的该第一图像的图像;确定该第二图像中目标图像单元上的图案;根据该目标图像单元上的图案,确定该目标图像单元中目标参考点的第一位置,该第一位置为该目标参考点在该第一图像上的位置;根据该第一位置,确定第二位置,该第二位置为该采集区域的中心点在该显示屏530上的位置。
因此,电子设备可以在显示屏上显示预先设计好的图像(显示图像);然后通过光学模组对显示屏上显示的图像进行图像采集,并得到采集图像;再通过对采集图像的分析,确定出采集区域的中心点在该显示屏上的位置,最终实现光学模组的定位操作。
图12所示的装置500能够实现前述方法实施例中由终端设备所实现的各个过程,为避免重复,这里不再赘述。
在一个可选例子中,本领域技术人员可以理解,装置500可以具体为上述实施例中的装置400,还可对应于上述方法实施例中的电子设备。装置500可以用于执行上述方法实施例中的各个流程和/或步骤,为避免重复,在此不再赘述。
本发明实施例中,还提供一种电子设备,该电子设备包括上述实施例中的装置400或装置500。
可选地,作为一个实施例,该显示屏可以是有机发光二极管(Organic Light-Emitting Diode,OLED)显示屏,该光学模组利用该OLED显示屏的至少部分OLED像素单元作为光源来执行检测功能。
应理解，本发明实施例中提及的处理器可能是一种集成电路芯片，具有信号的处理能力，可以实现或者执行本发明实施例中公开的各方法、步骤及逻辑框图。例如，上述的处理器可以是通用处理器、数字信号处理器（digital signal processor，DSP）、专用集成电路（application specific integrated circuit，ASIC）、现成可编程门阵列（field programmable gate array，FPGA）或者其他可编程逻辑器件、晶体管逻辑器件、分立硬件组件等等。此外，通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
此外,本发明实施例中提及的存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。应理解,上述存储器为示例性但不是限制性说明,例如,本发明实施例中的存储器还可以是静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synch link DRAM,SLDRAM)以及直接内存总线随机存取存储器(Direct Rambus RAM,DR RAM)等等。也就是说,本文描述的***和方法的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
最后,需要注意的是,在本发明实施例和所附权利要求书中使用的术语是仅仅出于描述特定实施例的目的,而非旨在限制本发明实施例。
例如,在本发明实施例和所附权利要求书中所使用的单数形式的“一种”、“所述”、“上述”和“该”也旨在包括多数形式,除非上下文清楚地表示其他含义。
又例如,在本发明实施例中可能采用术语第一图像和第二图像,但这些图像不应限于这些术语。这些术语仅用来将图像彼此区分开。
本领域普通技术人员可以意识到，结合本文中所公开的实施例描述的各示例的单元及算法步骤，能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行，取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能，但是这种实现不应认为超出本发明实施例的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的***、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请提供的几个实施例中,应该理解到,所揭露的***、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如,多个单元或组件可以结合或者可以集成到另一个***,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本发明实施例的目的。
另外,在本发明实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器、随机存取存储器、磁碟或者光盘等各种可以存储程序代码的介质。
以上内容,仅为本发明实施例的具体实施方式,但本发明实施例的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明实施例揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明实施例的保护范围之内。因此,本发明实施例的保护范围应以权利要求的保护范围为准。

Claims (27)

  1. 一种检测光学模组的方法,其特征在于,应用于包括显示屏和光学模组的电子设备;
    所述方法包括:
    在所述显示屏上显示第一图像,所述第一图像包括多个图像单元,所述多个图像单元中的每个图像单元上设置有图案,所述每个图像单元上的图案用于指示所述每个图像单元中的参考点在所述第一图像上的位置;
    通过所述光学模组获取第二图像,所述第二图像为所述光学模组在所述显示屏的采集区域内采集的所述第一图像的图像;
    确定所述第二图像中目标图像单元上的图案;
    根据所述目标图像单元上的图案,确定所述目标图像单元中目标参考点的第一位置,所述第一位置为所述目标参考点在所述第一图像上的位置;
    根据所述第一位置,确定第二位置,所述第二位置为所述采集区域的中心点在所述显示屏上的位置。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述第一位置,确定第二位置,包括:
    根据所述第一位置和映射关系信息,确定第三位置,所述第三位置为所述目标参考点在所述显示屏上的位置,所述映射关系信息包括所述第一位置和所述第一位置对应的所述第三位置;
    根据所述第三位置,确定所述第二位置。
  3. 根据权利要求1或2所述的方法，其特征在于，所述每个图像单元上的图案包括横线段和/或竖线段；所述每个图像单元包括左半侧区域和右半侧区域，所述左半侧区域内的线段用于指示图像单元在所述第一图像上所属的列，所述右半侧区域内的线段用于指示图像单元在所述第一图像上所属的行。
  4. 根据权利要求1或2所述的方法,其特征在于,所述每个图像单元上的图案包括横线段和/或竖线段;所述每个图像单元包括上半侧区域和下半侧区域,所述下半侧区域内的一条横线段表示第一数字,所述下半侧区域内的一条竖线段表示第二数字,所述上半侧区域内的一条横线段表示第三数字。
  5. 根据权利要求1至4中任一项所述的方法，所述第一图像为阵列图像，所述多个图像单元中的每个图像单元为正方形图像单元，所述采集区域为正方形区域；
    其中,所述方法还包括:
    在所述显示屏上显示第一图像之前,确定所述采集区域的边长;
    根据所述采集区域的边长,确定所述每个图像单元的边长,所述采集区域的边长大于所述每个图像单元的边长;
    根据所述每个图像单元的边长,生成所述第一图像。
  6. 根据权利要求5所述的方法,其特征在于,所述确定所述采集区域的边长,包括:
    在所述显示屏上显示第三图像,所述第三图像中每个像素点的亮度值均大于第一亮度值;
    通过所述光学模组获取第四图像,所述第四图像为所述光学模组在所述采集区域内采集的所述第三图像的图像;
    对所述第四图像进行二值化,获取二值图像;
    根据所述二值图像的水平梯度响应值和垂直梯度响应值,确定所述采集区域的边长。
  7. 根据权利要求6所述的方法,其特征在于,所述根据所述二值图像的水平梯度响应值和垂直梯度响应值,确定所述采集区域的边长,包括:
    根据所述二值图像的水平梯度响应值和垂直梯度响应值,确定所述二值图像的边长;
    根据缩放比例和所述二值图像的边长,确定所述采集区域的边长,所述缩放比例为所述采集区域的图像与所述光学模组在所述采集区域内采集的图像之间的比例。
  8. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    确定第一偏移量,所述第一偏移量用于修正所述第二位置;
    其中,所述根据所述第三位置,确定所述第二位置,包括:
    根据所述第一偏移量和第三位置,确定所述第二位置。
  9. 根据权利要求8所述的方法,其特征在于,所述确定第一偏移量,包括:
    确定第二偏移量,所述第二偏移量为所述第二图像中所述目标参考点相对所述第二图像的中心点的偏移向量;
    根据缩放比例和所述第二偏移量，确定所述第一偏移量，所述缩放比例为所述采集区域的图像与所述光学模组在所述采集区域内采集的图像之间的比例。
  10. 根据权利要求1至9中任一项所述的方法,其特征在于,所述方法还包括:
    获取旋转角度,所述旋转角度为所述光学模组在所述采集区域内采集的图像相对所述采集区域的图像旋转的角度;
    其中,所述确定所述第二图像中目标图像单元上的图案,包括:
    获取第一处理图像,所述第一处理图像为所述第二图像按照所述旋转角度旋转后的图像;
    通过分析所述第一处理图像,确定所述目标图像单元上的图案。
  11. 根据权利要求1至10中任一项所述的方法,其特征在于,所述根据所述第一位置,确定所述第二位置之前,所述方法还包括:
    获取缩放比例,所述缩放比例为所述采集区域的图像与所述光学模组在所述采集区域内采集的图像之间的比例;所述获取缩放比例,包括:
    在所述显示屏上显示第五图像,所述第五图像包括相互平行的k条第三线段,所述k条第三线段中至少两条相邻的第三线段覆盖在所述采集区域上,k≥2;
    通过所述光学模组获取第六图像,所述第六图像为所述光学模组在所述采集区域内采集的所述第五图像的图像;
    根据第一距离和第二距离,确定所述缩放比例,所述第一距离为所述第五图像中相邻的两条第三线段之间的垂直距离,所述第二距离为所述第六图像中相邻的两条第三线段之间的垂直距离。
  12. 根据权利要求11所述的方法,其特征在于,所述方法还包括:
    获取旋转角度,所述旋转角度为所述光学模组在所述采集区域内采集的图像相对所述采集区域的图像旋转的角度;所述获取旋转角度,包括:
    通过霍夫变换分析所述第六图像,并得到第一角度;
    获取第二处理图像,所述第二处理图像为所述第四图像按照所述第一角度旋转后的图像;
    通过对比所述第二处理图像和所述第五图像,确定所述旋转角度。
  13. 根据权利要求1至12中任一项所述的方法,其特征在于,所述方法还包括:
    在所述显示屏上显示第七图像,所述第七图像包括在所述采集区域的中心点十字交叉的两条直线;
    通过所述光学模组获取第八图像,所述第八图像为所述光学模组在所述采集区域内采集的所述第七图像的图像;
    通过分析所述第八图像验证所述第二位置。
  14. 一种检测光学模组的装置,其特征在于,包括;
    显示模块,用于在所述显示屏上显示第一图像,所述第一图像包括多个图像单元,所述多个图像单元中的每个图像单元上设置有图案,所述每个图像单元上的图案用于指示所述每个图像单元中的参考点在所述第一图像上的位置;
    处理模块,所述处理模块用于:
    通过所述光学模组获取第二图像,所述第二图像为所述光学模组在所述显示屏的采集区域内采集的所述第一图像的图像;
    确定所述第二图像中目标图像单元上的图案;
    根据所述目标图像单元上的图案,确定所述目标图像单元中目标参考点的第一位置,所述第一位置为所述目标参考点在所述第一图像上的位置;
    根据所述第一位置,确定第二位置,所述第二位置为所述采集区域的中心点在所述显示屏上的位置。
  15. 根据权利要求14所述的装置,其特征在于,所述处理模块具体用于:
    根据所述第一位置和映射关系信息,确定第三位置,所述第三位置为所述目标参考点在所述显示屏上的位置,所述映射关系信息包括所述第一位置和所述第一位置对应的所述第三位置;
    根据所述第三位置,确定所述第二位置。
  16. 根据权利要求14或15所述的装置，其特征在于，所述每个图像单元上的图案包括横线段和/或竖线段；所述每个图像单元包括左半侧区域和右半侧区域，所述左半侧区域内的线段用于指示图像单元在所述第一图像上所属的列，所述右半侧区域内的线段用于指示图像单元在所述第一图像上所属的行。
  17. 根据权利要求14或15所述的装置，其特征在于，所述每个图像单元上的图案包括横线段和/或竖线段；所述每个图像单元包括上半侧区域和下半侧区域，所述下半侧区域内的一条横线段表示第一数字，所述下半侧区域内的一条竖线段表示第二数字，所述上半侧区域内的一条横线段表示第三数字。
  18. 根据权利要求14至17中任一项所述的装置,其特征在于,
    所述第一图像为阵列图像,所述多个图像单元中的每个图像单元为正方形图像单元,所述采集区域为正方形区域;
    其中,所述处理模块还用于:
    在所述显示屏上显示第一图像之前,确定所述采集区域的边长;
    根据所述采集区域的边长,确定所述每个图像单元的边长,所述采集区域的边长大于所述每个图像单元的边长;
    根据所述每个图像单元的边长,生成所述第一图像。
  19. 根据权利要求18所述的装置,其特征在于,所述显示模块还用于:
    在所述显示屏上显示第三图像,所述第三图像中每个像素点的亮度值均大于第一亮度值;
    其中,所述处理模块更具体用于:
    通过所述光学模组获取第四图像,所述第四图像为所述光学模组在所述采集区域内采集的所述第三图像的图像;
    对所述第四图像进行二值化,获取二值图像;
    根据所述二值图像的水平梯度响应值和垂直梯度响应值,确定所述采集区域的边长。
  20. 根据权利要求19所述的装置,其特征在于,所述处理模块更具体用于:
    根据所述二值图像的水平梯度响应值和垂直梯度响应值,确定所述二值图像的边长;
    根据缩放比例和所述二值图像的边长,确定所述采集区域的边长,所述缩放比例为所述采集区域的图像与所述光学模组在所述采集区域内采集的图像之间的比例。
  21. 根据权利要求15所述的装置,其特征在于,所述处理模块具体用于:
    在根据所述第三位置,确定所述第二位置之前,确定第一偏移量,所述第一偏移量用于修正所述第二位置;
    根据所述第一偏移量和第三位置,确定所述第二位置。
  22. 根据权利要求21所述的装置,其特征在于,所述处理模块更具体用于:
    确定第二偏移量,所述第二偏移量为所述第二图像中所述目标参考点相对所述第二图像的中心点的偏移向量;
    根据缩放比例和所述第二偏移量,确定所述第一偏移量,所述缩放比例为所述采集区域的图像与所述光学模组在所述采集区域内采集的图像之间的比例。
  23. 根据权利要求14至22中任一项所述的装置,其特征在于,所述处理模块具体用于:
    确定所述第二图像中目标图像单元上的图案之前,获取旋转角度,所述旋转角度为所述光学模组在所述采集区域内采集的图像相对所述采集区域的图像旋转的角度;
    获取第一处理图像,所述第一处理图像为所述第二图像按照所述旋转角度旋转后的图像;
    通过分析所述第一处理图像,确定所述目标图像单元上的图案。
  24. 根据权利要求14至23中任一项所述的装置,其特征在于,
    所述显示模块还用于:
    在所述显示屏上显示第五图像,所述第五图像包括相互平行的k条第三线段,所述k条第三线段中至少两条相邻的第三线段覆盖在所述采集区域上,k≥2;
    其中,所述处理模块还用于:
    通过所述光学模组获取第六图像,所述第六图像为所述光学模组在所述采集区域内采集的所述第五图像的图像;
    根据第一距离和第二距离,确定缩放比例,所述缩放比例为所述采集区域的图像与所述光学模组在所述采集区域内采集的图像之间的比例,所述第一距离为所述第五图像中相邻的两条第三线段之间的垂直距离,所述第二距离为所述第六图像中相邻的两条第三线段之间的垂直距离。
  25. 根据权利要求24所述的装置,其特征在于,所述处理模块还用于:
    通过霍夫变换分析所述第六图像,并得到第一角度;
    获取第二处理图像,所述第二处理图像为所述第四图像按照所述第一角度旋转后的图像;
    通过对比所述第二处理图像和所述第五图像，确定旋转角度，所述旋转角度为所述光学模组在所述采集区域内采集的图像相对所述采集区域的图像旋转的角度。
  26. 根据权利要求14至25中任一项所述的装置,其特征在于,所述显示模块还用于:
    在所述显示屏上显示第七图像,所述第七图像包括在所述采集区域的中心点十字交叉的两条直线;
    其中,所述处理模块还用于:
    通过所述光学模组获取第八图像,所述第八图像为所述光学模组在所述采集区域内采集的所述第七图像的图像;
    通过分析所述第八图像验证所述第二位置。
  27. 一种电子设备,其特征在于,包括权利要求14至26中任一项所述的检测光学模组的装置。
PCT/CN2017/101638 2017-09-13 2017-09-13 检测光学模组的方法、装置和电子设备 WO2019051688A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/101638 WO2019051688A1 (zh) 2017-09-13 2017-09-13 检测光学模组的方法、装置和电子设备
CN201780001068.3A CN107690656B (zh) 2017-09-13 2017-09-13 检测光学模组的方法、装置和电子设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/101638 WO2019051688A1 (zh) 2017-09-13 2017-09-13 检测光学模组的方法、装置和电子设备

Publications (1)

Publication Number Publication Date
WO2019051688A1 true WO2019051688A1 (zh) 2019-03-21

Family

ID=61154082

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/101638 WO2019051688A1 (zh) 2017-09-13 2017-09-13 检测光学模组的方法、装置和电子设备

Country Status (2)

Country Link
CN (1) CN107690656B (zh)
WO (1) WO2019051688A1 (zh)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103002199A (zh) * 2012-10-19 2013-03-27 北京小米科技有限责任公司 基于摄像模组采集图像的方法和装置及移动终端
US20150146943A1 (en) * 2013-11-22 2015-05-28 Samsung Electronics Co., Ltd. Method of recognizing contactless fingerprint and electronic device for performing the same
CN105335072A (zh) * 2014-06-12 2016-02-17 联想(北京)有限公司 一种定位方法和装置
CN205750806U (zh) * 2016-03-07 2016-11-30 北京集创北方科技股份有限公司 指纹识别装置及移动终端
CN106839976A (zh) * 2016-12-22 2017-06-13 歌尔科技有限公司 一种检测镜头中心的方法及装置
CN106919286A (zh) * 2017-03-07 2017-07-04 上海欢米光学科技有限公司 调整触摸屏图像位置的方法与设备


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110706158A (zh) * 2019-10-15 2020-01-17 Oppo广东移动通信有限公司 图像处理方法、图像处理装置及终端设备
CN110706158B (zh) * 2019-10-15 2023-04-07 Oppo广东移动通信有限公司 图像处理方法、图像处理装置及终端设备

Also Published As

Publication number Publication date
CN107690656B (zh) 2019-04-02
CN107690656A (zh) 2018-02-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17924960; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17924960; Country of ref document: EP; Kind code of ref document: A1)