CN111489395B - Image signal direction judging method and related equipment


Info

Publication number
CN111489395B
CN111489395B (application CN202010288935.XA; published as CN111489395A)
Authority
CN
China
Prior art keywords
processed, pixel, data, variance, var
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010288935.XA
Other languages
Chinese (zh)
Other versions
CN111489395A (en)
Inventor
张鑫
陈欢
马维维
杨傲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spreadtrum Communications Shanghai Co Ltd
Original Assignee
Spreadtrum Communications Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spreadtrum Communications Shanghai Co Ltd
Priority to CN202010288935.XA
Publication of CN111489395A
Application granted
Publication of CN111489395B

Classifications

    • G: Physics
    • G06: Computing; calculating or counting
    • G06T: Image data processing or generation, in general
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/90: Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the application provides a method for judging the direction of an image signal, comprising the following steps: taking the position of the current pixel point to be processed as the center, acquiring the n×n neighborhood pixel data of the pixel point to be processed of the image signal; converting the n×n neighborhood pixel data into luminance information of size m×m, and calculating the mean of the luminance information in α directions centered on the pixel point to be processed, thereby obtaining the luminance-information variance in the α directions; and judging whether the pixel point to be processed has a direction, and which specific direction, according to the α directional luminance-information variances and a threshold. The technical scheme provided by the application requires little computation, supports point-by-point judgment, and lends itself to practical engineering implementation.

Description

Image signal direction judging method and related equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method for determining a direction of an image signal and a related device.
Background
Direction detection is a method for acquiring image features in computer vision systems and is widely applied in fields such as motion detection, image matching, video tracking, three-dimensional modeling and target recognition.
Existing direction detection methods include a wavelet-domain edge-direction feature detection method and an edge-direction histogram method.
The wavelet-domain edge-direction feature detection method uses the wavelet coefficient sub-bands HL and LH obtained at each level of wavelet decomposition, together with the sub-bands HL_45 and HL_135 in the two other orthogonal directions obtained after rotating the original image by ±45° from the horizontal; the edge directionality of the corresponding point can then be judged from these four wavelet coefficient sub-bands.
The edge-direction histogram method divides the image into several small regions and each small region into sub-cells according to the size of the direction filter; after operations such as filtering the average gray value of the four sub-cells in each small region with a digital filter and normalizing, the directionality of the currently divided region is judged by histogram statistics.
Existing direction detection methods involve a large amount of computation and high time complexity; moreover, their judgment is based on block statistics rather than pixel-by-pixel, which hinders engineering implementation.
Disclosure of Invention
The embodiment of the application discloses a method for judging the direction of an image signal, which can detect image direction with a small amount of computation at high speed, supports point-by-point detection, and is convenient to implement and use in engineering. A first aspect of the embodiments discloses a method for judging the direction of an image signal, comprising:
taking the position of the current pixel point to be processed as the center, acquiring the n×n neighborhood pixel data in_data[i][j] of the pixel point to be processed of the image signal, where i = 0, 1, …, n-1 and j = 0, 1, …, n-1; in_data[(n-1)/2][(n-1)/2] is then the current pixel point to be detected;
converting the n×n neighborhood pixel data into luminance information of size m×m, and calculating the mean of the luminance information in α directions centered on the pixel point to be processed, thereby obtaining the luminance-information variance in the α directions; and
judging whether the pixel point to be processed has a direction, and which specific direction, according to the α directional luminance-information variances and a threshold;
where m is odd, m ≥ 3, and n = 2m + 1.
In a second aspect, a terminal is provided, which includes:
an acquisition unit, configured to take the position of the current pixel point to be processed as the center and acquire the n×n neighborhood pixel data in_data[i][j] of the pixel point to be processed of the image signal, where i = 0, 1, …, n-1 and j = 0, 1, …, n-1; in_data[(n-1)/2][(n-1)/2] is then the current pixel point to be detected; and
a processing unit, configured to first convert the n×n neighborhood pixel data into luminance information of size m×m, calculate the mean of the luminance information in α directions centered on the pixel point to be processed, and further obtain the luminance-information variance in the α directions; and to judge whether the pixel point to be processed has a direction, and which specific direction, according to the α directional luminance-information variances and a threshold;
where m is odd, m ≥ 3, and n = 2m + 1.
In a third aspect, there is provided a terminal comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps of the method of the first aspect.
A fourth aspect of embodiments of the present application discloses a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to execute the method of the first aspect.
A fifth aspect of embodiments of the present application discloses a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of embodiments of the present application. The computer program product may be a software installation package.
By implementing the embodiments of the application, the terminal takes the current pixel point to be processed as the center, acquires the n×n neighborhood pixel data in_data[i][j] of the pixel point to be processed of the image signal, converts the n×n pixel data into luminance information of size m×m, calculates the mean and variance of the luminance information in the α directions, and finally judges, from the variances and a threshold, whether the pixel point to be processed has a direction and which specific direction.
Drawings
The drawings used in the embodiments of the present application are described below.
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for determining a direction of an image signal according to an embodiment of the present disclosure;
FIG. 2a is a diagram of 11 × 11 neighborhood data;
FIG. 3 is a schematic diagram of a mirror-extended image according to an embodiment of the present application;
FIG. 4 is a schematic view of 4 directions provided by an embodiment of the present application;
fig. 5 is a flowchart illustrating a method for determining a direction of an image signal according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings.
The term "and/or" in this application merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the character "/" herein indicates an "or" relationship between the objects before and after it.
The "plurality" appearing in the embodiments of the present application means two or more. The descriptions of the first, second, etc. appearing in the embodiments of the present application are only for illustrating and differentiating the objects, and do not represent the order or the particular limitation of the number of the devices in the embodiments of the present application, and do not constitute any limitation to the embodiments of the present application. The term "connect" in the embodiments of the present application refers to various connection manners, such as direct connection or indirect connection, to implement communication between devices, which is not limited in this embodiment of the present application.
A terminal in the embodiments of the present application may refer to various forms of UE, access terminal, subscriber unit, subscriber station, mobile station, MS (mobile station), remote station, remote terminal, mobile device, user terminal, terminal device (terminal equipment), wireless communication device, user agent, or user equipment. The terminal device may also be a cellular phone, a cordless phone, a SIP (session initiation protocol) phone, a WLL (wireless local loop) station, a PDA (personal digital assistant) with a wireless communication function, a handheld device with a wireless communication function, a computing device or other processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, or a terminal device in a future 5G network or a future evolved PLMN (public land mobile network), and the like, which is not limited in this embodiment.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a terminal disclosed in an embodiment of the present application. The terminal 100 includes a storage and processing circuit 110 and a sensor 170 connected to the storage and processing circuit 110, where the sensor 170 may include a camera, a distance sensor, a gravity sensor, and the like. The electronic device may include two transparent display screens disposed on the back and front sides of the electronic device; some or all of the components between the two transparent display screens may also be transparent, so that visually the electronic device is a transparent electronic device, or, if only some components are transparent, a hollowed-out electronic device. Wherein:
the terminal 100 may include control circuitry, which may include storage and processing circuitry 110. The storage and processing circuitry 110 may be memory, such as hard disk drive memory, non-volatile memory (e.g., flash memory or other electronically programmable read-only memory used to form a solid state drive, etc.), volatile memory (e.g., static or dynamic random access memory, etc.), etc., and embodiments of the present application are not limited thereto. Processing circuitry in the storage and processing circuitry 110 may be used to control the operation of the terminal 100. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry 110 may be used to run software in the terminal 100, such as an Internet browsing application, a Voice over Internet Protocol (VoIP) telephone call application, an email application, a media playing application, operating system functions, and so forth. The software may be used to perform control operations such as camera-based image capture, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functions implemented with status indicators such as light-emitting-diode status indicator lights, touch event detection based on a touch sensor, functions associated with displaying information on multiple (e.g., layered) display screens, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in the terminal 100, to which the embodiments of the present application are not limited.
The terminal 100 may include an input-output circuit 150. The input-output circuit 150 may be used to enable the terminal 100 to input and output data, i.e., to allow the terminal 100 to receive data from external devices and to output data from the terminal 100 to external devices. The input-output circuit 150 may further include a sensor 170. The sensor 170 may include a vein identification module, and may further include an ambient light sensor, a light- or capacitance-based proximity sensor, a fingerprint identification module, a touch sensor (for example, an optical and/or capacitive touch sensor, which may be part of a touch display screen or applied independently as a touch sensor structure), an acceleration sensor, a camera, and other sensors. The camera may be a front camera or a rear camera; the fingerprint identification module may be integrated below the display screen to collect fingerprint images and may be, for example, an optical fingerprint module, without limitation here. The front camera may be disposed below the front display screen, and the rear camera below the rear display screen. Of course, the front or rear camera need not be integrated with a display screen; in practical applications it may also be of a pop-up structure, and its specific structure is not limited in the embodiments of the present application.
The input-output circuitry 150 may also include one or more display screens; when there are multiple display screens, for example 2, one may be disposed on the front of the electronic device and another on the back, such as display screen 130. The display 130 may include one or a combination of a liquid crystal display, a transparent display, an organic light-emitting diode display, an electronic ink display, a plasma display, and displays using other display technologies. The display screen 130 may include an array of touch sensors (i.e., the display screen 130 may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by an array of transparent touch sensor electrodes (e.g., indium tin oxide (ITO) electrodes), or may be formed using other touch technologies, such as acoustic wave touch, pressure-sensitive touch, resistive touch, optical touch, and the like, to which the embodiments of the present application are not limited.
The terminal 100 can also include an audio component 140. The audio component 140 may be used to provide audio input and output functionality for the terminal 100. The audio components 140 in the terminal 100 may include a speaker, a microphone, a buzzer, a tone generator, and other components for generating and detecting sound.
The communication circuit 120 can be used to provide the terminal 100 with the capability to communicate with external devices. The communication circuit 120 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio frequency signals and/or optical signals. The wireless communication circuitry in communication circuitry 120 may include radio-frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless Communication circuitry in Communication circuitry 120 may include circuitry to support Near Field Communication (NFC) by transmitting and receiving Near Field coupled electromagnetic signals. For example, the communication circuit 120 may include a near field communication antenna and a near field communication transceiver. The communications circuitry 120 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuitry and antenna, and so forth.
The terminal 100 may further include a battery, a power management circuit, and other input-output units 160. The input-output unit 160 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes and other status indicators, and the like.
A user may input commands through input-output circuitry 150 to control operation of terminal 100 and may use output data of input-output circuitry 150 to enable receipt of status information and other outputs from terminal 100.
Referring to fig. 2, fig. 2 provides a method for judging the direction of an image signal, which may be performed by the terminal shown in fig. 1. The method shown in fig. 2 includes the following steps:
Step S200: the terminal takes the position of the current pixel point to be processed as the center and acquires the n×n neighborhood pixel data in_data[i][j] of the pixel point to be processed of the image signal.
Here i = 0, 1, …, n-1 and j = 0, 1, …, n-1; in_data[(n-1)/2][(n-1)/2] is then the current pixel point to be detected.
Referring to fig. 2a, a schematic diagram of 11×11 neighborhood data: the pixel point to be detected in the neighborhood is in_data[5][5], i.e. the R-channel pixel point at the center.
Of course, in practical applications the pixel point to be processed may also be a B-channel, Gr-channel or Gb-channel pixel point.
Step S201: the terminal first converts the n×n neighborhood pixel data into luminance information of size m×m, calculates the mean of the luminance information in α directions centered on the pixel point to be processed, and further obtains the luminance-information variance in the α directions.
Step S202: the terminal judges whether the pixel point to be processed has a direction, and which specific direction, according to the α directional luminance-information variances and a threshold;
where m is odd, m ≥ 3, and n = 2m + 1.
According to the above technical scheme, the terminal takes the position of the current pixel point to be processed as the center, acquires the n×n neighborhood pixel data in_data[i][j] of the pixel point to be processed of the image signal, converts the n×n pixel data into luminance information of size m×m, and calculates the mean and variance of the luminance information in the α directions; it can then judge, from the variances and a threshold, whether the pixel point to be processed has a direction and which specific direction.
In an optional scheme, acquiring the n×n neighborhood pixel data centered on the pixel point to be processed of the image signal specifically includes:
performing mirror extension on the original image to obtain a mirror-extended image, and acquiring the n×n neighborhood pixel data centered on the pixel point to be processed from the mirror-extended image;
the mirror-extended image is obtained by extending the original image in mirror symmetry, taking as symmetry axes, respectively, the first row of pixel points at the upper edge, the last row at the lower edge, the first column at the left edge and the last column at the right edge of the original image.
Because the direction judgment of a pixel point to be processed must incorporate neighborhood pixel information, and the first (n-1)/2 rows, last (n-1)/2 rows, first (n-1)/2 columns and last (n-1)/2 columns of the original image do not all have complete n×n neighborhood data, the method first mirror-extends the original image and then performs the subsequent direction judgment on the basis of the mirror-extended image.
All data points in the original image can then obtain the n×n neighborhood data of the pixel point to be processed from the mirror-extended image of the original image: because the extension mirrors about the first and last rows and the first and last columns of the image, pixel points at all positions of the original image can acquire complete n×n neighborhood data.
In fig. 3, S is the original image area, L the left extension area, R the right extension area, U the upper extension area and D the lower extension area; L + R + U + D + S forms the mirror-extended image. The mirror-extended image of an original M×N image has size (M + n - 1) × (N + n - 1): (n-1)/2 columns of image data are extended on each of the left and right sides, and (n-1)/2 rows on each of the top and bottom. The specific way of mirror-extending the original image data is as follows:
first, the original image is copied row by row, from top to bottom, into the S area; then, taking the first and last columns of the original image as symmetry axes, the data in S are mirrored into L and R; finally, taking the L + R + S region as a whole and its first and last rows of data as symmetry axes, the data in that region are mirrored into U and D, yielding the complete mirror-extended image.
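As a minimal sketch (not from the patent), the mirror extension above maps directly onto NumPy's `'reflect'` padding mode, which mirrors about the edge row/column without duplicating it; the function name and demo sizes are illustrative:

```python
import numpy as np

def mirror_extend(img, n):
    """Pad an image by (n - 1) // 2 rows/columns on every side, mirroring
    about the first/last row and column (the axis pixel is not duplicated),
    so that every original pixel gains a full n x n neighborhood."""
    pad = (n - 1) // 2
    # mode='reflect' mirrors about the edge pixels, matching the patent's
    # "edge row/column as symmetry axis" construction.
    return np.pad(img, pad, mode='reflect')

# An M x N original becomes (M + n - 1) x (N + n - 1):
orig = np.arange(30).reshape(5, 6)
ext = mirror_extend(orig, 7)        # n = 7 -> pad 3 on each side
print(ext.shape)                    # (11, 12)
```

The original image S then sits at `ext[pad:-pad, pad:-pad]`, and the padded borders play the role of the L, R, U and D regions of fig. 3.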
In an optional scheme, converting the n×n neighborhood pixel data of the pixel point to be processed into luminance information of size m×m specifically includes:
performing Gaussian filtering on the input n×n Bayer neighborhood data in units of 3×3 and converting the data into luminance information, on which direction detection for the pixel at the current position is then performed. Because different positions of Bayer data store pixel information of different RGB channels, when performing the luminance conversion on the Bayer data, the method selects only pixel points of the same channel as the current pixel point to be detected as the center pixels of the 3×3 blocks of data to be filtered.
The kernel used for the Gaussian filtering is a 3×3 Gaussian kernel, in its common form

g = (1/16) × [1 2 1; 2 4 2; 1 2 1]

(the exact kernel appears only as an equation image in the original publication).
Then the m×m luminance values of the neighborhood of the pixel at the current position to be detected correspond to the input n×n Bayer neighborhood data as follows (the original equation is an image; the relation shown assumes the same-channel filter centers sit at the odd indices of the window):

lum_data[ii][jj] = sum over p, q in {-1, 0, 1} of g[p+1][q+1] * in_data[2*ii+1+p][2*jj+1+q]
where lum_data[ii][jj] is the m×m luminance information value converted from the input n×n Bayer data, ii = 0, 1, …, m-1 and jj = 0, 1, …, m-1.
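A sketch of this Bayer-to-luminance conversion. Two assumptions are made explicit here: the kernel is taken to be the common 3×3 Gaussian (1/16)[1 2 1; 2 4 2; 1 2 1], since the patent's exact kernel is published only as an equation image, and the m same-channel filter centers are taken to sit at the odd indices 1, 3, …, 2m-1 of the n×n window (the center pixel is at index m, which is odd):

```python
import numpy as np

# Assumed kernel: the standard 3x3 Gaussian; the patent's exact kernel
# is an equation image not reproduced in this text.
GAUSS_3X3 = np.array([[1, 2, 1],
                      [2, 4, 2],
                      [1, 2, 1]], dtype=float) / 16.0

def bayer_to_luma(in_data):
    """Convert an n x n Bayer neighborhood (n = 2m + 1, m odd) into m x m
    luminance values: each sample is the 3x3 Gaussian response centered on
    a pixel of the same Bayer channel as the center pixel."""
    n = in_data.shape[0]
    m = (n - 1) // 2
    lum = np.empty((m, m))
    for ii in range(m):
        for jj in range(m):
            ci, cj = 2 * ii + 1, 2 * jj + 1           # same-channel center
            win = in_data[ci - 1:ci + 2, cj - 1:cj + 2]
            lum[ii, jj] = float(np.sum(win * GAUSS_3X3))
    return lum

luma = bayer_to_luma(np.ones((11, 11)))   # n = 11 -> m = 5
print(luma.shape)                          # (5, 5)
```

The kernel weights sum to 1, so on a constant input every luminance sample equals the input value.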
In an optional scheme, calculating, from the m×m luminance information converted from the n×n neighborhood pixel data, the mean of the luminance information of the pixel point to be processed in the α directions specifically includes:
extracting, from the m×m luminance information converted from the n×n neighborhood pixel data, the luminance information of the m same-channel pixel points in each β° direction, and calculating the mean of those m luminance values to determine the luminance-information mean of the pixel point to be processed in the corresponding β° direction;
the same-channel pixel points are pixel points belonging to the same channel as the pixel point to be processed.
Here α = 4 and β = 0, 45, 90 or 135, i.e. the horizontal direction, the 45° oblique direction, the vertical direction and the 135° oblique direction, as shown in fig. 4.
Calculating the mean of the luminance information of the m same-channel pixel points in each of the α directions, and determining it as the luminance-information mean of the pixel point to be processed in the corresponding β° direction, may specifically include:
horizontal direction mean value calculation:
mean[0]=(lum_data[(m-1)/2][0]+lum_data[(m-1)/2][1]+…+lum_data[(m-1)/2][m-1])/m
vertical direction mean value calculation:
mean[1]=(lum_data[0][(m-1)/2]+lum_data[1][(m-1)/2]+…+lum_data[m-1][(m-1)/2])/m
calculating the average value in the direction inclined by 45 degrees:
mean[2]=(lum_data[0][m-1]+lum_data[1][m-2]+…+lum_data[m-1][0])/m
calculating the average value of the inclined 135-degree direction:
mean[3]=(lum_data[0][0]+lum_data[1][1]+…+lum_data[m-1][m-1])/m
where lum_data[i][j] is the m×m luminance information converted from the n×n neighborhood pixel data, and mean[i] is the mean of the luminance information in each of the 4 directions.
In an optional scheme, calculating, from the m×m luminance information converted from the n×n neighborhood pixel data, the mean and variance of the luminance information of the pixel point to be processed in the α directions specifically includes:
calculating the variance of the luminance information of the m same-channel pixel points about their mean in each direction, to determine the luminance-information variance in the corresponding β° direction.
Calculating this variance to determine the luminance-information variance in the β° direction may specifically include:
horizontal direction variance calculation:
var[0]=((lum_data[(m-1)/2][0]-mean[0])²+(lum_data[(m-1)/2][1]-mean[0])²+…+(lum_data[(m-1)/2][m-1]-mean[0])²)/m
vertical direction variance calculation:
var[1]=((lum_data[0][(m-1)/2]-mean[1])²+(lum_data[1][(m-1)/2]-mean[1])²+…+(lum_data[m-1][(m-1)/2]-mean[1])²)/m
variance calculation in the 45° oblique direction:
var[2]=((lum_data[0][m-1]-mean[2])²+(lum_data[1][m-2]-mean[2])²+…+(lum_data[m-1][0]-mean[2])²)/m
variance calculation in the 135° oblique direction:
var[3]=((lum_data[0][0]-mean[3])²+(lum_data[1][1]-mean[3])²+…+(lum_data[m-1][m-1]-mean[3])²)/m
where lum_data[i][j] is the m×m luminance information converted from the n×n neighborhood pixel data, mean[i] is the mean of the luminance information in each of the α directions, and var[i] is the corresponding variance of the luminance information in each of the α directions.
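A sketch of the four directional variances, i.e. the population variances (divide by m, matching the formulas) of the m luminance samples along each line; treating the horizontal and vertical lines as the center row and center column is an assumption:

```python
import numpy as np

def direction_variances(lum):
    """var[i]: population variance of the m luminance samples along the
    horizontal, vertical, 45 deg and 135 deg lines through the center."""
    m = lum.shape[0]
    c = (m - 1) // 2
    lines = [
        lum[c, :],                                        # horizontal
        lum[:, c],                                        # vertical
        np.array([lum[i, m - 1 - i] for i in range(m)]),  # 45 deg
        np.diag(lum),                                     # 135 deg
    ]
    # np.ndarray.var divides by m by default (ddof=0), as in the formulas.
    return np.array([ln.var() for ln in lines])

# A horizontal stripe pattern: zero variance along the horizontal line only.
v = direction_variances(np.array([[0., 0, 0],
                                  [1., 1, 1],
                                  [2., 2, 2]]))
print(v)
```

The direction of a structure shows up as the line with the smallest variance, which is exactly what the judgment step below the formulas exploits.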
In an optional scheme, judging whether the pixel point to be processed has a direction according to the α directional luminance-information variances and a threshold specifically includes:
extracting the minimum value var[min_var_index] of the luminance-information variances in the 4 directions; if var[min_var_index] is greater than or equal to the variance threshold min_var_th, the pixel point to be processed is judged to have no direction.
In an optional scheme, the determining, according to the variance of the luminance information in the α directions and a threshold, whether the pixel to be processed has a direction specifically includes:
if the current pixel point to be processed has directionality, the variance of the luminance information in the direction is necessarily very small, and the difference between the variance of the luminance information in the remaining non-directions and the variance of the luminance information in the direction is relatively large.
The image direction detection strength can be flexibly adjusted by setting a threshold value in the direction judging process, and the specifically set threshold value comprises the following steps: a minimum variance threshold min _ var _ th of 4 pieces of directional luminance information; the difference threshold diff _ var _ th for the remaining 3 directional variances that are not the minimum directional variance and the minimum directional variance; the difference value of the remaining 3 directional variances that are not the smallest directional variance from the smallest directional variance exceeds a count threshold th of a difference threshold diff _ var _ th.
First, the minimum value var[min_var_index] is extracted from the luminance information variances in the 4 directions. This minimum variance is then compared with the minimum variance threshold min_var_th: if var[min_var_index] < min_var_th, the pixel point to be processed may have directionality; otherwise, it has no directionality. When var[min_var_index] < min_var_th is satisfied, the minimum directional variance is subtracted from the luminance information variances of the other 3 directions, the 3 resulting differences are compared with the difference threshold diff_var_th, and the number count of the 3 differences greater than diff_var_th is determined; if count is greater than the count threshold count_th, it is determined that the pixel point to be processed has a direction.
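The threshold comparisons described above can be sketched in code. This is an illustrative sketch, not the patented implementation; the function name detect_direction and the sample variance values are chosen here for illustration only.

```python
import numpy as np

# Illustrative sketch of the direction decision: given the variances of the
# luminance information in the 4 directions, apply the three thresholds
# min_var_th, diff_var_th and count_th described above.
def detect_direction(var, min_var_th, diff_var_th, count_th):
    min_var_index = int(np.argmin(var))       # direction with the smallest variance
    if var[min_var_index] >= min_var_th:      # minimum variance too large: no direction
        return -1
    diffs = [var[k] - var[min_var_index] for k in range(4) if k != min_var_index]
    count = sum(d > diff_var_th for d in diffs)  # directions that differ strongly
    return min_var_index if count > count_th else -1

# 0 = horizontal, 1 = vertical, 2 = 45°, 3 = 135°, -1 = no direction
print(detect_direction([1.0, 50.0, 60.0, 70.0],
                       min_var_th=10.0, diff_var_th=20.0, count_th=2))  # → 0
```

Note that a flat region, where all four variances are small and similar, fails the count test and is reported as having no direction.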
In an optional scheme, the determining whether the pixel to be processed has a direction and a specific direction according to the variance of the luminance information in the α directions and the threshold specifically includes:
and when the pixel point to be processed is determined to have the direction, determining the direction of the pixel point to be processed as the direction corresponding to the var [ min _ var _ index ].
When the brightness information variance meets the relevant threshold value setting, the pixel point to be processed has directivity, and the direction of the pixel point to be processed is determined as the direction to which the minimum direction variance var [ min _ var _ index ] belongs, that is, when min _ var _ index =0 (the variance corresponding to the horizontal direction is minimum), it means that the detection result is the horizontal direction; when min _ var _ index =1 (the variance corresponding to the vertical direction is minimum), it indicates that the detection result is the vertical direction; when min _ var _ index =2 (the variance corresponding to the 45 ° oblique direction is minimum), it indicates that the detection result is the 45 ° oblique direction; when min _ var _ index =3 (the variance corresponding to the 135 ° direction is minimum), it indicates that the detection result is the 135 degree direction.
In an optional scheme, the step of determining the relationship between the set threshold and the determination result specifically includes:
When the minimum variance threshold min_var_th of the 4 directional luminance information variances is set smaller, the direction detectability is weaker. That is, with the thresholds diff_var_th and count_th kept unchanged, the smaller min_var_th is set, the smaller the probability that the pixel point to be processed is detected as having a direction.

When the difference threshold diff_var_th between the remaining 3 directional variances (those other than the minimum) and the minimum directional variance is set larger, the direction detectability is weaker. That is, with the thresholds min_var_th and count_th kept unchanged, the larger diff_var_th is set, the smaller the probability that the pixel point to be processed is detected as having a direction.

When the count threshold count_th for the number of the remaining 3 directional variances whose difference from the minimum directional variance exceeds diff_var_th is set larger, the direction detectability is weaker. That is, with the thresholds min_var_th and diff_var_th kept unchanged, the larger count_th is set, the smaller the probability that the pixel point to be processed is detected as having a direction. Since at most 3 and at least 0 of the directional variances can differ from the minimum directional variance by more than diff_var_th, count_th ranges from 0 to 3.
Referring to fig. 5, fig. 5 provides a direction determination method of an image signal, performed by the terminal shown in fig. 1. As shown in fig. 5, the method includes: Bayer image data collected by a CMOS image sensor is used as input, and the direction is judged by traversing the data point by point. The Bayer image data is first converted into luminance information according to a certain correspondence; the variance of the luminance information in each direction is then calculated; finally, whether the current pixel point has a direction, and the specific direction, is judged using the set thresholds.
Specifically, the neighborhood pixel data of size n × n around the position of the current pixel point to be processed is input and stored into an array in_data[i][j]; in_data[(n-1)/2][(n-1)/2] is then the current pixel point to be processed. According to the n × n neighborhood information, the data is converted into m × m luminance information lum_data[ii][jj], where m is an odd number, m ≥ 3, and n = 2m + 1; the specific values can be selected according to the image size and the hardware processing capability.
Wherein in_data[i][j] is the pixel data in the neighborhood of the position of the current pixel point to be detected, i = 0, 1, …, n-1; j = 0, 1, …, n-1; lum_data[ii][jj] is the converted luminance information, ii = 0, 1, …, m-1; jj = 0, 1, …, m-1.
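The indexing convention above can be illustrated with a minimal sketch (the array contents here are stand-in values, not real Bayer data):

```python
import numpy as np

# Minimal illustration of the indexing convention: for an n x n neighborhood
# stored in in_data[i][j], the pixel at in_data[(n-1)//2][(n-1)//2] is the
# current pixel point to be processed.
n = 11
in_data = np.arange(n * n).reshape(n, n)  # stand-in for real Bayer neighborhood data
center = in_data[(n - 1) // 2][(n - 1) // 2]
print(center)  # value at the center position in_data[5][5]
```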
Luminance information calculation
In the direction detection, the input Bayer data shown in fig. 3 needs to be Gaussian-filtered in units of 3 × 3 and converted into luminance information, after which the direction of the pixel at the current position is detected. Because the pixel information of the different RGB channels is stored at different positions of the Bayer data, in the method, when the Bayer data is converted to luminance, only pixel points of the same channel as the current pixel point to be detected are selected as the center pixels of the 3 × 3 data to be filtered. Therefore, the luminance information obtained by Gaussian-filtering the input n × n Bayer data in units of 3 × 3 has size m × m, where n = 2m + 1.
Referring to fig. 3, taking n = 11 as an example, in_data[5][5] is the current pixel point to be processed, which in fig. 3 is an R-channel pixel, and the kernel function used for the Gaussian filtering is

[3 × 3 Gaussian kernel — reproduced as an image in the original document]
Then the correspondence between the 5 × 5 luminance values of the neighborhood of the current position to be detected and the input 11 × 11 Bayer neighborhood data is as follows:

[calculation relationship — reproduced as an image in the original document]
where lum_data[ii][jj] is the corresponding luminance information value converted from the input 11 × 11 Bayer data, ii = 0, 1, …, 4; jj = 0, 1, …, 4.
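The conversion above can be sketched as follows. This is an illustrative sketch, not the patented implementation: the kernel values [[1,2,1],[2,4,2],[1,2,1]]/16 are an assumption (the patent reproduces its kernel only as an image), and the function name bayer_to_lum is chosen for illustration. Same-channel pixels are two samples apart in Bayer data, so with n = 2m + 1 the m × m luminance centers sit at in_data positions (2·ii+1, 2·jj+1), and each 3 × 3 full-Bayer window around such a center is filtered:

```python
import numpy as np

# Assumed 3x3 Gaussian kernel (normalized to sum to 1); the patent's actual
# kernel is given only as an image.
KERNEL = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0

def bayer_to_lum(in_data):
    """Convert an n x n Bayer neighborhood into an m x m luminance map."""
    n = in_data.shape[0]
    m = (n - 1) // 2
    lum = np.empty((m, m))
    for ii in range(m):
        for jj in range(m):
            r, c = 2 * ii + 1, 2 * jj + 1   # same-channel center positions
            # 3x3 full-Bayer window around the same-channel center, filtered
            lum[ii, jj] = np.sum(in_data[r - 1:r + 2, c - 1:c + 2] * KERNEL)
    return lum
```

With n = 11 this yields a 5 × 5 map whose center lum[2][2] corresponds to the current pixel in_data[5][5], matching the variance formulas below.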
The variances of the 4 directional luminance information shown in fig. 4 are calculated, respectively, taking n =11 as an example:
horizontal direction variance calculation:
var[0] = ((lum_data[2][0]-mean[0])² + (lum_data[2][1]-mean[0])² + (lum_data[2][2]-mean[0])² + (lum_data[2][3]-mean[0])² + (lum_data[2][4]-mean[0])²)/5
wherein mean [0] is a mean value corresponding to the luminance information in the horizontal direction, and the calculation mode is as follows:
mean[0]=(lum_data[2][0]+lum_data[2][1]+lum_data[2][2]+lum_data[2][3]+lum_data[2][4])/5
vertical direction variance calculation:
var[1] = ((lum_data[0][2]-mean[1])² + (lum_data[1][2]-mean[1])² + (lum_data[2][2]-mean[1])² + (lum_data[3][2]-mean[1])² + (lum_data[4][2]-mean[1])²)/5
wherein mean [1] is a mean value corresponding to the brightness information in the vertical direction, and the calculation mode is as follows:
mean[1]=(lum_data[0][2]+lum_data[1][2]+lum_data[2][2]+lum_data[3][2]+lum_data[4][2])/5
and (3) calculating the variance in the direction inclined by 45 degrees:
var[2] = ((lum_data[0][4]-mean[2])² + (lum_data[1][3]-mean[2])² + (lum_data[2][2]-mean[2])² + (lum_data[3][1]-mean[2])² + (lum_data[4][0]-mean[2])²)/5
wherein mean [2] is a mean value corresponding to brightness information in a direction inclined by 45 degrees, and the calculation mode is as follows:
mean[2]=(lum_data[0][4]+lum_data[1][3]+lum_data[2][2]+lum_data[3][1]+lum_data[4][0])/5
and calculating the variance in the direction of 135 degrees:
var[3] = ((lum_data[0][0]-mean[3])² + (lum_data[1][1]-mean[3])² + (lum_data[2][2]-mean[3])² + (lum_data[3][3]-mean[3])² + (lum_data[4][4]-mean[3])²)/5
wherein mean [3] is a mean value corresponding to brightness information in the direction inclined by 135 degrees, and the calculation mode is as follows:
mean[3]=(lum_data[0][0]+lum_data[1][1]+lum_data[2][2]+lum_data[3][3]+lum_data[4][4])/5
Obtain the minimum variance: search for the minimum among the variances calculated in the 4 directions, and record the direction min_var_index corresponding to this minimum variance.
Judging the direction: three thresholds diff_var_th, min_var_th and count_th are preset. If var[min_var_index] < min_var_th, the number count of the other 3 directions whose variance differs from var[min_var_index] by more than diff_var_th is counted; if count > count_th, the direction of the current pixel is detected, otherwise min_var_index = -1 is set. If var[min_var_index] ≥ min_var_th, min_var_index = -1 is set directly. The identifier min_var_index of the current pixel, i.e. the pixel direction detection result, is output, wherein min_var_index = 0 indicates that the detection result is the horizontal direction; min_var_index = 1 indicates the vertical direction; min_var_index = 2 indicates the 45° oblique direction; min_var_index = 3 indicates the 135° oblique direction; and min_var_index = -1 indicates no direction.
Referring to fig. 6, fig. 6 provides a terminal including:
an obtaining unit 601, configured to obtain pixel data in_data[i][j] of a neighborhood n × n of a pixel point to be processed of the image signal with the current pixel point position to be processed as the center, where i = 0, 1, …, n-1; j = 0, 1, …, n-1, so that in_data[(n-1)/2][(n-1)/2] is the current pixel point to be detected;
the processing unit 602 is configured to convert pixel data of the neighborhood n × n into luminance information of m × m size, calculate a mean value of luminance information in α directions with the pixel point to be processed as a center, and further obtain a variance of the luminance information in the α directions; judging whether the pixel point to be processed has a direction and a specific direction according to the alpha direction brightness information variance and a threshold value;
m is an odd number, m ≥ 3, and n = 2m + 1.
The specific processing manner of the processing unit in the terminal shown in fig. 6 may refer to the description of the embodiment shown in fig. 2, which is not described herein again.
Referring to fig. 7, fig. 7 is a device 70 provided in this embodiment of the present application, where the device 70 includes a processor 701, a memory 702, and a communication interface 703, and the processor 701, the memory 702, and the communication interface 703 are connected to each other through a bus 704.
The memory 702 includes, but is not limited to, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a portable read-only memory (CD-ROM), and the memory 702 is used for related computer programs and data. The communication interface 703 is used for receiving and transmitting data.
The processor 701 may be one or more Central Processing Units (CPUs), and in the case that the processor 701 is one CPU, the CPU may be a single-core CPU or a multi-core CPU.
The processor 701 in the device 70 is configured to read the computer program code stored in the memory 702 and perform the following operations:
taking the current pixel point position to be processed as the center, acquiring pixel data in_data[i][j] of a neighborhood n × n of the pixel point to be processed of the image signal, where i = 0, 1, …, n-1; j = 0, 1, …, n-1, so that in_data[(n-1)/2][(n-1)/2] is the current pixel point to be detected;
converting pixel data of the neighborhood n multiplied by n into brightness information of m multiplied by m size, and calculating the brightness information mean value of alpha directions with the pixel point to be processed as the center, so as to obtain the brightness information variance of alpha directions;
judging whether the pixel point to be processed has a direction and a specific direction according to the variance of the brightness information in the alpha directions and a threshold value;
m is an odd number, m ≥ 3, and n = 2m + 1.
In an optional scheme, the pixel point to be processed is: r channel pixel points, B channel pixel points, gr channel pixel points or Gb channel pixel points.
In an optional scheme, the obtaining pixel data of a neighborhood n × n with a pixel point to be processed of the image signal as a center specifically includes:
carrying out mirror image extension on the original image to obtain a mirror image extension image; acquiring pixel data of a neighborhood n multiplied by n with a pixel point to be processed as a center from the mirror image extension image;
the mirror image extension image is obtained by extending the original image in a mirror image symmetry mode by respectively taking the first-row pixel points at the upper edge, the last-row pixel points at the lower edge, the first-column pixel points at the left edge and the last-column pixel points at the right edge of the original image as symmetry axes.
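The mirror extension described above can be sketched with NumPy (illustrative function name; np.pad with mode='reflect' reflects about the edge row/column without duplicating it, which matches treating the edge pixels as symmetry axes):

```python
import numpy as np

# Sketch of the mirror extension: each border is reflected about the edge
# row/column so that every pixel has a full (n-1)/2 margin on all sides.
def mirror_extend(img, n):
    pad = (n - 1) // 2
    return np.pad(img, pad, mode='reflect')  # edge pixels act as symmetry axes

img = np.arange(9).reshape(3, 3)
print(mirror_extend(img, 3))  # 5 x 5 mirror-extended image
```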
In an optional scheme, the converting into m × m-sized luminance information according to the pixel data of the neighborhood n × n specifically includes:
in the n × n neighborhood pixel data, pixel points at the same channel position as the current pixel point to be detected are taken as centers, the size 3 × 3 is taken as the neighborhood unit, and the luminance conversion is completed by filtering with a 3 × 3 Gaussian kernel function.
In an optional scheme, the calculating, according to the m × m luminance information converted from the n × n pixel data of the neighborhood, a mean value of luminance information in α directions of the pixel point to be processed specifically includes:
extracting the brightness information of m co-channel pixel points in the beta-degree direction from the m × m brightness information converted from the neighborhood n × n pixel data, calculating the average value of the brightness information of the m co-channel pixel points in the alpha directions, and determining the average value as the brightness information average value of the pixel points to be processed in the corresponding beta-degree direction;
and the same-channel pixel points are pixel points with the same channel as the pixel points to be processed.
In an optional scheme, the calculating, according to the m × m luminance information converted from the n × n pixel data of the neighborhood, a mean value of luminance information in α directions and a variance of the luminance information in α directions of the pixel point to be processed specifically includes:
and calculating the variance of the luminance information of the m same-channel pixel points with respect to the corresponding mean value, to determine the luminance information variance in the β-degree direction.
In an alternative, α =4 and β =0, 45, 90 or 135.
In an optional scheme, the determining, according to the variance of the luminance information in the α directions and a threshold, whether the pixel to be processed has a direction specifically includes:
and extracting the minimum value var[min_var_index] of the luminance information variances in the 4 directions; if var[min_var_index] is greater than or equal to the variance threshold min_var_th, determining that the pixel point to be processed has no direction.
In an optional scheme, the determining, according to the variance of the luminance information in the α directions and a threshold, whether the pixel to be processed has a direction specifically includes:
extracting a minimum value var [ min _ var _ index ] of the brightness information variances in 4 directions, if var [ min _ var _ index ] is less than a variance threshold value min _ var _ th, subtracting the brightness information variances in the remaining 3 directions from the minimum brightness information variance var [ min _ var _ index ], comparing the obtained 3 difference values with a difference threshold value diff _ var _ th, determining the number count of the 3 difference values which is greater than the difference threshold value diff _ var _ th, and if the count is greater than the number threshold value count _ th, determining that the pixel to be processed has the direction.
In an optional scheme, the determining, according to the variance of the luminance information in the α directions and a threshold, whether the pixel point to be processed has a direction and a specific direction includes:
and when the pixel point to be processed is determined to have the direction, determining the direction of the pixel point to be processed as the direction corresponding to the var [ min _ var _ index ].
The embodiment of the present application further provides a chip system, where the chip system includes at least one processor, a memory and an interface circuit, the memory, the interface circuit and the at least one processor being interconnected by lines, and the at least one memory stores a computer program; when the computer program is executed by the processor, the method flows shown in fig. 2 and fig. 5 are realized.
An embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a network device, the method flows shown in fig. 2 and fig. 5 are implemented.
The embodiments of the present application also provide a computer program product, where when the computer program product runs on a terminal, the method flows shown in fig. 2 and fig. 5 are implemented.
Embodiments of the present application also provide a terminal including a processor, a memory, a communication interface, and one or more programs, the one or more programs being stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps in the method of the embodiment shown in fig. 2 or fig. 5.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions in order to realize the functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments provided herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the above-described units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit may be stored in a computer readable memory if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a memory, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above-mentioned method of the embodiments of the present application. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk, and various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, and the memory may include: flash memory disks, Read-Only Memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, the specific implementation manner and the application scope may be changed, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A method for determining a direction of an image signal, comprising:
taking the current pixel point position to be processed as the center, acquiring pixel data in_data[i][j] of a neighborhood n × n of the pixel point to be processed of the image signal, wherein i = 0, 1, …, n-1; j = 0, 1, …, n-1, so that in_data[(n-1)/2][(n-1)/2] is the current pixel point to be detected;
converting pixel data of the neighborhood n multiplied by n into brightness information of m multiplied by m size, and calculating the brightness information mean value of alpha directions with the pixel point to be processed as the center, so as to obtain the brightness information variance of alpha directions;
judging whether the pixel point to be processed has a direction and a specific direction according to the alpha direction brightness information variance and a threshold value;
m is odd number and m is more than or equal to 3,n =2m +1;
the pixel points to be processed are: r channel pixel points, B channel pixel points, gr channel pixel points or Gb channel pixel points;
the brightness information is the brightness value converted by using the pixel data with the peripheral 3 × 3 neighborhood of the pixel point of the same channel to be processed as the unit through the Gaussian filtering method, and the kernel function used by the Gaussian filtering method is
[3 × 3 Gaussian kernel — reproduced as an image in the original document]
The same-channel pixel points are pixel points which are the same as the channels of the pixel points to be processed;
the calculating the mean value of the luminance information in the alpha directions and the variance of the luminance information in the alpha directions of the pixel points to be processed according to the luminance information in the m × m size converted from the pixel data of the neighborhood n × n specifically includes:
and calculating the variance of the brightness information and the brightness information of the m same-channel pixel points to determine the variance of the brightness information in the beta-degree direction.
2. The method according to claim 1, wherein the obtaining of the pixel data of the neighborhood n × n with the pixel point to be processed of the image signal as a center specifically comprises:
carrying out mirror image extension on the original image to obtain a mirror image extension image; acquiring neighborhood n multiplied by n pixel data taking a pixel point to be processed as a center from the mirror image extension image;
the mirror image extension image is obtained by extending the original image in a mirror-symmetric manner, respectively taking the first row of pixel points at the upper edge, the last row of pixel points at the lower edge, the first column of pixel points at the left edge and the last column of pixel points at the right edge of the original image as symmetry axes; the extension at each boundary is (n-1)/2 rows or (n-1)/2 columns, i.e. when the original image size is M × N, the mirror-extended size is (M + n - 1) × (N + n - 1).
3. The method of claim 1, wherein the calculating the mean value of the luminance information in the α directions of the pixel points to be processed according to the m × m luminance information converted from the pixel data of the neighborhood n × n specifically comprises:
extracting brightness information of m same-channel pixel points in the direction of beta degrees from the pixel data of the neighborhood n multiplied by n respectively, and calculating the average value of the brightness information of the m same-channel pixel points in the direction of alpha respectively to determine the average value of the brightness information of the pixel points to be processed in the direction of beta degrees;
the co-channel pixel points are pixel points which are the same as the channels of the pixel points to be processed.
4. The method of claim 1 or 3, wherein α =4 and β =0, 45, 90 or 135.
5. The method according to claim 1, wherein the determining whether the pixel to be processed has a direction according to the variance of the luminance information in the α directions and a threshold specifically comprises:
and extracting the minimum value var [ min _ var _ index ] of the brightness information variance in 4 directions, and if var [ min _ var _ index ] is more than or equal to a variance threshold value min _ var _ th, determining that the pixel point to be processed has no direction.
6. The method according to claim 1, wherein the determining whether the pixel to be processed has a direction according to the variance of the luminance information in the α directions and a threshold specifically comprises:
extracting a minimum value var [ min _ var _ index ] of the brightness information variances in 4 directions, if var [ min _ var _ index ] is less than a variance threshold value min _ var _ th, subtracting the brightness information variances in the remaining 3 directions from the minimum brightness information variance var [ min _ var _ index ], comparing the obtained 3 difference values with a difference threshold value diff _ var _ th, determining the number count of the 3 difference values which is greater than the difference threshold value diff _ var _ th, and if the count is greater than the number threshold value count _ th, determining that the pixel to be processed has the direction.
7. The method according to claim 6, wherein the determining whether the pixel to be processed has a direction and a specific direction according to the variance of the luminance information in the α directions and a threshold value specifically comprises:
and when the pixel point to be processed is determined to have the direction, determining the direction of the pixel point to be processed as the direction corresponding to the var [ min _ var _ index ].
8. A terminal, comprising:
the image processing device comprises an acquisition unit, configured to acquire pixel data in_data[i][j] of a neighborhood n × n of a pixel point to be processed of an image signal with the current pixel point to be processed as the center, wherein i = 0, 1, …, n-1; j = 0, 1, …, n-1, so that in_data[(n-1)/2][(n-1)/2] is the current pixel point to be detected;
the processing unit is used for firstly converting the pixel data of the neighborhood nxn into the brightness information of the size of mxm according to the pixel data of the neighborhood, calculating the brightness information mean value of alpha directions taking the pixel point to be processed as the center, and further obtaining the brightness information variance of the alpha directions; judging whether the pixel point to be processed has a direction and a specific direction according to the alpha direction brightness information variance and a threshold value;
wherein m is an odd number, m ≥ 3, and n = 2m + 1;
the pixel to be processed is an R-channel pixel, a B-channel pixel, a Gr-channel pixel, or a Gb-channel pixel;
the luminance information is a luminance value converted, by means of Gaussian filtering, from the pixel data of the surrounding 3 × 3 neighborhood of same-channel pixels of the pixel to be processed, the kernel function used by the Gaussian filtering being
Figure FDA0004052258390000031
the same-channel pixels are pixels on the same channel as the pixel to be processed;
the calculating, according to the luminance information of size m × m converted from the pixel data of the n × n neighborhood, the luminance information means of the α directions and the luminance information variances of the α directions of the pixel to be processed specifically comprises:
calculating the variance of the luminance information of the m same-channel pixels along the β° direction to determine the luminance information variance in the β° direction.
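As an informal sketch of the per-direction statistics described above, assume the m × m luminance plane has already been obtained and that the four directions are horizontal, vertical, and the two diagonals. The α directions themselves are defined by the claims, so these direction choices and the function name are illustrative assumptions:

```python
import numpy as np

def directional_variances(lum):
    """lum: m x m luminance plane centered on the pixel to be processed.

    For each of 4 illustrative directions (0, 90, 45, 135 degrees), take the
    m luminance values passing through the center, then return the variance
    of each line as mean squared deviation from the per-direction mean.
    """
    m = lum.shape[0]
    c = m // 2
    lines = [
        lum[c, :],                # 0 degrees (horizontal)
        lum[:, c],                # 90 degrees (vertical)
        np.diag(np.fliplr(lum)),  # 45 degrees (anti-diagonal)
        np.diag(lum),             # 135 degrees (main diagonal)
    ]
    return [float(np.mean((v - v.mean()) ** 2)) for v in lines]
```

A direction along an edge then yields a small variance (the luminance is roughly constant along the edge), while directions crossing the edge yield larger variances.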
9. A terminal, comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any one of claims 1-7.
10. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored thereon, wherein the computer program causes a computer to perform the method according to any one of claims 1-7.
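The 3 × 3 same-channel Gaussian filtering described in claim 8 can be sketched as follows. The kernel values below are a common normalized 3 × 3 Gaussian approximation and are an assumption, since the actual kernel function is defined in the figure referenced by the claim:

```python
import numpy as np

# Assumed 3x3 Gaussian kernel (illustrative only; the patent's actual
# kernel is given in the referenced figure).
GAUSS_3X3 = np.array([[1.0, 2.0, 1.0],
                      [2.0, 4.0, 2.0],
                      [1.0, 2.0, 1.0]]) / 16.0

def gaussian_luminance(same_channel):
    """Convert a plane of same-channel pixel data into luminance values.

    Each output value Gaussian-filters the 3 x 3 same-channel neighborhood
    around one pixel ("valid" convolution, so the output shrinks by 2).
    """
    k = same_channel.shape[0]
    out = np.empty((k - 2, k - 2))
    for i in range(k - 2):
        for j in range(k - 2):
            out[i, j] = np.sum(same_channel[i:i + 3, j:j + 3] * GAUSS_3X3)
    return out
```

Because the kernel is normalized (its weights sum to 1), a constant input plane maps to the same constant luminance, which is the expected behavior of a smoothing filter.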
CN202010288935.XA 2020-04-13 2020-04-13 Image signal direction judging method and related equipment Active CN111489395B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010288935.XA CN111489395B (en) 2020-04-13 2020-04-13 Image signal direction judging method and related equipment

Publications (2)

Publication Number Publication Date
CN111489395A CN111489395A (en) 2020-08-04
CN111489395B true CN111489395B (en) 2023-03-14

Family

ID=71812746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010288935.XA Active CN111489395B (en) 2020-04-13 2020-04-13 Image signal direction judging method and related equipment

Country Status (1)

Country Link
CN (1) CN111489395B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111739110B (en) * 2020-08-07 2020-11-27 北京美摄网络科技有限公司 Method and device for detecting image over-darkness or over-exposure

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017096946A1 (en) * 2015-12-07 2017-06-15 乐视控股(北京)有限公司 Method and device for locating high-frequency information of image
CN106934768A (en) * 2015-12-30 2017-07-07 展讯通信(天津)有限公司 A kind of method and device of image denoising

Similar Documents

Publication Publication Date Title
US10061969B2 (en) Fingerprint unlocking method and terminal
CN107038681B (en) Image blurring method and device, computer readable storage medium and computer device
CN112308806B (en) Image processing method, device, electronic equipment and readable storage medium
CN107909583B (en) Image processing method and device and terminal
CN107480496A (en) Solve lock control method and Related product
CN107645606B (en) Screen brightness adjusting method, mobile terminal and readable storage medium
CN107423699A (en) Biopsy method and Related product
CN111614908B (en) Image processing method, image processing device, electronic equipment and storage medium
CN107506687A (en) Biopsy method and Related product
CN107451454B (en) Unlocking control method and related product
CN106297657A (en) The brightness adjusting method of a kind of AMOLED display screen and terminal
CN107749046B (en) Image processing method and mobile terminal
CN111401463B (en) Method for outputting detection result, electronic equipment and medium
CN109151348B (en) Image processing method, electronic equipment and computer readable storage medium
CN112703534B (en) Image processing method and related product
CN111599460A (en) Telemedicine method and system
CN107368791A (en) Living iris detection method and Related product
CN111881813B (en) Data storage method and system of face recognition terminal
CN111145151B (en) Motion area determining method and electronic equipment
CN112329926A (en) Quality improvement method and system for intelligent robot
CN111489395B (en) Image signal direction judging method and related equipment
CN110162264B (en) Application processing method and related product
CN107357412A (en) Solve lock control method and Related product
CN108520727B (en) Screen brightness adjusting method and mobile terminal
CN110661972A (en) Image processing method, image processing apparatus, electronic device, and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant