CN114363519A - Image processing method and device and electronic equipment - Google Patents

Info

Publication number
CN114363519A
Authority
CN
China
Prior art keywords
pixel
image
target
point
coordinate
Prior art date
Legal status
Pending
Application number
CN202210022321.6A
Other languages
Chinese (zh)
Inventor
贺天童
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority application: CN202210022321.6A
Publication: CN114363519A
PCT application: PCT/CN2023/070691 (published as WO2023131236A1)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N5/00: Details of television systems
    • H04N5/44: Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/57: Control of contrast or brightness

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image processing method and apparatus, and an electronic device, in the field of image processing technology. The method comprises the following steps: acquiring a first image; acquiring at least one pixel coordinate and a first pixel value of the at least one pixel coordinate; generating a second image including at least one highlight point based on the at least one pixel coordinate and the first pixel value of the at least one pixel coordinate; and superimposing the second image on the first image to obtain a target image.

Description

Image processing method and device and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
At present, to achieve a better visual effect in image processing, highlight points can be added to an image; an image processing method for adding highlight points to an image is therefore needed.
Disclosure of Invention
In order to solve the technical problem or at least partially solve the technical problem, the present disclosure provides an image processing method, an apparatus, and an electronic device, which can superimpose a highlight on an image to achieve an effect of adding the highlight to the image.
In order to achieve the above purpose, the technical solutions provided by the embodiments of the present disclosure are as follows:
in a first aspect, an image processing method is provided, including:
acquiring a first image;
acquiring at least one pixel coordinate and a first pixel value of the at least one pixel coordinate;
generating a second image including at least one highlight point based on the at least one pixel coordinate and a first pixel value of the at least one pixel coordinate;
and superposing the second image on the first image to obtain a target image.
As an optional implementation manner of the embodiment of the present disclosure, the acquiring at least one pixel coordinate and a first pixel value of the at least one pixel coordinate includes:
acquiring at least one three-dimensional world coordinate;
determining the at least one pixel coordinate according to the at least one three-dimensional world coordinate;
determining a first pixel value of the at least one pixel coordinate based on the at least one three-dimensional world coordinate and a trigonometric function.
As an optional implementation manner of the embodiment of the present disclosure, the determining, according to the at least one three-dimensional world coordinate and the trigonometric function, a first pixel value of the at least one pixel coordinate includes:
inputting the at least one three-dimensional world coordinate to a trigonometric function to obtain a first pixel value of the at least one pixel coordinate;
wherein the trigonometric function is expressed as: y = frac(sin(x) × 100.99999), where x represents a three-dimensional world coordinate and y is used to characterize the first pixel value.
As an optional implementation manner of the embodiment of the present disclosure, the acquiring at least one pixel coordinate and a first pixel value of the at least one pixel coordinate includes:
acquiring a random noise map;
determining a target normal vector of each pixel point in the random noise image;
determining the pixel value of each pixel point based on the target normal vector and the sight line direction vector;
determining at least one first pixel point from the pixel points, wherein the pixel value of the at least one first pixel point is greater than or equal to a preset value;
and determining the pixel coordinate of the at least one first pixel point as the at least one pixel coordinate, and taking the pixel value of the at least one first pixel point as the first pixel value corresponding to the at least one pixel coordinate.
As an optional implementation manner of the embodiment of the present disclosure, the target normal vector is an object normal vector, or an offset normal vector.
As an optional implementation manner of the embodiment of the present disclosure, the determining the target normal vector of each pixel point in the random noise map includes:
determining an object normal vector of each pixel point in the random noise image;
constructing a matrix offset normal vector of each pixel point, wherein the matrix offset normal vector is used for offsetting the normal vector of the object;
and calculating to obtain the offset normal vector of each pixel point according to the object normal vector of each pixel point and the matrix offset normal vector of each pixel point.
As an optional implementation manner of the embodiment of the present disclosure, the generating a second image including at least one highlight point based on the at least one pixel coordinate and the first pixel value of the at least one pixel coordinate includes:
generating an initial image including the at least one highlight point based on the at least one pixel coordinate and a first pixel value of the at least one pixel coordinate;
performing n-round convolution operations on the initial image by using a target convolution kernel to obtain a second image comprising the at least one highlight point;
and the moving direction of the target convolution kernel is the target direction, and n is a positive integer.
As an optional implementation manner of the embodiment of the present disclosure, the target convolution kernel is a mean convolution kernel, or a gaussian convolution kernel.
As an optional implementation manner of the embodiment of the present disclosure, the method further includes:
after the (i-1)-th convolution operation and before the i-th convolution operation, determining that a first weight value in the target convolution kernel in the i-th convolution operation is 1/i of a second weight value in the (i-1)-th convolution operation;
the first weight value is the weight corresponding to a pixel point whose distance from a target highlight point is i pixel widths, the second weight value is the weight corresponding to a pixel point whose distance from the target highlight point is i-1 pixel widths, the target highlight point is any one of the at least one highlight point, and i is an integer greater than or equal to 2 and less than or equal to n.
As an optional implementation manner of this embodiment of the present disclosure, the performing n-round convolution operations on the initial image with a target convolution kernel to obtain a second image including the at least one highlight point includes:
in the jth convolution operation, the following steps are performed:
acquiring a rotation angle;
calculating the target pixel coordinate after rotation according to the rotation angle, the pixel width and the initial pixel coordinate of each pixel point in the convolution result of the jth convolution operation;
and setting the pixel value of each pixel point in the convolution result of the jth convolution operation as the pixel value of the pixel point of the target pixel coordinate, wherein j is an integer less than or equal to n.
In a second aspect, there is provided an image processing apparatus comprising:
the acquisition module is used for acquiring a first image; acquiring at least one pixel coordinate and a first pixel value of the at least one pixel coordinate;
a generating module for generating a second image comprising at least one highlight point based on the at least one pixel coordinate and a first pixel value of the at least one pixel coordinate;
and the superposition module is used for superposing the second image on the first image to obtain a target image.
As an optional implementation manner of the embodiment of the present disclosure, the obtaining module is specifically configured to:
acquiring at least one three-dimensional world coordinate;
determining the at least one pixel coordinate according to the at least one three-dimensional world coordinate;
determining a first pixel value of the at least one pixel coordinate based on the at least one three-dimensional world coordinate and a trigonometric function.
As an optional implementation manner of the embodiment of the present disclosure, the obtaining module is specifically configured to:
inputting the at least one three-dimensional world coordinate to a trigonometric function to obtain a first pixel value of the at least one pixel coordinate;
wherein the trigonometric function is expressed as: y = frac(sin(x) × 100.99999), where x represents a three-dimensional world coordinate and y is used to characterize the first pixel value.
As an optional implementation manner of the embodiment of the present disclosure, the obtaining module is specifically configured to:
acquiring a random noise map;
determining a target normal vector of each pixel point in the random noise image;
determining the pixel value of each pixel point based on the target normal vector and the sight line direction vector;
determining at least one first pixel point from the pixel points, wherein the pixel value of the at least one first pixel point is greater than or equal to a preset value;
and determining the pixel coordinate of the at least one first pixel point as the at least one pixel coordinate, and taking the pixel value of the at least one first pixel point as the first pixel value corresponding to the at least one pixel coordinate.
As an optional implementation manner of the embodiment of the present disclosure, the target normal vector is an object normal vector, or an offset normal vector.
As an optional implementation manner of the embodiment of the present disclosure, the target normal vector is an offset normal vector, and the obtaining module is specifically configured to:
determining an object normal vector of each pixel point in the random noise image;
constructing a matrix offset normal vector of each pixel point, wherein the matrix offset normal vector is used for offsetting the normal vector of the object;
and calculating to obtain the offset normal vector of each pixel point according to the object normal vector of each pixel point and the matrix offset normal vector of each pixel point.
As an optional implementation manner of the embodiment of the present disclosure, the generating module is specifically configured to:
generating an initial image including the at least one highlight point based on the at least one pixel coordinate and a first pixel value of the at least one pixel coordinate;
performing n-round convolution operations on the initial image by using a target convolution kernel to obtain a second image comprising the at least one highlight point;
and the moving direction of the target convolution kernel is the target direction, and n is a positive integer.
As an optional implementation manner of the embodiment of the present disclosure, the target convolution kernel is a mean convolution kernel, or a gaussian convolution kernel.
As an optional implementation manner of the embodiment of the present disclosure, the generating module is further configured to:
after the (i-1)-th convolution operation and before the i-th convolution operation,
determining that a first weight value in the target convolution kernel in the i-th round of convolution operation is 1/i of a second weight value in the (i-1)-th round of convolution operation;
the first weight value is the weight corresponding to a pixel point whose distance from a target highlight point is i pixel widths, the second weight value is the weight corresponding to a pixel point whose distance from the target highlight point is i-1 pixel widths, the target highlight point is any one of the at least one highlight point, and i is an integer greater than or equal to 2 and less than or equal to n.
As an optional implementation manner of the embodiment of the present disclosure, the generating module is specifically configured to:
in the jth convolution operation, the following steps are performed:
acquiring a rotation angle;
calculating the target pixel coordinate after rotation according to the rotation angle, the pixel width and the initial pixel coordinate of each pixel point in the convolution result of the jth convolution operation;
and setting the pixel value of each pixel point in the convolution result of the jth convolution operation as the pixel value of the pixel point of the target pixel coordinate, wherein j is an integer less than or equal to n.
In a third aspect, an electronic device is provided, including: a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the image processing method according to the first aspect or any one of its alternative embodiments.
In a fourth aspect, a computer-readable storage medium is provided, comprising: the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements an image processing method as set forth in the first aspect or any one of its alternative embodiments.
In a fifth aspect, a computer program product is provided, comprising a computer program which, when run on a computer, causes the computer to implement the image processing method according to the first aspect or any one of its alternative embodiments.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages: in the embodiment of the present disclosure, a second image including at least one highlight point may be generated based on at least one pixel coordinate and a first pixel value of the at least one pixel coordinate, and the generated second image may be superimposed on the acquired first image, so that at least one highlight point may be superimposed on the first image, and the obtained target image may exhibit a highlight display effect.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments or technical solutions in the prior art of the present disclosure, the drawings used in the description of the embodiments or prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a first schematic flowchart of an image processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart illustrating an image processing method according to an embodiment of the disclosure;
fig. 3 is a schematic diagram of a cross-shaped highlight point generation process provided in an embodiment of the present disclosure;
FIG. 4A is a diagram illustrating a mean convolution kernel according to an embodiment of the present disclosure;
FIG. 4B is a diagram illustrating a Gaussian convolution kernel according to an embodiment of the present disclosure;
fig. 5 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a hardware structure according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced in other ways than those described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
In order to add a highlight to an image, embodiments of the present disclosure provide an image processing method, an image processing apparatus, and an electronic device, which may generate a second image including at least one highlight based on at least one pixel coordinate and a first pixel value of the at least one pixel coordinate, and superimpose the generated second image on the acquired first image, so that at least one highlight may be superimposed on the first image, and an obtained target image may exhibit a highlight display effect.
The image processing method provided in the embodiments of the present disclosure may be implemented by an image processing apparatus and an electronic device, where the image processing apparatus may be a functional module or a functional entity in the electronic device for implementing the image processing method.
The electronic device may be a tablet computer, a mobile phone, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, and the like, which is not particularly limited in this disclosure.
As shown in fig. 1, a first flowchart of an image processing method provided in an embodiment of the present disclosure may include the following steps S101 to S104.
S101, acquiring a first image.
The first image may be an image stored locally on the electronic device, or an image acquired by the electronic device in real time through a camera.
In the embodiment of the present disclosure, the content of the first image is not limited, that is, regardless of the content in the first image, highlight points may be superimposed on the first image by the image processing methods shown in steps S101 to S104 to present a highlighting display effect.
S102, at least one pixel coordinate and a first pixel value of the at least one pixel coordinate are obtained.
In the embodiment of the present disclosure, the at least one pixel coordinate and its first pixel value may be obtained by generating a random number and determining the at least one pixel coordinate from it. Illustratively, the random number may be generated in two ways: by a random function (e.g., the trigonometric function described below) or from a random noise map. In some embodiments, the random number is generated by a random function; since only a simple pseudo-random number is needed here, unlike cryptography with its strict requirements on random numbers, a dedicated pseudo-random number generator is unnecessary and the random number can be obtained from a trigonometric function.
For example, the obtaining at least one pixel coordinate and the first pixel value of the at least one pixel coordinate in S102 includes the following steps:
S102a, acquiring at least one three-dimensional world coordinate.
The at least one three-dimensional world coordinate may be randomly acquired, or the at least one three-dimensional world coordinate may be acquired according to a certain acquisition rule, which is not limited in the embodiment of the present disclosure, and may be set according to actual requirements.
S102b, determining at least one pixel coordinate based on the at least one three-dimensional world coordinate.
Wherein each three-dimensional world coordinate in the at least one three-dimensional world coordinate may be converted to a two-dimensional pixel coordinate, such that the at least one pixel coordinate may be obtained.
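The disclosure does not spell out how a three-dimensional world coordinate is converted to a two-dimensional pixel coordinate. A minimal sketch, assuming the standard view/projection matrix pipeline of computer graphics (the function name and matrix conventions here are illustrative, not from the source), could look like this:

```python
import numpy as np

def world_to_pixel(world_xyz, view, proj, width, height):
    """Project a 3D world coordinate to a 2D pixel coordinate via
    view and projection matrices (standard graphics convention)."""
    p = np.append(np.asarray(world_xyz, dtype=float), 1.0)  # homogeneous coord
    clip = proj @ (view @ p)
    ndc = clip[:3] / clip[3]                     # normalized device coords in [-1, 1]
    px = int((ndc[0] * 0.5 + 0.5) * (width - 1))
    py = int((1.0 - (ndc[1] * 0.5 + 0.5)) * (height - 1))  # flip y: row 0 is top
    return px, py
```

With identity matrices, the world origin lands at the center of the image, which matches the usual NDC-to-window mapping.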
S102c, determining a first pixel value of at least one pixel coordinate based on the at least one three-dimensional world coordinate and the trigonometric function.
The determining the first pixel value of the at least one pixel coordinate according to the at least one three-dimensional world coordinate and the trigonometric function may include: at least one three-dimensional world coordinate is input to the trigonometric function to obtain a first pixel value of at least one pixel coordinate.
Wherein the trigonometric function is expressed as formula one: y = frac(sin(x) × 100.99999), where x represents a three-dimensional world coordinate and y characterizes the first pixel value.
It can be understood that the y obtained from formula one is a random number, and in the embodiment of the present disclosure y is used to characterize the first pixel value. In this implementation, because the range of sin(x) lies in [-1, 1], the random number y obtained after taking the fractional part lies between 0 and 1; meanwhile, even when the inputs x differ only slightly, the resulting random numbers differ greatly, so the random effect is satisfied and a random first pixel value is obtained. After y is calculated, the corresponding highlight point can be generated based on y and the corresponding two-dimensional coordinates.
It is understood that after y is obtained according to formula one, y may represent the first pixel value, but the value of y is not directly taken as the first pixel value, but y is converted into the corresponding first pixel value through a certain conversion relationship. Wherein, the larger the y value is, the larger the corresponding first pixel value is; the smaller the y value, the smaller the corresponding first pixel value.
It should be noted that the trigonometric function shown in the formula one above is one of random functions, and in practice, other forms of random numbers determined by random functions may also be used to characterize the pixel value corresponding to the highlight in the embodiment of the present disclosure.
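As a sketch, formula one can be implemented directly; `frac` denotes the fractional-part function as in shader languages (the helper names are illustrative):

```python
import math

def frac(v):
    # fractional part, non-negative for any input (shader-style frac)
    return v - math.floor(v)

def trig_random(x):
    """Formula one: y = frac(sin(x) * 100.99999), a pseudo-random value in [0, 1)."""
    return frac(math.sin(x) * 100.99999)
```

Because sin(x) is scaled by a large non-integer factor before taking the fractional part, inputs that differ only slightly yield noticeably different outputs, which is the random effect the disclosure relies on.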
In some embodiments, the random number may be generated from a random noise map, which in the embodiments of the present disclosure may be understood as one noise map selected at random. Because the shape of the noise map is controllable, generating random numbers from a random noise map is more reliable.
For example, the obtaining at least one pixel coordinate and the first pixel value of the at least one pixel coordinate in S102 includes the following steps:
S102A, acquiring a random noise map.
S102B, determining the target normal vector of each pixel point in the random noise image.
The target normal vector is an object normal vector or an offset normal vector. That is to say, in the embodiment of the present disclosure, one implementation manner is: determining the at least one pixel coordinate and a first pixel value corresponding to the at least one pixel coordinate based on the object normal vector of each pixel point in the random noise map; the other realization mode is as follows: the at least one pixel coordinate and the first pixel value corresponding to the at least one pixel coordinate can be determined based on the offset normal vector of each pixel point in the random noise image.
For the case that the target normal vector is an offset normal vector, the above-mentioned manner of determining the offset normal vector of each pixel point in the random noise map may be:
(1) and determining the object normal vector of each pixel point in the random noise image.
The object normal vector of a pixel point is the vector perpendicular to the object surface at the position corresponding to that pixel point.
(2) And constructing a matrix offset normal vector of each pixel point, wherein the matrix offset normal vector is used for offsetting the normal vector of the object.
The DisMatrix can be constructed freely from a random sampling map; the embodiment of the present disclosure does not limit its construction form, provided that it offsets the object normal vector.
Illustratively, the matrix offset normal vector DisMatrix constructed for one pixel point in the random sampling graph is as follows:
[The DisMatrix expression appears as an image in the original publication and is not reproduced here.]
in the matrix offset normal vector, x represents a value of a R (red) color channel of a current pixel, y represents a value of a G (green) color channel of the current pixel, and z represents a value of a B (blue) color channel of the current pixel.
(3) And calculating to obtain the offset normal vector of each pixel point according to the object normal vector of each pixel point and the matrix offset normal vector of each pixel point.
That is, for each pixel point, the offset normal vector is calculated from that pixel point's object normal vector and its corresponding matrix offset normal vector.
For each pixel point, calculating the offset normal vector can be expressed as the following formula two:
DistortNomal=DisMatrix·normal;
in formula two, DistortNomal represents the offset normal vector, DisMatrix represents the matrix offset normal vector described above, and normal represents the object normal vector.
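Formula two can be sketched as an ordinary matrix-vector product applied to every pixel's object normal. The (3, 3) matrix is supplied by the caller, since the disclosure does not fix the construction of DisMatrix; the renormalization step is an added assumption, not stated in the source:

```python
import numpy as np

def offset_normals(normals, dis_matrix):
    """Compute DistortNomal = DisMatrix . normal for every pixel (formula two).
    normals:    (H, W, 3) array of object normal vectors.
    dis_matrix: (3, 3) matrix offset normal vector (construction left open)."""
    h, w, _ = normals.shape
    out = normals.reshape(-1, 3) @ dis_matrix.T   # per-pixel matrix . vector
    out /= np.linalg.norm(out, axis=1, keepdims=True)  # renormalize (assumption)
    return out.reshape(h, w, 3)
```

An identity DisMatrix leaves the normals unchanged; any other matrix perturbs them, which is what introduces randomness into the highlight positions.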
S102C, determining the pixel value of each pixel point based on the target normal vector and the sight line direction vector.
The pixel value of each pixel point can be calculated based on the target normal vector, the sight-line direction vector, and formula three: the dot product of the sight-line direction vector and the target normal vector is raised to a high power, so the more closely the line of sight coincides with the normal direction, the larger the resulting value.
Wherein formula three is: (viewDir · Q)^P, where P is the power: the larger the value of P, the larger the pixel value of the obtained pixel point, the higher the brightness, and the smaller the range; viewDir represents the sight-line direction vector and Q represents the target normal vector. When the target normal vector is the object normal vector, formula three becomes (viewDir · Normal)^P; when the target normal vector is an offset normal vector, formula three becomes (viewDir · DistortNomal)^P.
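A minimal sketch of formula three follows; the clamp of negative dot products to zero is an assumption not stated in the source (it prevents back-facing normals from producing spurious values):

```python
import numpy as np

def highlight_intensity(view_dir, normal, power):
    """Formula three: (viewDir . Q)^P. A larger P concentrates the highlight."""
    d = float(np.dot(view_dir, normal))
    d = max(d, 0.0)        # clamp back-facing contributions to zero (assumption)
    return d ** power
```

When the normal coincides with the view direction the value is 1; as the normal tilts away, the value falls off, faster for larger P.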
S102D, determining at least one first pixel point from the pixel points, wherein the pixel value of the at least one first pixel point is larger than or equal to a preset value.
In the above manner, the pixel value of each pixel point is obtained by raising the dot product of the sight-line direction vector and the target normal vector to a high power, so pixel values take different magnitudes according to how far the target normal deviates from the sight direction. When at least one first pixel point is then selected by pixel value, pixel points whose target normal vector deviates only slightly from the sight-line direction vector can be determined as highlight points, and the corresponding pixel coordinates and first pixel values are determined accordingly, so that the generated highlight points have randomness.
S102E, determining the pixel coordinate of the at least one first pixel point as the at least one pixel coordinate, and using the pixel value of the at least one first pixel point as the first pixel value corresponding to the at least one pixel coordinate.
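Steps S102D and S102E together reduce to a threshold selection over the computed pixel values. A sketch (function and variable names are illustrative):

```python
import numpy as np

def select_highlights(values, threshold):
    """S102D/S102E: keep pixel points whose value >= threshold as highlight seeds.
    values: (H, W) array of per-pixel values from formula three.
    Returns (pixel coordinates as (x, y) tuples, corresponding first pixel values)."""
    ys, xs = np.nonzero(values >= threshold)
    coords = list(zip(xs.tolist(), ys.tolist()))
    first_values = values[ys, xs].tolist()
    return coords, first_values
```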
Compared with the implementation that determines the at least one pixel coordinate and its corresponding first pixel value based on the object normal vector, the implementation based on the offset normal vector offsets the object's normal vector so that the highlight points are random; in this way, a number of random highlight points exist in the generated image, and the finally determined highlight points are prevented from clustering excessively into a single large highlight region.
S103, generating a second image comprising at least one highlight point based on the at least one pixel coordinate and the first pixel value of the at least one pixel coordinate.
In some embodiments, in the process of generating the second image including the at least one highlight point based on the at least one pixel coordinate and the first pixel value of the at least one pixel coordinate, after the initial image is generated based on the at least one pixel coordinate and the first pixel value of the at least one pixel coordinate, some post-processing operations need to be performed on the initial image before the second image is obtained.
For example, as shown in fig. 2, a second flowchart of an image processing method provided by the embodiment of the present disclosure: after determining at least one pixel coordinate and its first pixel value in step S102 above, rendering may be performed based on these parameters to obtain an initial image, i.e., RT1 shown in fig. 2, which may include at least one highlight point. RT1 may then be post-processed to obtain the final RT2, which is stored in a camera buffer. Finally, the first image (which may be captured by the camera or stored locally) is fused with the RT2 stored in the camera buffer to obtain the corresponding target image.
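The final fusion step can be sketched as follows; the disclosure does not specify the blend mode, so additive blending with clipping is assumed here:

```python
import numpy as np

def superimpose(first_image, second_image):
    """S104: superimpose the highlight image (RT2) on the first image.
    Additive blend with clipping is one plausible reading; the source
    does not fix the blend mode."""
    out = first_image.astype(np.float32) + second_image.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Pixels where the second image is black leave the first image unchanged, while highlight pixels brighten it, saturating at 255.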
In some embodiments, the method specifically includes: generating an initial image including at least one highlight point based on at least one pixel coordinate and a first pixel value of the at least one pixel coordinate; and performing n-round convolution operations on the initial image by using the target convolution kernel to obtain a second image comprising at least one highlight point. The target convolution kernel is a mean value convolution kernel or a Gaussian convolution kernel, the moving direction of the target convolution kernel is the target direction, and n is a positive integer.
The target direction may be any one moving direction or a plurality of different moving directions.
In the embodiment of the present disclosure, the target convolution kernel moves along the target direction to perform n rounds of convolution operations on the initial image, so that each highlight point in the initial image presents two opposite sharp corners (also called tails) along the target direction, that is, one sharp corner in each of the forward direction and the reverse direction of the target direction. After n rounds of convolution operations in one or more directions, a highlight point can be processed into a star shape, which may have 2m corners, where m represents the number of different moving directions used in the convolution operations of the target convolution kernel.
Illustratively, the highlight points may be processed into the shape of a two-pointed star, a four-pointed star (e.g., a cross star), a six-pointed star, an eight-pointed star, or the like. The two-pointed star is obtained by performing n rounds of convolution operations on the highlight point in one direction; the four-pointed star is obtained by performing n rounds of convolution operations on the highlight point in each of two different directions, and the two directions are perpendicular when the four-pointed star is a cross star; the six-pointed star is obtained by performing n rounds of convolution operations on the highlight point in each of three different directions, and the eight-pointed star is obtained by performing n rounds of convolution operations on the highlight point in each of four different directions.
Taking processing the highlight into a cross-star shape as an example, the target convolution kernel can move along the X direction and the Y direction (the X direction is perpendicular to the Y direction) respectively to perform n-round convolution operations on pixel points in the X direction and the Y direction, and by sampling RGBA values of pixels around the highlight in the initial image and superimposing the RGBA values, the pixel points around the highlight in the initial image can all obtain corresponding RGBA values, so that the cross-star shaped highlight can be realized in the finally obtained second image. Where R in the RGBA values represents the value of the R color channel, G represents the value of the G color channel, B represents the value of the B color channel, and A represents opacity (alpha).
For example, as shown in fig. 3, which is a schematic diagram of a cross-shaped highlight point generation process provided by the embodiment of the present disclosure, as shown in (a) in fig. 3, a highlight point may be obtained, after performing n-round convolution operations along the X direction shown in the figure for processing, the obtained highlight point may be as shown in (b) in fig. 3, and then, on the basis of the highlight point shown in (b) in fig. 3, performing n-round convolution operations along the Y direction shown in the figure again, so that a cross-shaped highlight point as shown in (c) in fig. 3 may be obtained.
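As a rough illustration of the convolution passes described above, the following Python sketch blurs a single bright pixel along X and then along Y, producing the cross-shaped spread of figs. 3(a)-(c). A grayscale flat list stands in for the RGBA render target, and the `directional_blur` helper is an illustrative stand-in for the shader passes of the disclosure, not its exact implementation:

```python
def directional_blur(img, w, h, dx, dy, rounds):
    """One mean-blur pass per round along direction (dx, dy).

    Each round averages every pixel with its two neighbours one
    step away along the direction, which smears a bright point
    into two opposite tails along that direction.
    """
    for _ in range(rounds):
        out = img[:]
        for y in range(h):
            for x in range(w):
                acc, cnt = img[y * w + x], 1
                for s in (-1, 1):
                    nx, ny = x + s * dx, y + s * dy
                    if 0 <= nx < w and 0 <= ny < h:
                        acc += img[ny * w + nx]
                        cnt += 1
                out[y * w + x] = acc / cnt
        img = out
    return img

# A single bright point blurred in X, then in Y, yields a cross shape:
# the values along the two axes stay higher than the diagonals.
w = h = 9
img = [0.0] * (w * h)
img[4 * w + 4] = 1.0                        # one highlight point
img = directional_blur(img, w, h, 1, 0, 3)  # n rounds along X
img = directional_blur(img, w, h, 0, 1, 3)  # n rounds along Y
```

With a Gaussian-style weighted kernel instead of the plain mean, the arms would additionally fade with distance, as discussed below for fig. 4B.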
Illustratively, fig. 4A is a schematic diagram of a mean convolution kernel provided in an embodiment of the present disclosure. This convolution kernel is a mean convolution kernel that can be used to construct a cross-star shape; its characteristic is that it carries no distance-dependent weights, i.e., all positions contribute equally.
In order to give the constructed cross-star shape a trailing attenuation effect, a convolution kernel with weights may also be used, for example a Gaussian convolution kernel. Fig. 4B is an exemplary schematic diagram of a Gaussian convolution kernel provided by the embodiment of the present disclosure. As can be seen from fig. 4B, the weight at the center of the Gaussian convolution kernel is the largest, and the weight decreases with distance from the center. After a convolution operation based on such a kernel, the obtained cross-star shape has an obvious trailing attenuation effect: the closer to the center, the higher the brightness, and the farther from the center, the lower the brightness.
It should be noted that fig. 4A and 4B both illustrate a convolution kernel of 3 × 3 as an example, and in practical applications, the size of the convolution kernel may be set according to actual requirements.
The Gaussian convolution kernel assigns different weights to different positions, so that a pixel farther from the center of a highlight point receives a smaller weight, is influenced less by the highlight point, and therefore has lower brightness; in this way, a better trailing attenuation effect can be obtained. In practice, in order to achieve a longer trailing effect, a kernel as small as the 3 × 3 kernel shown in fig. 4B is not used; a larger convolution kernel is used instead. According to the embodiment of the present disclosure, the size of the convolution kernel is controlled by a loop, and the loop variable may be used to scale the weight values in the convolution kernel. In some embodiments, among the n rounds of convolution operations, after the (i-1)-th round of convolution operation and before the i-th round, the weight values in the convolution kernel for the current i-th round may be determined. Specifically, the method comprises: determining that the first weight value in the target convolution kernel in the i-th round of convolution operation is 1/i of the second weight value in the (i-1)-th round of convolution operation.
The first weight value is the weight value corresponding to a pixel point at a distance of i pixel widths from a target highlight point, and the second weight value is the weight value corresponding to a pixel point at a distance of i-1 pixel widths from the target highlight point, where the target highlight point is any highlight point in the at least one highlight point, and i is an integer greater than or equal to 2 and less than or equal to n.
It should be noted that, the setting of the first weight value to 1/i of the second weight value is an exemplary description, and the setting may also be performed according to an actual situation, so long as it is ensured that the first weight value is smaller than the second weight value.
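The 1/i rule above implies that the weight assigned to the pixel i steps from the highlight centre falls off factorially. A minimal sketch, assuming the first weight is 1.0 (the function name and starting value are illustrative, not from the disclosure):

```python
def tail_weights(n, w1=1.0):
    """Weight for the pixel i steps from the highlight centre.

    The round-i weight is 1/i of the round-(i-1) weight (i >= 2),
    so w_i = w1 / i! and the tail fades quickly with distance.
    """
    weights = [w1]  # weight at a distance of 1 pixel width
    for i in range(2, n + 1):
        weights.append(weights[-1] / i)  # w_i = w_{i-1} / i
    return weights

print(tail_weights(4))  # weights: 1, 1/2, 1/6, 1/24
```

As noted above, any schedule works so long as each weight is smaller than the previous one; dividing by a constant instead of i would give a slower fall-off and hence a longer visible tail.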
In some embodiments, in the second image obtained by post-processing the initial image with the convolution operations, the obtained highlight points are in an upright regular star shape, for example an upright cross-star shape. In order to obtain highlight points in a rotated cross-star shape instead, the following scheme can be adopted.
The implementation mode can comprise the following steps: in performing n rounds of convolution operations on the initial image with the target convolution kernel to obtain a second image including at least one highlight point, performing the following steps in a j-th round of convolution operations:
1) and acquiring the rotation angle.
The rotation angle may be a default rotation angle or a rotation angle set by a user.
2) And calculating the coordinates of the rotated target pixel according to the rotation angle, the pixel width and the initial pixel coordinates of each pixel point in the convolution result of the jth convolution operation.
Wherein n rounds of convolution operations are performed in total, j is an integer less than or equal to n, and the j-th round can be any one of the n rounds of convolution operations.
3) And setting the pixel value of each pixel point in the convolution result of the jth convolution operation as the pixel value of the pixel point of the target pixel coordinate.
The target pixel coordinate refers to a target pixel coordinate corresponding to each pixel point after rotation, and it can be understood that one target pixel coordinate is determined for each pixel point in the initial image. The pixel values may refer to RGBA values.
Taking processing the highlight point into a cross-star shape as an example, the direction vector obtained by rotating the basis vector in the X direction by the rotation angle can be determined based on the rotation angle and is denoted Xdir; similarly, the direction vector obtained by rotating the basis vector in the Y direction by the rotation angle is denoted Ydir.
For the j-th round of convolution operation, the rotated target pixel coordinates can be calculated according to Xdir, Ydir, the pixel width, and the following formulas five to eight, and the pixel value of each pixel point in the convolution result of the j-th round of convolution operation is then set as the pixel value of the pixel point at the target pixel coordinate. It should be noted that, in the calculation process, when the target convolution kernel moves along the X direction to perform the convolution operation, the calculation may be performed based on formula five and formula six corresponding to the X direction; when the target convolution kernel moves along the Y direction to perform the convolution operation, the calculation may be performed based on formula seven and formula eight corresponding to the Y direction.
The formula corresponding to the X direction is as follows:
the formula five is as follows:
finaColor+=texture2D(MainTex,vec2(uv.x+Xdir.x*maintexTexelSize.x*xIndex,uv.y-Xdir.y*maintexTexelSize.y*xIndex));
formula six:
finaColor+=texture2D(MainTex,vec2(uv.x-Xdir.x*maintexTexelSize.x*xIndex,uv.y+Xdir.y*maintexTexelSize.y*xIndex))。
In the X direction, each pixel point on the first side of the highlight point is calculated based on formula five to obtain the target pixel coordinate corresponding to each pixel point on the second side of the highlight point, and the RGBA value of each pixel point on the second side of the highlight point is sampled to the pixel point corresponding to that target pixel coordinate; each pixel point on the second side of the highlight point is calculated based on formula six to obtain the target pixel coordinate corresponding to each pixel point on the second side of the highlight point, and the RGBA value of each pixel point on the second side of the highlight point is sampled to the pixel point corresponding to that target pixel coordinate. The first side is the moving direction of the target convolution kernel when moving in the X direction, and the second side is the reverse of the moving direction of the target convolution kernel when moving in the X direction.
Where finaColor + represents the RGBA value corresponding to the target pixel coordinate, texture2D represents RT sampling, MainTex is used to represent which RT is sampled, which refers to the initial image in the embodiment of the present disclosure, vec2 identifies the sampling manner for the sampled RT, uv.x and uv.y represent the initial pixel coordinate of the current pixel currently sampled for the initial image, xdir.x represents the component of Xdir in the X direction, xdir.y represents the component of Xdir in the Y direction, maintexelsize.x represents the pixel width in the X direction, the pixel width in the X direction represents the distance between two pixels adjacent in the X direction, maintexelsize.y represents the pixel width in the Y direction, the pixel width in the Y direction represents the distance between two pixels adjacent in the Y direction, and xIndex represents the number of traversals in the X direction, for example, where n times of convolution operations are performed on the target convolution kernel in the X direction, and xIndex is n.
The formula corresponding to the Y direction is as follows:
the formula seven:
finaColor+=texture2D(MainTex,vec2(uv.x+Ydir.x*maintexTexelSize.x*yIndex,uv.y+Ydir.y*maintexTexelSize.y*yIndex));
the formula eight:
finaColor+=texture2D(MainTex,vec2(uv.x-Ydir.x*maintexTexelSize.x*yIndex,uv.y-Ydir.y*maintexTexelSize.y*yIndex))。
where ydir.x represents the component of Ydir in the X direction, and ydir.y represents the component of Ydir in the Y direction.
Correspondingly, in the Y direction, each pixel point on the first side of the highlight point is calculated based on formula seven to obtain the target pixel coordinate corresponding to each pixel point on the second side of the highlight point, and the RGBA value of each pixel point on the second side of the highlight point is sampled to the pixel point corresponding to that target pixel coordinate; each pixel point on the second side of the highlight point is calculated based on formula eight to obtain the target pixel coordinate corresponding to each pixel point on the second side of the highlight point, and the RGBA value of each pixel point on the second side of the highlight point is sampled to the pixel point corresponding to that target pixel coordinate. The first side is the moving direction of the target convolution kernel when moving in the Y direction, and the second side is the reverse of the moving direction of the target convolution kernel when moving in the Y direction.
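The rotated sampling above can be sketched in Python as follows. The rotation of the basis vectors and the sign pattern follow formulas five to eight, while the function names and the use of plain tuples (instead of shader vec2 values) are illustrative assumptions:

```python
import math

def rotated_dirs(angle_deg):
    """Rotate the X and Y basis vectors by the rotation angle,
    giving Xdir and Ydir, the sampling directions for the
    tilted cross star."""
    a = math.radians(angle_deg)
    xdir = (math.cos(a), math.sin(a))
    ydir = (-math.sin(a), math.cos(a))
    return xdir, ydir

def sample_offsets(uv, xdir, texel, x_index):
    """UV coordinates read on either side of the highlight for one
    X-direction pass, mirroring the formula pair: one sample at
    uv + Xdir*texel*xIndex (with the Y component negated, as in
    formula five) and the opposite sample of formula six."""
    u, v = uv
    tx, ty = texel
    dx, dy = xdir
    forward = (u + dx * tx * x_index, v - dy * ty * x_index)
    backward = (u - dx * tx * x_index, v + dy * ty * x_index)
    return forward, backward
```

At a rotation angle of 0, Xdir and Ydir reduce to the axis-aligned basis vectors and the passes reproduce the upright cross star.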
Based on the above-described embodiment, a highlight point can be processed into a rotated cross-star shape, which exhibits a rotation effect of the highlight point.
And S104, superimposing the second image on the first image to obtain a target image.
In the process of obtaining the target image, the second image and the first image can be arranged on different layers, with the layer of the second image above that of the first image. After the second image is superimposed on the first image, the target image can be obtained; the target image is an image in which the display effect of at least one highlight point is added to the first image.
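Layer superposition of this kind is commonly realized as "over" alpha compositing. The sketch below blends one RGBA pixel of the second image over the corresponding pixel of the first image; the disclosure does not specify the blend formula, so the standard source-over rule is assumed here:

```python
def over(top, bottom):
    """Standard 'over' compositing of two RGBA pixels (0..1 floats):
    the top layer (second image) sits above the bottom layer
    (first image)."""
    r1, g1, b1, a1 = top
    r2, g2, b2, a2 = bottom
    a = a1 + a2 * (1.0 - a1)  # combined opacity
    if a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda c1, c2: (c1 * a1 + c2 * a2 * (1.0 - a1)) / a
    return (blend(r1, r2), blend(g1, g2), blend(b1, b2), a)

# A half-transparent white highlight over an opaque grey pixel.
print(over((1.0, 1.0, 1.0, 0.5), (0.2, 0.2, 0.2, 1.0)))
```

Fully transparent pixels of the second image leave the first image unchanged, so only the highlight points alter the final target image.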
In the embodiment of the present disclosure, a second image including at least one highlight point may be generated based on at least one pixel coordinate and a first pixel value of the at least one pixel coordinate, and the generated second image may be superimposed on the acquired first image, so that at least one highlight point may be superimposed on the first image, and the obtained target image may exhibit a highlight display effect.
As shown in fig. 5, a block diagram of an image processing apparatus provided in an embodiment of the present disclosure includes:
an obtaining module 501, configured to obtain a first image; acquiring at least one pixel coordinate and a first pixel value of the at least one pixel coordinate;
a generating module 502 for generating a second image comprising at least one highlight point based on at least one pixel coordinate and a first pixel value of the at least one pixel coordinate;
and an overlaying module 503, configured to overlay the second image on the first image to obtain a target image.
As an optional implementation manner of the embodiment of the present disclosure, the obtaining module 501 is specifically configured to:
acquiring at least one three-dimensional world coordinate;
determining at least one pixel coordinate according to at least one three-dimensional world coordinate;
a first pixel value of at least one pixel coordinate is determined based on at least one three-dimensional world coordinate and a trigonometric function.
As an optional implementation manner of the embodiment of the present disclosure, the obtaining module 501 is specifically configured to:
inputting at least one three-dimensional world coordinate into a trigonometric function to obtain a first pixel value of at least one pixel coordinate;
wherein the trigonometric function is represented as: y = frac(sin(x) × 100.99999), where x represents three-dimensional world coordinates and y is used to characterize the first pixel value.
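The mapping can be sketched directly in Python; `frac` denotes the fractional-part function of shader languages, and the helper names are illustrative:

```python
import math

def frac(v):
    """Fractional part, matching the frac() of shader languages."""
    return v - math.floor(v)

def first_pixel_value(x):
    """y = frac(sin(x) * 100.99999): a deterministic pseudo-random
    value in [0, 1) derived from a scalar world coordinate."""
    return frac(math.sin(x) * 100.99999)
```

Because sin is deterministic, the same world coordinate always yields the same first pixel value, while nearby coordinates produce effectively uncorrelated values in [0, 1).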
As an optional implementation manner of the embodiment of the present disclosure, the obtaining module 501 is specifically configured to:
acquiring a random noise map;
determining a target normal vector of each pixel point in the random noise image;
determining the pixel value of each pixel point based on the target normal vector and the sight line direction vector;
determining at least one first pixel point from the pixel points, wherein the pixel value of the at least one first pixel point is greater than or equal to a preset value;
and determining the pixel coordinate of at least one first pixel point as at least one pixel coordinate, and taking the pixel value of at least one first pixel point as the first pixel value corresponding to at least one pixel coordinate.
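The per-pixel selection above can be sketched as follows. The shading model is not given by the disclosure, so a clamped dot product of the target normal vector and the sight line direction vector is assumed as the pixel value; the function name and the dict-based noise map are illustrative:

```python
def highlight_pixels(normals, view_dir, threshold):
    """Return (pixel coordinate, pixel value) pairs whose value,
    here a clamped dot product of the target normal vector and
    the sight line direction vector, is greater than or equal to
    the preset value."""
    vx, vy, vz = view_dir
    picked = []
    for coord, (nx, ny, nz) in normals.items():
        value = max(0.0, nx * vx + ny * vy + nz * vz)
        if value >= threshold:
            picked.append((coord, value))
    return picked
```

For example, with a noise map of two pixels whose normals are (0, 0, 1) and (1, 0, 0) and a view direction of (0, 0, 1), only the first pixel reaches a preset value of 0.5 and is kept as a highlight candidate.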
As an alternative implementation of the embodiment of the present disclosure, the target normal vector is an object normal vector, or an offset normal vector.
As an optional implementation manner of the embodiment of the present disclosure, the target normal vector is an offset normal vector, and the obtaining module 501 is specifically configured to:
determining an object normal vector of each pixel point in the random noise image;
constructing a matrix offset normal vector of each pixel point, wherein the matrix offset normal vector is used for offsetting the normal vector of the object;
and calculating to obtain the offset normal vector of each pixel point according to the object normal vector of each pixel point and the matrix offset normal vector of each pixel point.
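One concrete (assumed) form of the matrix offset is a small rotation matrix applied to the object normal. The sketch below rotates the normal about the Z axis by a per-pixel angle; this is merely an illustration of "offset normal = offset matrix × object normal", not the disclosure's exact matrix:

```python
import math

def offset_normal(normal, angle_deg):
    """Apply a Z-axis rotation matrix (standing in for the 'matrix
    offset') to the object normal of a pixel point to obtain its
    offset normal."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    nx, ny, nz = normal
    # 2-D rotation in the XY plane; the Z component is unchanged.
    return (c * nx - s * ny, s * nx + c * ny, nz)
```

Feeding each pixel a different (e.g., noise-derived) angle perturbs the normals, which is what gives the resulting highlight points their randomness.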
As an optional implementation manner of the embodiment of the present disclosure, the generating module 502 is specifically configured to:
generating an initial image including at least one highlight point based on at least one pixel coordinate and a first pixel value of the at least one pixel coordinate;
performing n-round convolution operations on the initial image by using a target convolution kernel to obtain a second image comprising at least one highlight point;
the moving direction of the target convolution kernel is the target direction, and n is a positive integer.
As an optional implementation manner of the embodiment of the present disclosure, the target convolution kernel is a mean convolution kernel, or a gaussian convolution kernel.
As an optional implementation manner of the embodiment of the present disclosure, the generating module 502 is further configured to:
after the convolution operation of the (i-1) th round and before the convolution operation of the (i) th round, setting the pixel value of the target pixel point as 1/i of the pixel value of the target pixel point in the convolution result of the convolution operation of the (i-1) th round;
the target pixel points are pixel points which are at a distance of i pixel widths from the target highlight points, the target highlight points are any highlight point of at least one highlight point, and i is an integer which is greater than or equal to 2 and less than or equal to n.
As an optional implementation manner of the embodiment of the present disclosure, the generating module 502 is specifically configured to:
in the jth convolution operation, the following steps are performed:
acquiring a rotation angle;
calculating the target pixel coordinate after rotation according to the rotation angle, the pixel width and the initial pixel coordinate of each pixel point in the convolution result of the jth convolution operation;
and setting the pixel value of each pixel point in the convolution result of the jth convolution operation as the pixel value of the pixel point of the target pixel coordinate, wherein j is an integer less than or equal to n.
As shown in fig. 6, an embodiment of the present disclosure provides an electronic device, including: a processor 601, a memory 602, and a computer program stored on the memory 602 and executable on the processor 601. When executed by the processor 601, the computer program implements the respective processes of the image processing method in the above-described method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here.
An embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the image processing method in the foregoing method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here.
The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
An embodiment of the present disclosure provides a computer program product carrying a computer program. When executed by a processor, the computer program implements each process of the image processing method in the foregoing method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied in the medium.
In the present disclosure, the Processor may be a Central Processing Unit (CPU), and may also be other general purpose processors, Digital Signal Processors (DSP), Application Specific Integrated Circuits (ASIC), Field-Programmable Gate arrays (FPGA) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In the present disclosure, the memory may include volatile memory in a computer readable medium, Random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
In the present disclosure, computer-readable media include permanent and non-permanent, removable and non-removable storage media. Storage media may implement information storage by any method or technology, and the information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (13)

1. An image processing method, comprising:
acquiring a first image;
acquiring at least one pixel coordinate and a first pixel value of the at least one pixel coordinate;
generating a second image including at least one highlight point based on the at least one pixel coordinate and a first pixel value of the at least one pixel coordinate;
and superposing the second image on the first image to obtain a target image.
2. The method of claim 1, wherein obtaining at least one pixel coordinate and a first pixel value of the at least one pixel coordinate comprises:
acquiring at least one three-dimensional world coordinate;
determining the at least one pixel coordinate according to the at least one three-dimensional world coordinate;
determining a first pixel value of the at least one pixel coordinate based on the at least one three-dimensional world coordinate and a trigonometric function.
3. The method of claim 2, wherein determining the first pixel value for the at least one pixel coordinate from the at least one three-dimensional world coordinate and a trigonometric function comprises:
inputting the at least one three-dimensional world coordinate to a trigonometric function to obtain a first pixel value of the at least one pixel coordinate;
wherein the trigonometric function is represented as: y = frac(sin(x) × 100.99999), where x represents three-dimensional world coordinates and y is used to characterize the first pixel value.
4. The method of claim 1, wherein obtaining at least one pixel coordinate and a first pixel value of the at least one pixel coordinate comprises:
acquiring a random noise map;
determining a target normal vector of each pixel point in the random noise image;
determining the pixel value of each pixel point based on the target normal vector and the sight line direction vector;
determining at least one first pixel point from the pixel points, wherein the pixel value of the at least one first pixel point is greater than or equal to a preset value;
and determining the pixel coordinate of the at least one first pixel point as the at least one pixel coordinate, and taking the pixel value of the at least one first pixel point as the first pixel value corresponding to the at least one pixel coordinate.
5. The method of claim 4, wherein the target normal vector is an object normal vector, or an offset normal vector.
6. The method of claim 4, wherein the target normal vector is an offset normal vector, and wherein the determining the target normal vector for each pixel point in the random noise map comprises:
determining an object normal vector of each pixel point in the random noise image;
constructing a matrix offset normal vector of each pixel point, wherein the matrix offset normal vector is used for offsetting the normal vector of the object;
and calculating to obtain the offset normal vector of each pixel point according to the object normal vector of each pixel point and the matrix offset normal vector of each pixel point.
7. The method of claim 1, wherein generating the second image comprising at least one highlight point based on the at least one pixel coordinate and the first pixel value of the at least one pixel coordinate comprises:
generating an initial image including the at least one highlight point based on the at least one pixel coordinate and a first pixel value of the at least one pixel coordinate;
performing n-round convolution operations on the initial image by using a target convolution kernel to obtain a second image comprising the at least one highlight point;
and the moving direction of the target convolution kernel is the target direction, and n is a positive integer.
8. The method of claim 7, wherein the target convolution kernel is a mean convolution kernel or a gaussian convolution kernel.
9. The method according to claim 7 or 8, characterized in that the method further comprises:
after the (i-1)-th round of convolution operation and before the i-th round of convolution operation, determining that a first weight value in the target convolution kernel in the i-th round of convolution operation is 1/i of a second weight value in the (i-1)-th round of convolution operation;
wherein the first weight value is a weight value corresponding to a pixel point at a distance of i pixel widths from a target highlight point, the second weight value is a weight value corresponding to a pixel point at a distance of i-1 pixel widths from the target highlight point, the target highlight point is any highlight point of the at least one highlight point, and i is an integer greater than or equal to 2 and less than or equal to n.
10. The method of claim 7, wherein performing n-round convolution operations on the initial image with a target convolution kernel to obtain a second image comprising the at least one highlight, comprises:
in the jth convolution operation, the following steps are performed:
acquiring a rotation angle;
calculating the target pixel coordinate after rotation according to the rotation angle, the pixel width and the initial pixel coordinate of each pixel point in the convolution result of the jth convolution operation;
and setting the pixel value of each pixel point in the convolution result of the jth convolution operation as the pixel value of the pixel point of the target pixel coordinate, wherein j is an integer less than or equal to n.
11. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring a first image; acquiring at least one pixel coordinate and a first pixel value of the at least one pixel coordinate;
a generating module for generating a second image comprising at least one highlight point based on the at least one pixel coordinate and a first pixel value of the at least one pixel coordinate;
and the superposition module is used for superposing the second image on the first image to obtain a target image.
12. An electronic device, comprising: processor, memory and computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the image processing method of any one of claims 1 to 10.
13. A computer-readable storage medium, comprising: the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the image processing method of any one of claims 1 to 10.
CN202210022321.6A 2022-01-10 2022-01-10 Image processing method and device and electronic equipment Pending CN114363519A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210022321.6A CN114363519A (en) 2022-01-10 2022-01-10 Image processing method and device and electronic equipment
PCT/CN2023/070691 WO2023131236A1 (en) 2022-01-10 2023-01-05 Image processing method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210022321.6A CN114363519A (en) 2022-01-10 2022-01-10 Image processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114363519A true CN114363519A (en) 2022-04-15

Family

ID=81110048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210022321.6A Pending CN114363519A (en) 2022-01-10 2022-01-10 Image processing method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN114363519A (en)
WO (1) WO2023131236A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023131236A1 (en) * 2022-01-10 2023-07-13 北京字跳网络技术有限公司 Image processing method and apparatus, and electronic device

Citations (8)

Publication number Priority date Publication date Assignee Title
CN102496180A (en) * 2011-12-15 2012-06-13 李大锦 Method for automatically generating wash landscape painting image
CN103914871A (en) * 2014-03-06 2014-07-09 河南农业大学 Method for interactively selecting coordinate points on surface of object based on point cloud data
CN106296781A (en) * 2015-05-27 2017-01-04 深圳超多维光电子有限公司 Specially good effect image generating method and electronic equipment
CN106791016A (en) * 2016-11-29 2017-05-31 努比亚技术有限公司 A kind of photographic method and terminal
CN110930324A (en) * 2019-11-12 2020-03-27 上海航天控制技术研究所 Fuzzy star map restoration method
CN111340745A (en) * 2020-03-27 2020-06-26 成都安易迅科技有限公司 Image generation method and device, storage medium and electronic equipment
CN112422269A (en) * 2020-11-10 2021-02-26 中国科学院大学 Combined chaotic pseudo-random number generator and digital image encryption method thereof
CN113240578A (en) * 2021-05-13 2021-08-10 北京达佳互联信息技术有限公司 Image special effect generation method and device, electronic equipment and storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
TWI764898B (en) * 2016-08-03 2022-05-21 日商新力股份有限公司 Information processing device, information processing method and program
CN108090876B (en) * 2016-11-23 2020-09-04 北京金山云网络技术有限公司 Image processing method and device
CN111311532B (en) * 2020-03-26 2022-11-11 深圳市商汤科技有限公司 Image processing method and device, electronic device and storage medium
CN113284063A (en) * 2021-05-24 2021-08-20 维沃移动通信有限公司 Image processing method, image processing apparatus, electronic device, and readable storage medium
CN114363519A (en) * 2022-01-10 2022-04-15 北京字跳网络技术有限公司 Image processing method and device and electronic equipment

Non-Patent Citations (1)

Title
FAN HONG ET AL.: "Photoshop Graphic Design Example Tutorial (Chinese Edition)", 30 June 2021, Huazhong University of Science and Technology Press, pages 153-157 *

Also Published As

Publication number Publication date
WO2023131236A1 (en) 2023-07-13

Similar Documents

Publication Publication Date Title
US11170291B2 (en) Rotating data for neural network computations
US10438117B1 (en) Computing convolutions using a neural network processor
JP6275260B2 (en) A method for processing an input low resolution (LR) image into an output high resolution (HR) image
US9569822B2 (en) Removing noise from an image via efficient patch distance computations
CN108573269B (en) Image feature point matching method, matching device, electronic device and storage medium
CN112997479B (en) Method, system and computer readable medium for processing images across a phase jump connection
WO2014166377A1 (en) Image interest point detection method and device
Yang et al. Designing display pixel layouts for under-panel cameras
CN114363519A (en) Image processing method and device and electronic equipment
CN107578375B (en) Image processing method and device
CN110471607B (en) Handwriting display method, handwriting reading equipment and computer storage medium
CN112419372A (en) Image processing method, image processing device, electronic equipment and storage medium
US9319666B1 (en) Detecting control points for camera calibration
US10748248B2 (en) Image down-scaling with pixel sets selected via blue noise sampling
CN110689061A (en) Image processing method, device and system based on alignment feature pyramid network
CN111882588B (en) Image block registration method and related product
CN112070854B (en) Image generation method, device, equipment and storage medium
US20170031571A1 (en) Area-Dependent Image Enhancement
US10460189B2 (en) Method and apparatus for determining summation of pixel characteristics for rectangular region of digital image avoiding non-aligned loads using multiple copies of input data
CN115619678A (en) Image deformation correction method and device, computer equipment and storage medium
JP2021108222A (en) Image processing device, image processing method, and image processing program
CN113014991A (en) Method and device for video rotation and computer readable medium
KR102436197B1 (en) Method for detecting objects from image
US20230071368A1 (en) Image processing apparatus including pre-processor and neural network processor and operating method thereof
US20230334820A1 (en) Image processing apparatus, image processing method, and non-transitory computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination