CN110717969A - Shadow generation method and device

Info

Publication number: CN110717969A
Application number: CN201810771391.5A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 郑行涛
Assignee (original and current): Alibaba Group Holding Ltd
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/50 - Lighting effects
    • G06T15/60 - Shadow generation
    • G06T15/10 - Geometric effects
    • G06T15/20 - Perspective computation
    • G06T15/205 - Image-based rendering


Abstract

The application discloses a shadow generation method and a shadow generation device. The method comprises the following steps: determining the contour line of a target object in a target image; performing gray level gradient processing and affine transformation processing based on the first area image to obtain a shadow area image, where the first area image is the area image within the contour line in the target image and the affine transformation processing is performed based on the positional relationship of reference points before and after the affine transformation; and performing image synthesis processing based on the shadow area image and the first area image to generate a shadow image of the target image. The method determines the contour line of the target object in a two-dimensional image and performs gray level gradient processing and affine transformation processing on the area image within the contour line to obtain the shadow area image. Compared with prior-art schemes that obtain the spatial shadow of a target object by constructing a three-dimensional model of it, the method reduces the computing resources required in the shadow generation process, improves shadow generation efficiency, and improves the accuracy of the generated shadow.

Description

Shadow generation method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a shadow generation method and apparatus.
Background
Shadows are important for rendering realistic scenes: they reflect the mutual occlusion relationships between objects in space as well as geometric information about the occluding objects and the receiving surface, and real-time shadow rendering greatly increases the realism of a rendered scene.
At present, shadow generation schemes rely on the depth information of an object: a three-dimensional model is constructed based on the depth information, and the real shadow is rendered from it. In a two-dimensional image, however, spatial information is only implicit in the pixels, so accurate depth information is difficult to extract and an accurate shadow image is difficult to obtain.
Disclosure of Invention
The embodiments of this specification provide a shadow generation method to solve the problems that existing shadow generation schemes require a large amount of computing resources and produce shadows of low accuracy.
An embodiment of the present specification further provides a shadow generation method, including:
determining the contour line of a target object in a target image;
performing gray level gradient processing and affine transformation processing on the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on the position relation of reference points before and after affine transformation;
and performing image synthesis processing based on the shadow region image and the first region image to generate a shadow image of the target image.
An embodiment of the present specification further provides a shadow generation method, including:
determining the contour line of a target object in a target image;
determining a parameter group corresponding to the type of the target object based on the contour line of the target object, wherein the parameter group comprises a light incidence angle;
performing gray level gradient processing and affine transformation processing on the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on a ray incidence angle;
and performing image synthesis processing based on the shadow region image and the first region image to generate a shadow image of the target image.
An embodiment of the present specification further provides a shadow generation apparatus, including:
the determining module is used for determining the contour line of the target object in the target image;
the first processing module is used for carrying out gray level gradient processing and affine transformation processing on the basis of the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on the position relation of reference points before and after affine transformation;
and the second processing module is used for carrying out image synthesis processing on the basis of the shadow region image and the first region image to generate a shadow image of the target image.
An embodiment of the present specification further provides a shadow generation apparatus, including:
the first determining module is used for determining the contour line of the target object in the target image;
a second determining module, configured to determine, based on a contour line of the target object, a parameter group corresponding to a type of the target object, where the parameter group includes a light incident angle;
the first processing module is used for carrying out gray level gradient processing and affine transformation processing on the basis of the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on a ray incidence angle;
and the second processing module is used for carrying out image synthesis processing on the basis of the shadow region image and the first region image to generate a shadow image of the target image.
An embodiment of the present specification further provides an electronic device, including:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
determining the contour line of a target object in a target image;
performing gray level gradient processing and affine transformation processing on the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on the position relation of reference points before and after affine transformation;
and performing image synthesis processing based on the shadow region image and the first region image to generate a shadow image of the target image.
The present specification embodiments also provide a computer-readable storage medium storing one or more programs that, when executed by an electronic device including a plurality of application programs, cause the electronic device to perform operations comprising:
determining the contour line of a target object in a target image;
performing gray level gradient processing and affine transformation processing on the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on the position relation of reference points before and after affine transformation;
and performing image synthesis processing based on the shadow region image and the first region image to generate a shadow image of the target image.
An embodiment of the present specification further provides an electronic device, including:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
determining the contour line of a target object in a target image;
determining a parameter group corresponding to the type of the target object based on the contour line of the target object, wherein the parameter group comprises a light incidence angle;
performing gray level gradient processing and affine transformation processing on the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on a ray incidence angle;
and performing image synthesis processing based on the shadow region image and the first region image to generate a shadow image of the target image.
The present specification embodiments also provide a computer-readable storage medium storing one or more programs that, when executed by an electronic device including a plurality of application programs, cause the electronic device to perform operations comprising:
determining the contour line of a target object in a target image;
determining a parameter group corresponding to the type of the target object based on the contour line of the target object, wherein the parameter group comprises a light incidence angle;
performing gray level gradient processing and affine transformation processing on the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on a ray incidence angle;
and performing image synthesis processing based on the shadow region image and the first region image to generate a shadow image of the target image.
The at least one technical solution adopted by the embodiments of this specification can achieve the following beneficial effects:
the contour line of the target object in the target image is determined so as to determine the first area image of the target object; gray level gradient processing and affine transformation processing are then performed based on the first area image to obtain the shadow area image corresponding to the first area image; and the shadow area image and the first area image are combined to obtain the shadow image of the target image. Compared with prior-art schemes that obtain the spatial shadow of the target object by constructing a three-dimensional model of it, this, on the one hand, avoids building a three-dimensional model of the target object, reducing the computing resources required in the shadow generation process and improving shadow generation efficiency; on the other hand, it avoids extracting depth information that is difficult to extract, improving the accuracy of the generated shadow.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flow chart of a shadow generation method provided in embodiment 1 of the present specification;
fig. 2 is a schematic flow chart of a first implementation manner of step 14 provided in embodiment 1 of the present specification;
fig. 3 is a schematic flow chart of an implementation manner of step 22 provided in embodiment 1 of the present specification;
fig. 4 is a schematic flowchart of a second implementation manner of step 14 provided in embodiment 1 of the present specification;
fig. 5 is a schematic flow chart of an implementation manner of step 42 provided in embodiment 1 of the present specification;
fig. 6 is a schematic diagram of a positional relationship between reference points before and after a first affine transformation provided in embodiment 1 of the present specification;
fig. 7 is a schematic diagram of a positional relationship between reference points before and after a second affine transformation provided in embodiment 1 of the present specification;
fig. 8 is a schematic flow chart of a shadow generation method provided in embodiment 2 of the present specification;
fig. 9 is a schematic structural diagram of a shadow generating apparatus provided in embodiment 3 of the present specification;
fig. 10 is a schematic structural diagram of a shadow generating device provided in embodiment 4 of the present specification;
fig. 11 is a schematic structural diagram of an electronic device provided in embodiment 5 of this specification.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As stated in the background section, in the prior art, constructing the shadow image of a target object requires building a three-dimensional model of the target object from its depth information and then rendering the real shadow of the target object. For a two-dimensional image, however, the depth information of the target object is difficult to extract, so enormous computing resources are required to build the three-dimensional model; and, because the depth information is difficult to extract, the accuracy of the finally rendered shadow cannot be guaranteed.
Based on the above, the present application provides a shadow generation method in which the contour line of the target object in a two-dimensional image is determined, gray level gradient processing and affine transformation processing are performed based on the area image within the contour line to obtain a shadow area image, and the shadow area image is merged into the target image to obtain the shadow image of the target image. Compared with the prior art, this reduces the computing resources required in the shadow generation process, improves shadow generation efficiency, and improves the accuracy of the generated shadow.
The following exemplifies application scenarios of the present application.
First application scenario:
The user starts an application client installed on the terminal device, imports an image to be processed (as the target image), and sets related parameters including the light incidence angle, blur radius, and so on. The application client then extracts the contour line of the target object, performs gray level gradient processing and affine transformation processing to obtain the shadow area image of the target image, and performs image synthesis processing to obtain a shadow image containing the shadow area image.
Second application scenario:
The user starts an application client installed on the terminal device and imports an image to be processed (as the target image). The application client then extracts the contour line of the target object and determines the type of the target object based on the contour line; further, it retrieves from a database one or more sets of related parameters corresponding to that type, including the light incidence angle, blur radius, and so on. Gray level gradient processing and affine transformation processing are then performed based on these parameters to obtain the shadow area image of the target image, and image synthesis processing is performed to obtain a shadow image containing the shadow area image.
In either scenario, to reduce the computing resources occupied on the terminal device, the application client may send the relevant data to an application server, which then performs one or more of the steps of contour matting, gray level gradient processing, affine transformation processing, and so on.
The terminal device may be a PC or a mobile terminal. A mobile terminal, or mobile communication terminal, is a computing device that can be used while moving; broadly this includes mobile phones, notebooks, tablet computers, POS machines and even vehicle-mounted computers, but it most often refers to mobile phones, or to smartphones and tablets with multiple application functions.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Example 1
Fig. 1 is a schematic flowchart of a shadow generation method provided in an embodiment of the present specification, and referring to fig. 1, the method may be executed by an application client, and specifically includes the following steps:
step 12, determining the contour line of the target object in the target image;
It should be noted that a first implementation of step 12 may be:
after the user imports the target image and clicks the 'start' option, the application client automatically extracts the edge pixels of the target object based on a preset matting algorithm to obtain the contour line.
The preset matting algorithm can be an existing matting algorithm such as Poisson Matting, Bayesian Matting or Closed-Form Matting. Since these are mature prior art, a description of the matting principle is omitted here.
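For illustration only (the patent specifies no implementation), this contour extraction step can be sketched with OpenCV, where simple global thresholding stands in for the matting algorithms named above; the function name and the threshold value are assumptions:

```python
import cv2

def extract_contour(image_path):
    """Approximate step 12: extract the target object's contour line.

    Thresholding is a crude stand-in for Poisson/Bayesian/Closed-Form
    Matting; the threshold of 240 assumes a near-white background and
    is purely illustrative.
    """
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Treat the largest external contour as the target object.
    return max(contours, key=cv2.contourArea)
```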
A second implementation of step 12 may be:
after the user imports the target image, a third-party application program is called, and the edge pixels of the target object are extracted by the third-party application program to obtain the contour line.
The third-party application program may be, for example, Photoshop (PS), Meitu, and the like.
Optionally, after determining the contour line of the target object, the method may further include an optimization step, which may specifically be, for example:
highlighting the contour line, for example displaying it in bold or in a predetermined color;
if an editing instruction input by the user is received, for example moving a portion of the contour line by a certain distance, adjusting the contour line accordingly.
Step 14, carrying out gray level gradient processing and affine transformation processing on the basis of the first area image to obtain a shadow area image;
the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on the position relation of reference points before and after affine transformation;
it should be noted that, with reference to fig. 2, a first implementation manner of step 14 may be:
step 22, performing gray level gradient processing based on the first area image to obtain a gray level gradient image;
and 24, carrying out affine transformation processing on the gray gradient image based on the position relation of the reference points before and after affine transformation to obtain a shadow area image.
With reference to fig. 3, one implementation of step 22 may be:
step 32, constructing a second layer, and constructing a second area image on the second layer;
The first layer is the layer where the first area image is located, and the second area image has the same image data as the first area image; that is, the first area image is copied from the first layer to the second layer, and this copy, distinct from the first area image on the first layer, is referred to here as the second area image.
Step 34, performing binarization processing on the second area image to obtain a third area image;
The binarization processing may specifically be:
adjusting the gray value of the pixel points in the second area image to 255 to obtain a black second area image (in this description, larger gray values are treated as darker); to distinguish it from the second area image of step 32, this black image is referred to as the third area image.
Optionally, to reduce the data processing amount, binarization processing may also be performed on the region outside the second area image, for example by adjusting the gray value of the pixel points outside the second area image to 0.
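A minimal sketch of this binarization, following the document's convention that 255 denotes black object pixels and 0 denotes the white background; the mask argument is an assumed input:

```python
import numpy as np

def binarize(object_mask):
    """Produce the third area image: object pixels set to 255
    (black in this document's convention), all others to 0."""
    return np.where(object_mask, 255, 0).astype(np.uint8)
```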
And step 36, performing gray level gradient processing on the third area image to obtain a gray level gradient image.
It should be noted that, one implementation manner of step 36 may be:
determining the relative position of the pixel point and a preselected reference point based on the position of the pixel point in the third area image; and adjusting the gray value of the pixel point based on the relative position of the pixel point and the reference point.
The preselected reference point can be any point inside or outside the third area image; here, the midpoint of the bottom edge of the circumscribed rectangle of the third area image is preferred. Taking this point as reference, the gray value of each pixel point is adjusted according to the rule that pixel points farther from the point have smaller gray values, forming a shadow gradient from dark at the bottom of the target object to light at the top.
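This gradient rule can be sketched as follows, assuming a linear falloff with distance from the bottom-edge midpoint (the falloff law is not specified in the text, and all names are illustrative):

```python
import cv2
import numpy as np

def gray_gradient(third_area):
    """Fade the third area image from dark (bottom) to light (top).

    Reference point: midpoint of the bottom edge of the circumscribed
    rectangle. Following the document's convention, larger gray values
    are darker, so values shrink linearly with distance from the point.
    """
    ys, xs = np.nonzero(third_area)
    pts = np.column_stack([xs, ys]).astype(np.int32)
    x0, y0, w, h = cv2.boundingRect(pts)
    ref = np.array([x0 + w / 2.0, y0 + h])      # bottom-edge midpoint
    dist = np.linalg.norm(pts - ref, axis=1)
    falloff = 1.0 - dist / (dist.max() + 1e-6)  # farther -> smaller value
    out = np.zeros_like(third_area, dtype=np.float32)
    out[ys, xs] = 255.0 * falloff
    return out.astype(np.uint8)
```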
With reference to fig. 4, a second implementation manner of step 14 may be:
step 42, carrying out affine transformation processing based on the first area image to obtain an affine transformation image;
and 44, carrying out gray level gradient processing on the affine transformation image to obtain a shadow area image.
With reference to fig. 5, one implementation of step 42 may be:
step 52, constructing a second layer, and constructing a second area image on the second layer;
The first layer is the layer where the first area image is located, and the second area image has the same image data as the first area image; that is, the first area image is copied from the first layer to the second layer, and this copy, distinct from the first area image on the first layer, is referred to here as the second area image.
Step 54, determining an affine transformation matrix based on the position relation of the reference points before and after the affine transformation;
before step 54 is executed, the method further includes: determining the position relation of the reference points before and after the affine transformation;
with reference to fig. 6, a first implementation of this step may be:
the reference points before the affine transformation are reference points preselected on a first plane, where the first plane is the affine plane where the second area image is located, for example the lower left corner a1, the lower right corner a2 and the top edge midpoint a3 of the circumscribed rectangle of the target object; the reference points after the affine transformation are the reference points a1' (corresponding to a1), a2' (corresponding to a2) and a3' (corresponding to a3) obtained by projectively transforming the reference points before the affine transformation onto a second plane, where the second plane is a preset projective transformation plane.
Determining the incident angle of the light rays and the coordinates of the reference point before the affine transformation;
determining the coordinates of the reference point after the affine transformation based on the unit vector corresponding to the incident angle of the light ray and the coordinates of the reference point before the affine transformation;
It is not difficult to see that, based on the coordinates of a1, a2 and a3 and the unit vector light = (x, y, z) corresponding to the light incidence angle, the coordinates of a1', a2' and a3' can be calculated.
Accordingly, a first implementation of step 54 may be:
and determining an affine transformation matrix based on the coordinates of the reference point before the affine transformation and the coordinates of the reference point after the affine transformation.
The following describes, with reference to fig. 6, an exemplary first implementation of the "determining the positional relationship of the reference points before and after the affine transformation" and step 54:
First, assume that the reference points preselected on the first plane are a1 = (x1, y1, z1), a2 = (x2, y2, z2) and a3 = (x3, y3, z3), and that the light is incident at 45 degrees to the x, y and z directions, so that the unit vector corresponding to the light incidence angle is light = (x, y, z), with x ∈ [-1, 1], y ∈ [0, 1], z ∈ [0, 1].
Secondly, the coordinates of the reference points a1', a2' and a3' corresponding to a1, a2 and a3 after the affine transformation are determined based on the unit vector light. Since a1 and a1' are the same reference point, and a2 and a2' are the same reference point, the coordinate of a1' is (x1, y1, z1), written (x1', y1', z1') to distinguish it from a1, and the coordinate of a2' is (x2, y2, z2), written (x2', y2', z2'). The coordinates of a3' are calculated as follows:
Step S1: for convenience of calculation, a coordinate system transformation is performed first, that is:
x = x'
y = -y'
z = -z'
Step S2: assume the coordinate of a3' is (x0, y0, z0); since a3' lies on the projection plane, taken below as the plane Y = 0, y0 = 0.
The equation of the line passing through point a3 in the direction of light is then:
\[
\frac{X - x_3}{x} = \frac{Y - y_3}{y} = \frac{Z - z_3}{z}
\]
Intersecting this line with the plane Y = 0 gives the coordinates of a3':
\[
x_0 = x_3 - \frac{y_3}{y}\,x, \qquad y_0 = 0, \qquad z_0 = z_3 - \frac{y_3}{y}\,z
\]
Finally, the obtained coordinates of a1-a3 and a1'-a3' are substituted into the following equation:
\[
\begin{pmatrix} x_i' \\ y_i' \end{pmatrix}
=
\begin{pmatrix} m_{00} & m_{01} & m_{02} \\ m_{10} & m_{11} & m_{12} \end{pmatrix}
\begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix},
\qquad i = 1, 2, 3
\]
which is solved for the affine transformation parameters m00, m01, m02, m10, m11 and m12, giving the specific affine transformation matrix.
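The two computations just described can be sketched as follows: projecting a3 along the light direction onto the plane Y = 0, then solving the six affine parameters from the three point pairs. The choice of projection plane and the use of 2D image coordinates in the solve follow one plausible reading of the garbled formulas above:

```python
import numpy as np

def project_to_plane_y0(p, light):
    """Intersect the line through point p with direction `light`
    with the plane Y = 0, as in step S2 above (assumes light[1] != 0)."""
    p, light = np.asarray(p, float), np.asarray(light, float)
    t = -p[1] / light[1]
    return p + t * light

def affine_from_pairs(src, dst):
    """Solve m00..m12 from three 2D point correspondences.

    src, dst: arrays of shape (3, 2) holding a1-a3 and a1'-a3' in 2D
    image coordinates. Each pair yields two linear equations, so the
    6x6 system determines the affine matrix exactly.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0]); b.append(xp)
        A.append([0, 0, 0, x, y, 1]); b.append(yp)
    m = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return m.reshape(2, 3)   # [[m00, m01, m02], [m10, m11, m12]]
```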
Optionally, the method further includes: the user editing step may specifically be:
acquiring an adjusting instruction of a light incidence angle, wherein the adjusting instruction carries the adjusted light incidence angle;
and adjusting the coordinates of the reference point after the affine transformation based on the unit vector corresponding to the adjusted light incidence angle and the coordinates of the reference point before the affine transformation.
With reference to fig. 7, a second implementation manner of the step of determining the positional relationship of the reference points before and after the affine transformation may be:
determining the coordinates of a first set of reference points preselected on a first plane (for example, a1-a3), where the first plane is the affine plane where the second area image is located; and determining the coordinates of a second set of reference points preselected on a second plane (for example, a1'-a3'), where the second plane is a preset projective transformation plane;
the first group of reference points and the second group of reference points are in one-to-one correspondence;
accordingly, a second implementation of step 54 may be:
determining an affine transformation matrix based on the coordinates of the first set of reference points and the coordinates of the second set of reference points.
The following describes, with reference to fig. 7, an exemplary second implementation of the "determining the positional relationship of the reference points before and after the affine transformation" and step 54:
Step S1: determine the coordinates of the reference points a1-a3 selected by the user on the first plane and of the reference points a1'-a3' selected on the second plane;
where a1 is written (x1, y1, z1), a2 is written (x2, y2, z2), a3 is written (x3, y3, z3), a1' is written (x1', y1', z1'), a2' is written (x2', y2', z2') and a3' is written (x3', y3', z3').
Step S2, substituting the coordinates of the selected 6 reference points into the following formula:
Figure BDA0001730344820000121
solving for affine transformation parameter m00、m01、m02、m10、m11And m12And obtaining a specific affine transformation matrix.
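When all six reference points are picked directly in 2D image coordinates, as in this second implementation, OpenCV's built-in solver performs the same computation; the coordinate values below are illustrative only:

```python
import cv2
import numpy as np

src = np.float32([[120, 400], [360, 400], [240, 80]])   # a1, a2, a3
dst = np.float32([[120, 400], [360, 400], [420, 260]])  # a1', a2', a3'
M = cv2.getAffineTransform(src, dst)  # 2x3 matrix [[m00,m01,m02],[m10,m11,m12]]
```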
And 56, carrying out affine transformation processing on the second area image based on the affine transformation matrix to obtain an affine transformation image.
It is understood that, based on the affine transformation matrix obtained in step 54, each point in the second area image or only the points on the contour line are subjected to affine transformation, so that an affine transformation image under illumination can be obtained.
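Applying the solved matrix to the whole second area image can be sketched as follows; cv2.warpAffine is an assumed choice, since the patent names no library:

```python
import cv2

def apply_affine(second_area, M):
    """Warp the second area image with the 2x3 matrix from step 54.

    The output canvas is kept the same size as the input here.
    """
    h, w = second_area.shape[:2]
    return cv2.warpAffine(second_area, M, (w, h))
```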
With regard to the above two implementations of step 14, it is understood that the order of execution of the gray level gradient processing and the affine transformation processing is not limited. Moreover, step 24 (the affine transformation step) in the first implementation is similar to step 42 in the second implementation, and step 44 (the gray level gradient step) in the second implementation is similar to step 22 in the first implementation; therefore, further explanation of steps 24 and 44 is omitted here.
Optionally, before performing step 16, the method further includes: the fuzzy processing step may specifically be:
acquiring a fuzzy radius setting instruction, wherein the fuzzy radius setting instruction carries a fuzzy radius;
and carrying out blurring processing on the shadow area image based on the blurring radius.
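A minimal sketch of this blurring step, assuming Gaussian blur (the patent only specifies a blur radius, not the blur kernel):

```python
import cv2

def blur_shadow(shadow_area, blur_radius):
    """Blur the shadow area image with the radius carried by the
    setting instruction. Gaussian blur is an assumption; the kernel
    size must be odd, hence 2r + 1."""
    k = 2 * int(blur_radius) + 1
    return cv2.GaussianBlur(shadow_area, (k, k), 0)
```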
Further, still include: the fuzzy adjustment step may specifically be:
acquiring an adjusting instruction of the fuzzy radius, wherein the adjusting instruction carries the adjusted fuzzy radius;
and adjusting the shadow area image after the blurring processing based on the adjusted blurring radius.
The blurring processing step and the blurring adjustment step process the generated shadow area image so that it better matches the user's aesthetic preferences.
And step 16, performing image synthesis processing based on the shadow region image and the first region image to generate a shadow image of the target image.
The first area image is located on a first image layer, the shadow area image is located on a second image layer, and the display priority of the first image layer is greater than that of the second image layer;
it should be noted that, one implementation manner of step 16 may be:
determining a first reference point of the shadow region image and a second reference point of the first region image, the first reference point and the second reference point corresponding; and merging the first image layer and the second image layer based on the first reference point and the second reference point.
Referring to fig. 6, the first reference point may be the midpoint of the bottom edge of the circumscribed rectangle on the second plane (i.e., the midpoint of the line segment between a1' and a2'), and the second reference point may be the midpoint of the bottom edge of the circumscribed rectangle on the first plane (i.e., the midpoint of the line segment between a1 and a2); thereby, the shadow area image and the first area image can be accurately aligned when the first and second image layers are merged.
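A sketch of this merge, aligning the two reference points and honoring the first layer's display priority; single-channel layers are assumed, the wrap-around of np.roll is ignored for simplicity, and all names are illustrative:

```python
import numpy as np

def merge_layers(first_layer, second_layer, ref1, ref2):
    """Composite the shadow layer (second) under the object layer (first).

    ref1: first reference point on the shadow layer; ref2: second
    reference point on the object layer; both as (row, col). The shadow
    layer is shifted so the points coincide, then drawn only where the
    first layer is empty (0 = white/empty in this document's convention).
    """
    dr, dc = np.subtract(ref2, ref1).astype(int)
    shifted = np.roll(np.roll(second_layer, dr, axis=0), dc, axis=1)
    out = first_layer.copy()
    empty = first_layer == 0
    out[empty] = shifted[empty]
    return out
```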
Optionally, to improve the effect of the combined shadow image and avoid the influence of the background portion of the target image, before step 16, the method further includes:
determining a region image corresponding to the shadow region image in the target image based on the first reference point; and adjusting the gray value of the pixel point in the area image corresponding to the shadow area image to be 0. That is, the area image corresponding to the shadow area image in the target image on the first image layer is adjusted to be white, so that the shadow area image is not affected by the pixel points of the background area after the images are combined.
As can be seen, in this embodiment, the contour line of the target object in the target image is determined so as to determine the first area image of the target object; then, gray level gradient processing and affine transformation processing are performed based on the first area image to obtain the shadow area image corresponding to the first area image; and the shadow area image and the first area image are combined to obtain the shadow image of the target image. Compared with prior-art schemes that obtain the spatial shadow of the target object by constructing a three-dimensional model of it, this, on the one hand, avoids building a three-dimensional model of the target object, reducing the computing resources required in the shadow generation process and improving shadow generation efficiency; on the other hand, it avoids extracting depth information that is difficult to extract, improving the accuracy of the generated shadow.
Example 2
Fig. 8 is a flowchart of a shadow generation method provided in an embodiment of the present specification, and referring to fig. 8, the method may be executed by an application client, and specifically includes the following steps:
step 82, analyzing the shape of the object to obtain the relation between the characteristics of the object image and the type of the object;
it should be noted that, one implementation manner of step 82 may be:
step S1, model design and data acquisition
Model design: a vgg-16 network is used; its convolutional layers are reused and its fully connected layers are redesigned.
Here, vgg-16, also called OxfordNet, is a convolutional neural network architecture developed by the Oxford Visual Geometry Group.
Data acquisition: crawl images of various objects and pick out a predetermined number of images suitable for generating shadows, for example 1000. The object images cover several main categories of items, such as shoe-shaped, dress-shaped, bottle-shaped, jar-shaped and quasi-circular objects, and each category has a certain number of training samples and a certain number of test samples.
Step S2, model training
Since a single run of vgg-16 is very expensive, running the entire network for each batch consumes significant time and space resources. Therefore, to improve training efficiency, the application provides a new training procedure, as follows:
First, all object images (including training samples and test samples) are fed in order through the vgg-16 convolutional layers to obtain ordered bottleneck features, which are stored offline.
Then, a new model is built whose structure is the combination of the two fully connected layers described earlier (one 256-dimensional, one with as many dimensions as there are categories) and the dropout layer; its input is the previously saved bottleneck features and its corresponding output is the type label.
When training begins, the previously stored offline bottleneck features are imported in order; because they were recorded in order, the labels can be obtained directly by one-hot encoding the image labels (1 for the correct type, 0 for the other types). Training is also much faster because the model is small. The effect of this training is equivalent to training the entire vgg-16 model.
The saved offline data is imported for training, with the loss function defined as multi-class cross entropy (categorical cross entropy). The optimizer is Adam (adaptive moment estimation), whose advantage is that after bias correction the learning rate of each iteration stays within a certain range, making the parameter updates relatively smooth. Gradient backpropagation is computed with 32 images per batch; after 70 iterations, accuracy on the data set can reach 98% and accuracy on the validation set can reach 90%. The fully connected layer model is saved offline and used at prediction time.
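The procedure above can be sketched in Keras; the text specifies vgg-16, bottleneck features, a 256-dimensional dense layer plus dropout, one-hot labels, categorical cross entropy, Adam, batch size 32 and 70 iterations, but not the framework, so the framework choice and the dummy data (included only to make the sketch runnable) are assumptions:

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # e.g. shoe, dress, bottle, jar, quasi-circular

# Dummy stand-ins for the crawled object images and their labels.
images = np.random.rand(8, 224, 224, 3).astype(np.float32)
labels = np.eye(NUM_CLASSES)[np.random.randint(0, NUM_CLASSES, 8)]

# Step 1: run every image once through the convolutional layers and
# store the ordered bottleneck features offline.
conv_base = VGG16(weights="imagenet", include_top=False,
                  input_shape=(224, 224, 3))
np.save("bottleneck.npy", conv_base.predict(images))

# Step 2: train only the small fully connected head on those features.
features = np.load("bottleneck.npy")
head = models.Sequential([
    layers.Flatten(input_shape=features.shape[1:]),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),                       # the "discarding" layer
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
head.compile(optimizer="adam",                 # Adam, as in the text
             loss="categorical_crossentropy",  # multi-class cross entropy
             metrics=["accuracy"])
head.fit(features, labels, batch_size=32, epochs=70)
```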
To further enable the intelligentization of shadow generation, step 82 further includes a parameter set setting step, which may specifically be:
for each object type, predefining one or more parameter sets, each including the light incidence angle, blur radius, and so on.
For example, for a bottle image, the classification model outputs that it belongs to the bottle-shaped category, and the bottle-shaped category may be defined with a parameter set in which light is incident from a direction at 45 degrees to x, y and z and the blur radius is 15. This parameter set is then fed into the previous embodiment or the present embodiment to obtain a bottle image with a shadow.
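A minimal sketch of such a predefined parameter table; the bottle entry mirrors the example just given, while the second entry and all names are placeholders:

```python
import numpy as np

# Normalized direction vector for light incident "from a direction of
# 45 degrees in x, y and z" (the phrasing is the patent's; the
# normalization is an assumption).
light_45 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)

PARAMETER_SETS = {
    "bottle": {"light": light_45, "blur_radius": 15},  # from the example
    "shoe":   {"light": light_45, "blur_radius": 9},   # placeholder
}

def parameters_for(object_type):
    """Look up the predefined parameter set for a predicted type."""
    return PARAMETER_SETS[object_type]
```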
Step 84, determining the contour line of the target object in the target image;
step 86, determining a parameter group corresponding to the type of the target object based on the contour line of the target object;
wherein the parameter set comprises a ray incidence angle;
it is understood that, based on the model trained in step 82, the contour line or the region image in the contour line of the target object is used as the input of the model, i.e. the type of the target object output by the model is obtained, and the parameter set predefined by the type is determined.
Step 88, carrying out gray level gradient processing and affine transformation processing on the basis of the first area image to obtain a shadow area image;
the first region image is a region image in a contour line in the target image, and the affine transformation processing is performed based on a ray incidence angle, and includes: determining the position relation of the reference points before and after the affine transformation based on the incident angle of the light rays, and performing the affine transformation processing based on the position relation of the reference points before and after the affine transformation.
And step 810, performing image synthesis processing based on the shadow region image and the first region image to obtain a shadow image of the target image.
It should be noted that, since steps 88 to 810 correspond to the "step of determining the position relationship of the reference points before and after the affine transformation", steps 14 and 16 described in the previous embodiment, and the implementation manners thereof are also similar, the descriptions of steps 88 to 810 are omitted here.
Optionally, the method further includes:
and carrying out blurring processing on the shadow area image based on the blurring radius.
Optionally, the method further includes:
acquiring an adjusting instruction of the fuzzy radius, wherein the adjusting instruction carries the adjusted fuzzy radius;
and adjusting the shadow area image after the blurring processing based on the adjusted blurring radius.
As can be seen, in the embodiment, the classification model is constructed to determine the type of the target object and the predefined parameter group corresponding to the type, and the shadow region image of the target object is generated based on the predefined parameter group, so as to obtain the shadow image of the target image by merging. Compared with the prior art, the method can realize full automation and intellectualization of shadow image generation.
In addition, for simplicity of description, the above method embodiments are described as a series of combinations of actions, but those skilled in the art should understand that the present invention is not limited by the order of the actions described, as some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily required by the present invention.
Example 3
Fig. 9 is a schematic structural diagram of a shadow generating apparatus provided in embodiment 3 of this specification, and referring to fig. 9, the apparatus may specifically include: a determination module 91, a first processing module 92 and a second processing module 93, wherein:
a determining module 91, configured to determine a contour line of the target object in the target image;
a first processing module 92, configured to perform gray level gradient processing and affine transformation processing on the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on the position relation of reference points before and after affine transformation;
a second processing module 93, configured to perform image synthesis processing based on the shadow region image and the first region image, and generate a shadow image of the target image.
Optionally, the first processing module 92 is specifically configured to:
performing gray level gradient processing based on the first area image to obtain a gray level gradient image;
and carrying out affine transformation processing on the gray gradient image based on the position relation of the reference points before and after affine transformation to obtain a shadow area image.
The first area image is located in a first image layer;
optionally, the first processing module 92 is specifically configured to:
constructing a second area image on a second image layer, wherein the image data of the second area image is the same as that of the first area image;
carrying out binarization processing on the second area image to obtain a third area image;
and carrying out gray level gradient processing on the third area image to obtain a gray level gradient image.
Optionally, the first processing module 92 is specifically configured to:
and adjusting the gray value of the pixel point in the second area image to be 255.
Optionally, the first processing module 92 is specifically configured to:
determining the relative position of the pixel point and a preselected reference point based on the position of the pixel point in the third area image;
and adjusting the gray value of the pixel point based on the relative position of the pixel point and the reference point.
Optionally, the first processing module 92 is specifically configured to:
carrying out affine transformation processing on the basis of the first area image to obtain an affine transformation image;
and carrying out gray level gradient processing on the affine transformation image to obtain a shadow area image.
The first area image is located in a first image layer;
optionally, the first processing module 92 is specifically configured to:
constructing a second area image on a second image layer, wherein the image data of the second area image is the same as that of the first area image;
determining an affine transformation matrix based on the position relation of the reference points before and after the affine transformation;
and performing affine transformation processing on the second area image based on the affine transformation matrix to obtain an affine transformation image.
The reference point before affine transformation is a preselected reference point on a first plane, and the first plane is an affine plane where the second area image is located; the reference point after the affine transformation is a reference point obtained by projectively transforming the reference point before the affine transformation to a second plane, wherein the second plane is a preset projective transformation plane;
then, the apparatus further comprises: a first position determination module specifically configured to:
determining the incident angle of the light rays and the coordinates of the reference point before the affine transformation;
determining the coordinates of the reference point after the affine transformation based on the unit vector corresponding to the incident angle of the light ray and the coordinates of the reference point before the affine transformation;
accordingly, the first processing module 92 is specifically configured to:
and determining an affine transformation matrix based on the coordinates of the reference point before the affine transformation and the coordinates of the reference point after the affine transformation.
Optionally, the apparatus further comprises: the first position adjustment module is specifically configured to:
acquiring an adjusting instruction of a light incidence angle, wherein the adjusting instruction carries the adjusted light incidence angle;
and adjusting the coordinates of the reference point after the affine transformation based on the unit vector corresponding to the adjusted light incidence angle and the coordinates of the reference point before the affine transformation.
The optional apparatus further comprises: a second position determination module specifically configured to:
determining the coordinates of a first group of reference points preselected on a first plane, wherein the first plane is an affine plane where the second area image is located;
determining coordinates of a second set of reference points preselected on a second plane, the second plane being a preset projective transformation plane;
the first group of reference points and the second group of reference points are in one-to-one correspondence;
accordingly, the first processing module 92 is specifically configured to:
determining an affine transformation matrix based on the coordinates of the first set of reference points and the coordinates of the second set of reference points.
Optionally, the apparatus further comprises: the fuzzy processing module is specifically configured to:
acquiring a fuzzy radius setting instruction, wherein the fuzzy radius setting instruction carries a fuzzy radius;
and carrying out blurring processing on the shadow area image based on the blurring radius.
Optionally, the apparatus further comprises: the fuzzy adjustment module is specifically configured to:
acquiring an adjusting instruction of the fuzzy radius, wherein the adjusting instruction carries the adjusted fuzzy radius;
and adjusting the shadow area image after the blurring processing based on the adjusted blurring radius.
The first area image is located on a first image layer, the shadow area image is located on a second image layer, and the display priority of the first image layer is greater than that of the second image layer;
correspondingly, the second processing module 93 is specifically configured to:
determining a first reference point of the shadow region image and a second reference point of the first region image, the first reference point and the second reference point corresponding;
and merging the first image layer and the second image layer based on the first reference point and the second reference point.
Optionally, the apparatus further comprises: an optimization module specifically configured to:
determining a region image corresponding to the shadow region image in the target image based on the first reference point;
and adjusting the gray value of the pixel point in the area image corresponding to the shadow area image to be 0.
As can be seen, in this embodiment, the contour line of the target object in the target image is determined so as to determine the first area image of the target object; then, gray level gradient processing and affine transformation processing are performed based on the first area image to obtain the shadow area image corresponding to the first area image; and the shadow area image and the first area image are combined to obtain the shadow image of the target image. Compared with prior-art schemes that obtain the spatial shadow of the target object by constructing a three-dimensional model of it, this, on the one hand, avoids building a three-dimensional model of the target object, reducing the computing resources required in the shadow generation process and improving shadow generation efficiency; on the other hand, it avoids extracting depth information that is difficult to extract, improving the accuracy of the generated shadow.
Example 4
Fig. 10 is a schematic structural diagram of a shadow generating apparatus provided in embodiment 4 of this specification, and referring to fig. 10, the apparatus may specifically include: a first determination module 101, a second determination module 102, a first processing module 103, and a second processing module 104, wherein:
a first determining module 101, configured to determine a contour line of a target object in a target image;
a second determining module 102, configured to determine, based on a contour line of the target object, a parameter group corresponding to a type of the target object, where the parameter group includes a light incident angle;
the first processing module 103 is configured to perform gray level gradient processing and affine transformation processing on the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on a ray incidence angle;
a second processing module 104, configured to perform image synthesis processing based on the shadow region image and the first region image, and generate a shadow image of the target image.
Wherein the parameter group comprises the incident angle of the light;
optionally, the apparatus further comprises: the position adjusting module is specifically used for:
and determining the position relation of the reference points before and after the affine transformation based on the incident angle of the light rays.
Optionally, the parameter set further includes a blur radius;
then, the apparatus further comprises: the fuzzy processing module is specifically configured to:
and carrying out blurring processing on the shadow area image based on the blurring radius.
Optionally, the apparatus further comprises: the fuzzy adjustment module is specifically configured to:
acquiring an adjusting instruction of the fuzzy radius, wherein the adjusting instruction carries the adjusted fuzzy radius;
and adjusting the shadow area image after the blurring processing based on the adjusted blurring radius.
As can be seen, in the embodiment, the classification model is constructed to determine the type of the target object and the predefined parameter group corresponding to the type, and the shadow region image of the target object is generated based on the predefined parameter group, so as to obtain the shadow image of the target image by merging. Compared with the prior art, the method can realize full automation and intellectualization of shadow image generation.
In addition, as for the device embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to part of the description of the method embodiment.
It should be noted that, in the respective components of the apparatus of the present invention, the components therein are logically divided according to the functions to be implemented thereof, but the present invention is not limited thereto, and the respective components may be newly divided or combined as necessary.
Example 5
Fig. 11 is a schematic structural diagram of an electronic device provided in embodiment 5 of this specification, and referring to fig. 11, the electronic device includes a processor, an internal bus, a network interface, a memory, and a nonvolatile memory, and may also include hardware required by other services. The processor reads a corresponding computer program from the nonvolatile memory into the memory and then runs the computer program to form the shadow generating device on the logic level. Of course, besides the software implementation, the present application does not exclude other implementations, such as logic devices or a combination of software and hardware, and the like, that is, the execution subject of the following processing flow is not limited to each logic unit, and may also be hardware or logic devices.
The network interface, the processor and the memory may be interconnected by a bus system. The bus may be an ISA (Industry Standard Architecture) bus, a PCI (peripheral component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 11, but that does not indicate only one bus or one type of bus.
The memory is used for storing programs. In particular, the program may include program code comprising computer operating instructions. The memory may include both read-only memory and random access memory, and provides instructions and data to the processor. The Memory may include a Random-Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least 1 disk Memory.
The processor is used for executing the program stored in the memory and specifically executing:
determining the contour line of a target object in a target image;
performing gray level gradient processing and affine transformation processing on the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on the position relation of reference points before and after affine transformation;
and performing image synthesis processing based on the shadow region image and the first region image to generate a shadow image of the target image.
Alternatively, the first and second electrodes may be,
determining the contour line of a target object in a target image;
determining a parameter group corresponding to the type of the target object based on the contour line of the target object, wherein the parameter group comprises a light incidence angle;
performing gray level gradient processing and affine transformation processing on the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on a ray incidence angle;
and performing image synthesis processing based on the shadow region image and the first region image to generate a shadow image of the target image.
The method performed by the shadow generation apparatus or the Master node according to the embodiments shown in fig. 9 to 10 of the present application can be applied to or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.
The shadow generation apparatus may also perform the methods of fig. 1 to 5 and fig. 8, and implement the methods performed by the administrator node.
Based on the same inventive concept, the present application also provides a computer-readable storage medium storing one or more programs that, when executed by an electronic device including a plurality of application programs, cause the electronic device to perform the shadow generation method provided in embodiments 1-2.
The embodiments in this specification are described in a progressive manner; for the parts that are the same as or similar across embodiments, reference may be made from one embodiment to another, and each embodiment focuses on its differences from the others. In particular, since the system embodiment is substantially similar to the method embodiment, its description is relatively brief; for the relevant details, reference may be made to the corresponding parts of the description of the method embodiment.
The foregoing description is directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or advantageous.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description sets forth only examples of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within the scope of the claims of the present application.

Claims (20)

1. A shadow generation method, comprising:
determining the contour line of a target object in a target image;
performing gray level gradient processing and affine transformation processing on the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on the position relation of reference points before and after affine transformation;
and performing image synthesis processing based on the shadow area image and the first area image to generate a shadow image of the target image.
2. The method of claim 1, wherein performing gray level gradient processing and affine transformation processing based on the first area image to obtain the shadow area image comprises:
performing gray level gradient processing based on the first area image to obtain a gray level gradient image;
and performing affine transformation processing on the gray level gradient image based on the position relation of the reference points before and after the affine transformation to obtain the shadow area image.
3. The method according to claim 2, wherein the first area image is located in a first layer;
wherein performing gray level gradient processing based on the first area image to obtain the gray level gradient image comprises:
constructing a second area image on a second image layer, wherein the image data of the second area image is the same as that of the first area image;
carrying out binarization processing on the second area image to obtain a third area image;
and carrying out gray level gradient processing on the third area image to obtain a gray level gradient image.
4. The method according to claim 3, wherein performing gray level gradient processing on the third area image comprises:
determining the relative position of each pixel point with respect to a preselected reference point based on the position of the pixel point in the third area image;
and adjusting the gray value of the pixel point based on its relative position with respect to the reference point.
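As one illustration of claims 3 and 4, the gray level gradient can be realized by binarizing the copied area image and then scaling each pixel by its distance to the reference point. The linear falloff below is an assumed choice; the claims do not fix the adjustment function.

```python
# Sketch of claims 3-4, assuming NumPy; the linear distance falloff
# is one illustrative gradient, not the claimed one.
import numpy as np

def gray_level_gradient(second_area_image, ref_point):
    # Binarization (claim 3): object pixels become 255, the rest 0,
    # yielding the third area image.
    third = np.where(second_area_image > 0, 255.0, 0.0)

    # Claim 4: adjust each pixel's gray value according to its
    # relative position to the preselected reference point.
    h, w = third.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - ref_point[0], ys - ref_point[1])
    falloff = 1.0 - dist / max(dist.max(), 1.0)
    return (third * falloff).astype(np.uint8)
```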
5. The method of claim 1, wherein performing gray level gradient processing and affine transformation processing based on the first area image to obtain the shadow area image comprises:
carrying out affine transformation processing on the basis of the first area image to obtain an affine transformation image;
and carrying out gray level gradient processing on the affine transformation image to obtain a shadow area image.
6. The method according to claim 5, wherein the first area image is located in a first layer;
performing affine transformation processing based on the first region image to obtain an affine transformation image, including:
constructing a second area image on a second image layer, wherein the image data of the second area image is the same as that of the first area image;
determining an affine transformation matrix based on the position relation of the reference points before and after the affine transformation;
and performing affine transformation processing on the second area image based on the affine transformation matrix to obtain an affine transformation image.
7. The method according to claim 6, wherein the reference point before affine transformation is a reference point preselected on a first plane, the first plane being an affine plane on which the second area image is located; the reference point after the affine transformation is a reference point obtained by projectively transforming the reference point before the affine transformation to a second plane, wherein the second plane is a preset projective transformation plane;
before determining the affine transformation matrix based on the position relationship of the reference points before and after the affine transformation, the method further includes:
determining the incident angle of the light rays and the coordinates of the reference point before the affine transformation;
determining the coordinates of the reference point after the affine transformation based on the unit vector corresponding to the incident angle of the light ray and the coordinates of the reference point before the affine transformation;
wherein determining the affine transformation matrix based on the positional relationship of the reference points before and after the affine transformation comprises:
and determining an affine transformation matrix based on the coordinates of the reference point before the affine transformation and the coordinates of the reference point after the affine transformation.
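A sketch of how the affine transformation matrix of claims 6 and 7 might be derived: shift preselected reference points according to the unit vector of the light incidence angle, then solve for the 2x3 matrix. The specific corner points, and treating the angle as measured from the ground plane, are assumptions for illustration only.

```python
# Sketch of claims 6-7, assuming OpenCV/NumPy. Assumes non-grazing
# light (sin of the angle is nonzero); the reference points chosen
# here are illustrative.
import cv2
import numpy as np

def affine_matrix_from_light(angle_deg, h, w):
    # Reference points before transformation, preselected on the
    # affine plane of the second area image (two base corners plus
    # one top corner).
    src = np.float32([[0, h - 1], [w - 1, h - 1], [0, 0]])

    # Unit vector corresponding to the light incidence angle.
    a = np.radians(angle_deg)
    ux, uy = np.cos(a), np.sin(a)

    # Reference points after transformation: the base points map to
    # themselves; the top point is displaced horizontally by the run
    # of the light ray over the object's height (a simple skew).
    run = (h - 1) * ux / uy
    dst = np.float32([[0, h - 1], [w - 1, h - 1], [run, 0]])

    return cv2.getAffineTransform(src, dst)  # 2x3 affine matrix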
8. The method of claim 7, further comprising:
acquiring an adjustment instruction for the light incidence angle, wherein the adjustment instruction carries the adjusted light incidence angle;
and adjusting the coordinates of the reference point after the affine transformation based on the unit vector corresponding to the adjusted light incidence angle and the coordinates of the reference point before the affine transformation.
9. The method according to claim 6, wherein before determining the affine transformation matrix based on the positional relationship of the reference points before and after the affine transformation, further comprising:
determining the coordinates of a first group of reference points preselected on a first plane, wherein the first plane is an affine plane where the second area image is located;
determining coordinates of a second set of reference points preselected on a second plane, the second plane being a preset projective transformation plane;
the first group of reference points and the second group of reference points are in one-to-one correspondence;
wherein determining the affine transformation matrix based on the positional relationship of the reference points before and after the affine transformation comprises:
determining an affine transformation matrix based on the coordinates of the first set of reference points and the coordinates of the second set of reference points.
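When the two groups of reference points are chosen freely rather than derived from a light angle, as in claim 9, the affine matrix can be recovered by least squares. A possible NumPy sketch, in which the helper name is hypothetical:

```python
# Sketch of claim 9, assuming NumPy: solve for the 2x3 affine matrix
# from two corresponding point sets by least squares.
import numpy as np

def affine_from_point_pairs(first_pts, second_pts):
    # first_pts / second_pts: (N, 2) arrays of corresponding reference
    # points on the affine plane and the projective transformation
    # plane, with N >= 3 and in one-to-one order.
    src = np.asarray(first_pts, dtype=np.float64)
    dst = np.asarray(second_pts, dtype=np.float64)
    A = np.hstack([src, np.ones((len(src), 1))])  # rows of [x, y, 1]
    M, _, _, _ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T  # 2x3 matrix mapping (x, y, 1) -> (x', y')
```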
10. The method of claim 1, further comprising:
acquiring a blur radius setting instruction, wherein the blur radius setting instruction carries a blur radius;
and performing blurring processing on the shadow area image based on the blur radius.
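A minimal sketch of the blurring step of claim 10, assuming a Gaussian blur as the blurring method (the claim does not fix one):

```python
# Sketch of claim 10, assuming OpenCV's Gaussian blur.
import cv2

def blur_shadow(shadow_area_image, blur_radius):
    # The Gaussian kernel size must be a positive odd integer, so the
    # radius carried by the setting instruction is mapped to 2r + 1.
    k = 2 * int(blur_radius) + 1
    return cv2.GaussianBlur(shadow_area_image, (k, k), 0)
```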
11. The method according to claim 1, further comprising, before performing the gray level gradient processing and the affine transformation processing based on the first area image:
determining the type of the target object and a parameter group corresponding to the type based on the contour line of the target object, wherein the parameter group comprises a light incidence angle;
and determining the position relation of the reference points before and after the affine transformation based on the incident angle of the light rays.
12. The method of claim 11, wherein the set of parameters further comprises a blur radius;
then, the method further comprises:
and performing blurring processing on the shadow area image based on the blur radius.
13. The method of claim 10 or 12, further comprising:
acquiring an adjustment instruction for the blur radius, wherein the adjustment instruction carries the adjusted blur radius;
and adjusting the blurred shadow area image based on the adjusted blur radius.
14. The method according to any one of claims 1-13, wherein the first area image is located in a first layer, the shadow area image is located in a second layer, and the first layer has a higher display priority than the second layer;
wherein performing image synthesis processing based on the shadow area image and the first area image to generate the shadow image of the target image comprises:
determining a first reference point of the shadow area image and a second reference point of the first area image, the first reference point corresponding to the second reference point;
and merging the first image layer and the second image layer based on the first reference point and the second reference point.
15. The method of claim 14, further comprising, prior to merging the first layer and the second layer:
determining an area image corresponding to the shadow area image in the target image based on the first reference point;
and adjusting the gray values of the pixel points in the area image corresponding to the shadow area image to 0.
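One way the layer merge of claims 14 and 15 could be realized, assuming grayscale layers aligned through their corresponding reference points. The helper below is hypothetical; it assumes integer offsets and that the shifted shadow fits inside the target image.

```python
# Sketch of claims 14-15, assuming NumPy grayscale layers.
import numpy as np

def merge_layers(background_gray, shadow_gray, object_gray,
                 ref_shadow, ref_object):
    # Claim 14: align the shadow layer to the object layer via the
    # pair of corresponding reference points.
    out = background_gray.copy()
    dx = ref_object[0] - ref_shadow[0]
    dy = ref_object[1] - ref_shadow[1]
    h, w = shadow_gray.shape
    roi = out[dy:dy + h, dx:dx + w]

    covered = shadow_gray > 0
    # Claim 15: first set the covered target pixels to gray value 0
    # (kept as an explicit step to mirror the claim), then write the
    # shadow values into the merged result.
    roi[covered] = 0
    roi[covered] = shadow_gray[covered]

    # The first layer holds the object and has the higher display
    # priority, so it is composited last.
    obj = object_gray > 0
    out[obj] = object_gray[obj]
    return out
```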
16. A shadow generation method, comprising:
determining the contour line of a target object in a target image;
determining a parameter group corresponding to the type of the target object based on the contour line of the target object, wherein the parameter group comprises a light incidence angle;
performing gray level gradient processing and affine transformation processing on the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on the light incidence angle;
and performing image synthesis processing based on the shadow area image and the first area image to generate a shadow image of the target image.
17. A shadow generation apparatus, comprising:
the determining module is used for determining the contour line of the target object in the target image;
the first processing module is used for carrying out gray level gradient processing and affine transformation processing on the basis of the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on the position relation of reference points before and after affine transformation;
and the second processing module is used for carrying out image synthesis processing on the basis of the shadow area image and the first area image to generate a shadow image of the target image.
18. A shadow generation apparatus, comprising:
the first determining module is used for determining the contour line of the target object in the target image;
a second determining module, configured to determine, based on a contour line of the target object, a parameter group corresponding to a type of the target object, where the parameter group includes a light incident angle;
the first processing module is used for carrying out gray level gradient processing and affine transformation processing on the basis of the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on the light incidence angle;
and the second processing module is used for carrying out image synthesis processing on the basis of the shadow area image and the first area image to generate a shadow image of the target image.
19. An electronic device, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
determining the contour line of a target object in a target image;
performing gray level gradient processing and affine transformation processing on the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on the position relation of reference points before and after affine transformation;
and performing image synthesis processing based on the shadow area image and the first area image to generate a shadow image of the target image.
20. An electronic device, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
determining the contour line of a target object in a target image;
determining a parameter group corresponding to the type of the target object based on the contour line of the target object, wherein the parameter group comprises a light incidence angle;
performing gray level gradient processing and affine transformation processing on the first area image to obtain a shadow area image; the first area image is an area image in a contour line in the target image, and the affine transformation processing is performed based on the light incidence angle;
and performing image synthesis processing based on the shadow area image and the first area image to generate a shadow image of the target image.
CN201810771391.5A 2018-07-13 2018-07-13 Shadow generation method and device Pending CN110717969A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810771391.5A CN110717969A (en) 2018-07-13 2018-07-13 Shadow generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810771391.5A CN110717969A (en) 2018-07-13 2018-07-13 Shadow generation method and device

Publications (1)

Publication Number Publication Date
CN110717969A true CN110717969A (en) 2020-01-21

Family

ID=69209297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810771391.5A Pending CN110717969A (en) 2018-07-13 2018-07-13 Shadow generation method and device

Country Status (1)

Country Link
CN (1) CN110717969A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07336529A (en) * 1994-06-02 1995-12-22 Dainippon Screen Mfg Co Ltd Shading method for image
JP2001052208A (en) * 1999-08-06 2001-02-23 Mixed Reality Systems Laboratory Inc Method and device for processing image and storage medium
JP2008305241A (en) * 2007-06-08 2008-12-18 Samii Kk Image generation device and image generation program
US8643678B1 (en) * 2010-12-22 2014-02-04 Google Inc. Shadow generation
CN107330964A (en) * 2017-07-24 2017-11-07 广东工业大学 A kind of display methods and system of complex three-dimensional object

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114626468A (en) * 2022-03-17 2022-06-14 小米汽车科技有限公司 Method and device for generating shadow in image, electronic equipment and storage medium
CN114626468B (en) * 2022-03-17 2024-02-09 小米汽车科技有限公司 Method, device, electronic equipment and storage medium for generating shadow in image
CN116483359A (en) * 2023-04-25 2023-07-25 成都赛力斯科技有限公司 New mimicry drawing method and device, electronic equipment and readable storage medium
CN116483359B (en) * 2023-04-25 2024-03-12 重庆赛力斯凤凰智创科技有限公司 New mimicry drawing method and device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN111161349B (en) Object posture estimation method, device and equipment
Xie et al. Multilevel cloud detection in remote sensing images based on deep learning
CN111027493B (en) Pedestrian detection method based on deep learning multi-network soft fusion
CN110084299B (en) Target detection method and device based on multi-head fusion attention
CN111583097A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107578439B (en) Method, device and equipment for generating target image
CN109977997B (en) Image target detection and segmentation method based on convolutional neural network rapid robustness
CN111950318A (en) Two-dimensional code image identification method and device and storage medium
CN111340745B (en) Image generation method and device, storage medium and electronic equipment
Shi et al. Robust foreground estimation via structured Gaussian scale mixture modeling
CN112907530B (en) Method and system for detecting disguised object based on grouped reverse attention
CN109348731A (en) A kind of method and device of images match
CN110910445B (en) Object size detection method, device, detection equipment and storage medium
CN109165654B (en) Training method of target positioning model and target positioning method and device
CN110570442A (en) Contour detection method under complex background, terminal device and storage medium
CN109726195A (en) A kind of data enhancement methods and device
CN111915657A (en) Point cloud registration method and device, electronic equipment and storage medium
CN112614140A (en) Method and related device for training color spot detection model
CN110717969A (en) Shadow generation method and device
CN117475416A (en) Thermal power station pointer type instrument reading identification method, system, equipment and medium
Panda et al. Kernel density estimation and correntropy based background modeling and camera model parameter estimation for underwater video object detection
CN109903246B (en) Method and device for detecting image change
CN113808033A (en) Image document correction method, system, terminal and medium
Jiang et al. AdaptMVSNet: Efficient Multi-View Stereo with adaptive convolution and attention fusion
Lin et al. Matching cost filtering for dense stereo correspondence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination