CN112950455A - Image display method and device - Google Patents
- Publication number
- CN112950455A (application number CN202110179146.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- color
- pixel point
- visual system
- spectral information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/20—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
- G09G3/2003—Display of colours
Abstract
The application discloses an image display method and device, belonging to the technical field of electronic equipment. The image display method includes: acquiring a first image; determining, according to spectral information corresponding to the first image and acquired by a multispectral sensor, a second image corresponding to the visual system of a target animal; and displaying the second image. With the image display method and device, an image can be displayed as perceived by the visual system of the target animal.
Description
Technical Field
The application belongs to the technical field of electronic equipment, and particularly relates to an image display method and device.
Background
An image is a visual picture that carries information about the objects it depicts.
In the related art, images are displayed only as perceived by the human visual system.
However, in the course of implementing the present application, the inventors found that the related art has at least the following problem: because images can be displayed only as seen by the human visual system, the image display mode is limited to a single option.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image display method and apparatus that solve the problem of a single, human-only image display mode.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image display method, including:
acquiring a first image;
determining a second image corresponding to the visual system of the target animal according to the spectral information corresponding to the first image and acquired by the multispectral sensor;
the second image is displayed.
In a second aspect, an embodiment of the present application provides an image display apparatus, including:
the acquisition module is used for acquiring a first image;
the first determining module is used for determining a second image corresponding to the visual system of the target animal according to the spectral information corresponding to the first image acquired by the multispectral sensor;
and the display module is used for displaying the second image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or an instruction stored on the memory and executable on the processor, and when the program or the instruction is executed by the processor, the steps of the image display method according to the first aspect are implemented.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the image display method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the image display method according to the first aspect.
In the embodiments of the present application, after the first image is acquired, a second image corresponding to the visual system of the target animal is determined according to the spectral information corresponding to the first image and acquired by the multispectral sensor, and the second image is then displayed. Because the second image corresponds to the visual system of the target animal, the image can be displayed as that visual system would perceive it. This adds an image display mode beyond display for the human visual system alone, and lets the user experience the scene as observed through the visual system of the target animal.
Drawings
FIG. 1 is a schematic flowchart of an image display method provided in an embodiment of the present application;
FIG. 2 is a graphical representation of the relative sensitivity of the three types of human cone cells to the spectrum, as provided by an embodiment of the present application;
FIG. 3 is a graphical representation of the relative sensitivity of the cone cells of a certain animal to the spectrum, as provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a first image after color replacement according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a second image provided by an embodiment of the present application;
FIG. 6 is another schematic diagram of a color-replaced first image according to an embodiment of the present application;
FIG. 7 is another schematic diagram of a second image provided by an embodiment of the present application;
FIG. 8 is a further schematic diagram of a first image after color replacement according to an embodiment of the present application;
FIG. 9 is a further schematic diagram of a second image provided by an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an image display apparatus provided in an embodiment of the present application;
FIG. 11 is a schematic diagram of an electronic device implementing an embodiment of the present application;
FIG. 12 is a hardware structure diagram of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar objects, not to describe a particular order or sequence. It should be understood that data so termed may be interchanged where appropriate, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Objects distinguished by "first", "second" and the like are generally of one class, and the number of such objects is not limited; for example, a first object may be one object or a plurality of objects. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The following describes in detail an image display method, an apparatus, a device, and a medium provided in the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of an image display method according to an embodiment of the present application. The image display method may include:
s101: acquiring a first image;
s102: determining a second image corresponding to the visual system of the target animal according to the spectral information corresponding to the first image and acquired by the multispectral sensor;
s103: the second image is displayed.
Specific implementations of the above steps will be described in detail below.
In the embodiments of the present application, after the first image is acquired, a second image corresponding to the visual system of the target animal is determined according to the spectral information corresponding to the first image and acquired by the multispectral sensor, and the second image is then displayed. Because the second image corresponds to the visual system of the target animal, the image can be displayed as that visual system would perceive it. This adds an image display mode beyond display for the human visual system alone, and lets the user experience the scene as observed through the visual system of the target animal.
In the embodiments of the present application, a spectrum is the pattern obtained when light is split by a dispersion system (e.g., a prism or a grating) into monochromatic components arranged in order of wavelength (or frequency). The multispectral sensor can fully sample the spectral curve of the light reaching the lens of the image acquisition device.
In S102, the visual system information of the target animal includes, but is not limited to: the color of the light wave of a certain wavelength in the visual system of the target animal and the angle of view of the target animal, etc.
In some possible implementations of the embodiments of the present application, S102 may include: for each pixel point in the first image, replacing the color of the pixel point, according to the spectral information corresponding to the pixel point acquired by the multispectral sensor, with the color of the corresponding light in the visual system of the target animal, to obtain the second image.
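As an illustrative sketch (not part of the patent text), this per-pixel replacement can be implemented with a lookup table mapping each measured wavelength to the color fitted for the target animal's visual system. The table values below reuse the RGB fits derived later for the FIG. 3 animal; all names and the 2x2 wavelength map are hypothetical.

```python
import numpy as np

# Hypothetical fitted table for the FIG. 3 animal:
# wavelength (nm) -> RGB color the animal perceives for that wavelength.
ANIMAL_COLOR_TABLE = {580: (102, 156, 38), 400: (72, 243, 156), 320: (237, 138, 142)}

def replace_colors(wavelength_map, table):
    """Build the second image: replace every pixel point's color with the
    target animal's perceived color for that pixel's measured wavelength."""
    h, w = wavelength_map.shape
    second = np.zeros((h, w, 3), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            second[y, x] = table[wavelength_map[y, x]]
    return second

# A 2x2 map of per-pixel wavelengths from the multispectral sensor.
wavelengths = np.array([[580, 400], [320, 580]])
second_image = replace_colors(wavelengths, ANIMAL_COLOR_TABLE)
```

In practice the table would be built in advance, one entry per sampled wavelength, from the cone-sensitivity fitting described below.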
Color perception involves three elements: a light source, an observed object, and an observer. After light emitted by the light source strikes the observed object, the object selectively absorbs or reflects light of different wavelengths according to its properties; the light that is not absorbed is reflected to the observer, who analyzes it and thereby perceives color. Different observers analyze light in different ways, and therefore perceive different colors.
For example, there are three types of cone cells in the human eye: blue cone cells, green cone cells, and red cone cells. The relative sensitivity of three human cones to the spectrum is shown in figure 2. FIG. 2 is a graphical representation of the relative sensitivity of three human cones to the spectrum as provided by the examples herein.
For a light wave with a wavelength of 580 nanometers (nm), the color perceived by humans for this light wave is a superposition of red and green, i.e., yellow.
Some other animal, however, may differ from humans in its types of cone cells (a different number of cone-cell types, or the same number of types but sensitive to different colors), or may have the same types of cone cells but with different relative sensitivities to the spectrum. Such an animal may, for example, perceive a light wave with a wavelength of 580 nm as red.
In some possible implementations of embodiments of the present application, the color of the light waves of each wavelength in the spectrum of each animal may be fitted to each animal in advance according to the relative sensitivity of its cone cells to the spectrum.
FIG. 3 is a graphical representation of the relative sensitivity of the cone cells of a certain animal to the spectrum, as provided by an embodiment of the present application. As can be seen from FIG. 3, this animal has five types of cone cells: X-color cone cells, Y-color cone cells, gray cone cells, orange cone cells, and green cone cells. The X color and Y color perceived by the animal's X-color and Y-color cone cells cannot be converted into colors understandable to humans.
In some possible implementations of the embodiments of the present application, the above X and Y colors may be artificially labeled with unusual colors. For example, the X color may be labeled blue-violet, represented by RGB (138,43,226), and the Y color may be labeled slate blue, represented by RGB (106,90,205).
For a light wave with a wavelength of 580 nm, color fitting is performed according to FIG. 3, and the color the animal perceives for this light wave is represented in RGB as: 0.3×gray + 0.25×orange + 0.3×green = 0.3×(128,128,128) + 0.25×(255,165,0) + 0.3×(0,255,0) = (102,156,38).
For a light wave with a wavelength of 400 nm, color fitting is performed according to FIG. 3, and the color the animal perceives for this light wave is represented in RGB as: 0.95×gray + 0.75×Y + 0.6×X + 0.17×orange = 0.95×(128,128,128) + 0.75×(106,90,205) + 0.6×(138,43,226) + 0.17×(255,165,0) = (72,243,156).
For a light wave with a wavelength of 320 nm, color fitting is performed according to FIG. 3, and the color the animal perceives for this light wave is represented in RGB as: 0.91×X + 0.81×Y + 0.2×gray = 0.91×(138,43,226) + 0.81×(106,90,205) + 0.2×(128,128,128) = (237,138,142).
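The weighted fits above can be reproduced in a few lines. This sketch (not from the patent) covers the 580 nm case, whose per-channel sums stay within the 0-255 range; the function and variable names are hypothetical.

```python
def fit_color(weighted_primaries):
    """Weighted sum of cone primary colors, rounded per RGB channel."""
    return tuple(round(sum(w * rgb[c] for w, rgb in weighted_primaries))
                 for c in range(3))

GRAY = (128, 128, 128)
ORANGE = (255, 165, 0)
GREEN = (0, 255, 0)

# Relative sensitivities read off FIG. 3 for a 580 nm light wave:
color_580nm = fit_color([(0.3, GRAY), (0.25, ORANGE), (0.3, GREEN)])
# color_580nm == (102, 156, 38), matching the value derived above
```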
Illustratively, for a pixel point A in the first image, assume that the color of pixel point A is yellow, that the multispectral sensor measures the wavelength of the light wave corresponding to this pixel point and incident on the lens of the image acquisition device as 580 nm, and that the target animal is the animal corresponding to the embodiment shown in FIG. 3; the color of pixel point A is then replaced from yellow with the color represented by RGB (102,156,38).
For a pixel point B in the first image, assuming that the color of the pixel point B is purple, the wavelength of the light wave incident to the point on the lens of the image acquisition device corresponding to the pixel point is 400nm, and the target animal is an animal corresponding to the embodiment shown in fig. 3, the color of the pixel point B is replaced from purple to a color represented by RGB (72,243,156).
For a pixel point C in the first image, assuming that the color of the pixel point C is black, the wavelength of the light wave incident to the point on the lens of the image acquisition device corresponding to the pixel point is acquired by using the multispectral sensor and is 320nm, and the target animal is an animal corresponding to the embodiment shown in fig. 3, the color of the pixel point C is replaced from black to a color represented by RGB (237,138,142).
For another example, for a pixel point D in the first image, assume that the color of pixel point D is yellow and that the multispectral sensor detects two light waves corresponding to this pixel point incident on the lens of the image acquisition device, one with a wavelength of 580 nm and the other with a wavelength of 320 nm; the colors corresponding to the two light waves are then fused. The target animal is the animal corresponding to the embodiment shown in FIG. 3. In some possible implementations of the embodiments of the present application, when fusing a light wave visible to humans with a light wave invisible to humans, respective fusion weights may be set for the two, each weight being greater than 0 but less than 1. Assume the fusion weights are each 0.5. The color of pixel point D is then replaced from yellow with the color represented in RGB as: 0.5×(102,156,38) + 0.5×(237,138,142) = (170,147,90).
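The two-wavelength fusion for pixel point D can be sketched as a per-channel weighted average (an illustrative sketch; the function name and default weights of 0.5 are taken from the example above):

```python
def fuse_colors(color_a, color_b, weight_a=0.5, weight_b=0.5):
    """Fuse the perceived colors of two superimposed light waves using
    per-wave fusion weights (each greater than 0 and less than 1)."""
    return tuple(round(weight_a * a + weight_b * b)
                 for a, b in zip(color_a, color_b))

# Pixel D: the 580 nm fitted color fused with the 320 nm fitted color.
pixel_d = fuse_colors((102, 156, 38), (237, 138, 142))
# pixel_d == (170, 147, 90)
```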
It should be noted that the embodiment of the present application is described by taking an animal corresponding to the embodiment shown in fig. 3 as an example of a target animal, which is only a specific example of the present application and does not limit the present application.
In the embodiments of the present application, the second image is obtained by replacing the color of each pixel point in the first image with the color of the corresponding light in the visual system of the target animal, and the second image is then displayed. The image can thus be displayed in the colors of the target animal's visual system rather than only in colors perceived by humans, adding an image display mode and allowing the user to experience the observed scene in the colors of the target animal's visual system.
In some possible implementations of the embodiments of the present application, S102 may include: for each pixel point in the first image, replacing the color of the pixel point, according to the spectral information corresponding to the pixel point acquired by the multispectral sensor, with the color of the corresponding light in the visual system of the target animal, to obtain a color-replaced first image; and cropping the color-replaced first image according to the field angle corresponding to the visual system of the target animal to obtain the second image.
Because different animals have different physiological structures, their fields of view differ. For example, a pig's short spine and almost nonexistent neck prevent it from looking upward, so in its normal posture a pig cannot see the sky.
The process of obtaining the color-replaced first image is not described herein in this embodiment, and the specific color replacement process may refer to the above embodiments.
The following describes the image display method provided in the embodiments of the present application, taking a cow as the target animal. FIG. 4 is a schematic diagram of a first image after color replacement according to an embodiment of the present application. When a cow is in its normal posture, its field angle is 60 degrees. The color-replaced first image shown in FIG. 4 is cropped to the cow's field angle, and the resulting second image is shown in FIG. 5. FIG. 5 is a schematic diagram of a second image provided by an embodiment of the present application.
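One simple way to realize the field-angle cropping is a central crop whose linear size is proportional to the ratio of the target animal's field angle to the capture field angle. This is an assumption (the patent does not fix the crop geometry), and the 180-degree capture angle below is hypothetical.

```python
import numpy as np

def crop_to_fov(image, target_fov_deg, capture_fov_deg):
    """Keep the central region whose linear extent is proportional to
    target_fov_deg / capture_fov_deg (a simplifying assumption)."""
    h, w = image.shape[:2]
    frac = target_fov_deg / capture_fov_deg
    ch, cw = max(1, round(h * frac)), max(1, round(w * frac))
    top, left = (h - ch) // 2, (w - cw) // 2
    return image[top:top + ch, left:left + cw]

first_image = np.zeros((120, 180, 3), dtype=np.uint8)
second_image = crop_to_fov(first_image, 60, 180)  # cow: 60-degree field angle
# second_image.shape == (40, 60, 3)
```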
It should be noted that this embodiment is described with a cow as the target animal; this is merely a specific example of the present application and does not limit the present application.
In the embodiments of the present application, the image can be displayed with the colors and field angle of the target animal's visual system rather than only those of the human visual system, adding an image display mode and allowing the user to experience the observed scene with the colors and field angle of the target animal's visual system.
It can be understood that, in the embodiment of the application, color replacement is performed on the pixel points in the first image, and then the first image after color replacement is cut according to the field angle of the target animal, so that a user can experience the observed scene with the color and the field angle of the visual system of the target animal.
In some possible implementations of the embodiments of the present application, the first image may first be cropped according to the field angle of the target animal, and the colors of the pixel points in the cropped first image may then be replaced. Based on this, in some possible implementations, S102 may include: cropping the first image according to the field angle corresponding to the visual system of the target animal to obtain a cropped first image; and, for each pixel point in the cropped first image, replacing the color of the pixel point, according to the spectral information corresponding to the pixel point acquired by the multispectral sensor, with the color of the corresponding light in the visual system of the target animal, to obtain the second image.
The color replacement and clipping process can refer to the above embodiments, and the details of the color replacement and clipping process are not described herein in this embodiment of the present application.
In the embodiments of the present application, the image can be displayed with the colors and field angle of the target animal's visual system rather than only those of the human visual system, adding an image display mode and allowing the user to experience the observed scene with the colors and field angle of the target animal's visual system.
In some possible implementations of the embodiments of the present application, S102 may include: for each pixel point in the first image, replacing the color of the pixel point, according to the spectral information corresponding to the pixel point acquired by the multispectral sensor, with the color of the corresponding light in the visual system of the target animal, to obtain a color-replaced first image, and then performing target processing on the color-replaced first image according to a target algorithm corresponding to the visual system of the target animal to obtain the second image; or, performing target processing on the first image according to a target algorithm corresponding to the visual system of the target animal to obtain a target-processed first image, and then, for each pixel point in the target-processed first image, replacing the color of the pixel point, according to the spectral information corresponding to the pixel point acquired by the multispectral sensor, with the color of the corresponding light in the visual system of the target animal, to obtain the second image.
That is, the color of the pixel point in the first image may be replaced first, and then the target processing may be performed on the color-replaced first image by using a target algorithm corresponding to the visual system of the target animal; the first image can be subjected to target processing by using a target algorithm corresponding to a visual system of the target animal, and then the color of the pixel point in the first image subjected to target processing is replaced.
In some possible implementations of embodiments of the present application, the target algorithm may include an image blurring algorithm or an image distortion algorithm.
It can be understood that, when the target algorithm is an image blurring algorithm, the corresponding target processing is blurring processing; when the target algorithm is an image distortion algorithm, the corresponding target processing is distortion processing.
Because different animals have different physiological structures, the resolution of their eyes differs. For the same scene, what some animals see is blurrier than what human eyes see, so the image can be blurred using an image blurring algorithm.
The embodiment of the present application does not limit the image blurring algorithm, and any available image blurring algorithm may be applied to the embodiment of the present application.
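Since the blurring algorithm is left open, a minimal mean (box) blur can serve as one concrete example; the kernel size `k` standing in for an animal-specific parameter is a hypothetical choice, not something specified by the patent.

```python
import numpy as np

def box_blur(channel, k=3):
    """Naive k-by-k mean blur on a single 2-D channel; pixels near the
    border average over the partial window that fits inside the image."""
    h, w = channel.shape
    r = k // 2
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            window = channel[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = window.mean()
    return out
```

For a color image, the same filter would be applied to each RGB channel; production code would typically use a separable or FFT-based convolution instead of this direct loop.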
Because different animals have different physiological structures, the images other animals see may exhibit distortion effects different from those seen by humans; the image can therefore be distorted using an image distortion algorithm.
The image distortion algorithm is not limited in the embodiment of the present application, and any available image distortion algorithm may be applied to the embodiment of the present application.
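Likewise, as one concrete (hypothetical) choice of distortion algorithm, a radial barrel distortion can be applied by inverse mapping with nearest-neighbor sampling; the `strength` parameter stands in for the animal-specific value mentioned below and is not taken from the patent.

```python
import numpy as np

def barrel_distort(image, strength=0.2):
    """Radial (barrel) distortion by inverse mapping: each output pixel
    samples the source pixel found by pushing its normalized coordinates
    outward by a factor of (1 + strength * r^2)."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    for y in range(h):
        for x in range(w):
            ny, nx = (y - cy) / cy, (x - cx) / cx  # normalized to [-1, 1]
            scale = 1 + strength * (nx * nx + ny * ny)
            sy = int(round(ny * scale * cy + cy))
            sx = int(round(nx * scale * cx + cx))
            if 0 <= sy < h and 0 <= sx < w:
                out[y, x] = image[sy, sx]
    return out
```

With `strength=0` the mapping is the identity; larger values pull the periphery inward, leaving black borders where the inverse mapping falls outside the source image.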
The following describes the target algorithms as an image blurring algorithm and an image distortion algorithm, respectively.
In some possible implementations of embodiments of the present application, the visual system of each animal may correspond to a different image blurring algorithm, respectively.
In some possible implementations of the embodiments of the present application, the visual systems of different animals may correspond to the same image blurring algorithm. In that case, when the image is blurred using the algorithm, the parameters representing the animal take the values corresponding to the target animal.
The process of obtaining the color-replaced first image is not described herein in this embodiment, and the specific color replacement process may refer to the above embodiments.
The following describes an image display method provided in the embodiments of the present application, taking a target animal as a mouse as an example. Fig. 6 is another schematic diagram of the first image after color replacement provided by the embodiment of the application. The first image after color replacement shown in fig. 6 is blurred by an image blurring algorithm corresponding to the visual system of the mouse, resulting in a second image shown in fig. 7. Fig. 7 is another schematic diagram of a second image provided in an embodiment of the present application.
It should be noted that the present embodiment is described by taking the target animal as a mouse, and is only a specific example of the present application, and does not limit the present application.
It can be understood that, in the embodiment of the application, color replacement is performed on the pixel points in the first image, and then the first image after color replacement is subjected to blurring processing according to an image blurring algorithm corresponding to the visual system of the target animal, so that a user can experience the observed scene with the color and the resolution of the visual system of the target animal.
In some possible implementations of the embodiment of the application, the first image may be blurred according to an image blurring algorithm corresponding to a visual system of the target animal, and then the color of the pixel point in the blurred first image is replaced.
The color replacement and the blurring process may refer to the above embodiments, which are not described herein again in this embodiment of the application.
In the embodiments of the present application, the image can be displayed with the colors and resolution of the target animal's visual system rather than only those of the human visual system, adding an image display mode and allowing the user to experience the observed scene with the colors and resolution of the target animal's visual system.
In some possible implementations of embodiments of the present application, the visual system of each animal may correspond to a different image distortion algorithm, respectively.
In some possible implementations of embodiments of the present application, the visual systems of different animals may correspond to the same image distortion algorithm. When the image distortion algorithm is used for carrying out distortion processing on the image, the values of the parameters for representing the animal in the image distortion algorithm are the corresponding parameter values of the target animal.
Next, the image display method provided in the embodiment of the present application will be described by taking a target animal as a spider as an example. Fig. 8 is a further schematic diagram of the first image after color replacement according to the embodiment of the present application. The color-replaced first image shown in fig. 8 is subjected to distortion processing using an image distortion algorithm corresponding to the visual system of the spider, resulting in a second image as shown in fig. 9. Fig. 9 is a further schematic diagram of a second image provided in an embodiment of the present application.
It should be noted that this embodiment is described with a spider as the target animal; this is merely a specific example of the present application and does not limit the present application.
It can be understood that, in the embodiment of the application, color replacement is performed on the pixel points in the first image, and then distortion processing is performed on the color-replaced first image according to an image distortion algorithm corresponding to the visual system of the target animal, so that a user can experience observed scenes with the color and distortion effect of the visual system of the target animal.
In some possible implementations of the embodiments of the application, the first image may instead be distorted first according to the image distortion algorithm corresponding to the visual system of the target animal, and the colors of the pixel points in the distorted first image may then be replaced.
For details of the color replacement and the distortion processing, reference may be made to the above embodiments; they are not repeated here.
In the embodiment of the application, the image can be displayed with the color and distortion effect of the visual system of the target animal. This adds an image display mode, so that the image is not displayed only with the colors of the human visual system, and the user can experience the observed scene with the color and distortion effect of the visual system of the target animal.
In some possible implementations of the embodiments of the application, the following may all be performed: the color of each pixel point in the image is replaced with the color of the light corresponding, in the visual system of the target animal, to the spectral information of that pixel point; the image is cut according to the field angle of the target animal; the image is blurred according to the image blurring algorithm corresponding to the visual system of the target animal; and the image is distorted according to the image distortion algorithm corresponding to the visual system of the target animal. The image is thereby displayed with the color, resolution, field angle, and distortion effect of the visual system of the target animal, and the user can experience the observed scene accordingly.
The present embodiment does not limit the order of the color replacement, cutting, blurring, and distortion processing; any available order may be applied.
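Because the embodiment does not fix the order of the four operations, they can be modeled as interchangeable steps. The sketch below uses placeholder (identity) step bodies; only the order-independent composition is illustrated, not the patent's actual implementations:

```python
# Placeholder step bodies; the real operations are the color replacement,
# cutting, blurring, and distortion described above.
def replace_colors(image):  # per-pixel spectral-to-animal-color mapping
    return image

def crop_to_fov(image):     # clip to the animal's field angle
    return image

def blur(image):            # model the animal's lower visual acuity
    return image

def distort(image):         # apply the animal's distortion model
    return image

def render_animal_view(image, steps):
    """Apply the processing steps in the given order; `steps` may be any
    permutation of the four operations."""
    for step in steps:
        image = step(image)
    return image
```

A different processing order is expressed simply by reordering the `steps` list passed to `render_animal_view`.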
In some possible implementations of the embodiments of the present application, before S101, the image display method provided in the embodiments of the present application may further include: determining an animal selected from a plurality of animals as the target animal.
Illustratively, several animals are displayed for the user to select from; when the user selects one of the animals to experience, the selected animal is determined as the target animal.
In some possible implementations of the embodiments of the present application, before S101, the image display method provided in the embodiments of the present application may further include: determining an animal identified from an acquired third image as the target animal.
For example, the user first shoots a scene with an electronic device; the electronic device identifies an animal in the shot scene, and the identified animal is taken as the animal the user wants to experience (i.e., the target animal). For example, if a cat is included in the scene, the cat is taken as the target animal.
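A minimal sketch of turning recognition results into a target animal follows. The recognizer itself is out of scope here; `detected_labels` is assumed to come from any available animal-identification method, and the helper name is hypothetical:

```python
def choose_target_animal(detected_labels, supported_animals):
    """Return the first recognized animal for which visual-system data is
    available, to be used as the target animal; None if nothing in the
    scene is supported."""
    for label in detected_labels:
        if label in supported_animals:
            return label
    return None  # no supported animal found in the scene
```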
The embodiments of the present application are not limited to the manner in which the animal is identified, and any available manner of identifying the animal may be applied to the embodiments of the present application.
In some possible implementations of the embodiments of the present application, the image display method provided in the embodiments of the present application may further include: storing the second image.
It should be noted that the execution subject of the image display method provided in the embodiments of the present application may be an image display apparatus, or a control module in the image display apparatus for executing the image display method. The embodiments of the present application describe the image display apparatus by taking the case in which the image display apparatus executes the image display method as an example.
Fig. 10 is a schematic structural diagram of an image display device according to an embodiment of the present application. The image display device 1000 may include:
an obtaining module 1001 configured to obtain a first image;
the first determining module 1002 is configured to determine a second image corresponding to the visual system of the target animal according to the spectral information corresponding to the first image, which is acquired by the multispectral sensor;
a display module 1003 for displaying the second image.
In the embodiment of the application, after the first image is acquired, a second image corresponding to the visual system of the target animal is determined according to the spectral information corresponding to the first image acquired by the multispectral sensor, and the second image is then displayed. Because the obtained second image corresponds to the visual system of the target animal, the image can be displayed as seen by the visual system of the target animal. This adds an image display mode, so that the image is not displayed only as seen by the human visual system, and the user can experience the scene as observed by the visual system of the target animal.
In some possible implementations of the embodiment of the present application, the first determining module 1002 may be specifically configured to:
for each pixel point in the first image, replace the color of the pixel point, according to the spectral information corresponding to the pixel point acquired by the multispectral sensor, with the color of the light that this spectral information corresponds to in the visual system of the target animal, so as to obtain the second image.
In the embodiment of the application, the image can thus be displayed in the colors of the visual system of the target animal. This adds an image display mode, so that the image is not displayed only in the colors perceived by humans, and the user can experience the observed scene in the colors of the visual system of the target animal.
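One way to realize the per-pixel color replacement is to weight each pixel's spectral bands by the target animal's spectral sensitivity curves and map the result to display channels. The band count and weight values below are illustrative assumptions, not data from this application:

```python
import numpy as np

# Hypothetical sensitivity weights of the target animal's visual system
# over 4 spectral bands (rows: displayed R, G, B channels).
ANIMAL_SENSITIVITY = np.array([
    [0.0, 0.1, 0.6, 0.3],
    [0.2, 0.5, 0.3, 0.0],
    [0.7, 0.3, 0.0, 0.0],
])

def replace_colors(spectral_cube):
    """spectral_cube: H x W x N per-pixel intensities from the
    multispectral sensor.  Returns an H x W x 3 image in the colors the
    target animal's visual system would perceive."""
    h, w, n = spectral_cube.shape
    # Weight every pixel's N bands by the animal's sensitivity curves.
    rgb = spectral_cube.reshape(-1, n) @ ANIMAL_SENSITIVITY.T
    return np.clip(rgb, 0.0, 1.0).reshape(h, w, 3)
```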
In some possible implementations of the embodiments of the present application, the first determining module 1002 may be specifically configured to:
for each pixel point in the first image, replace the color of the pixel point, according to the spectral information corresponding to the pixel point acquired by the multispectral sensor, with the color of the light that this spectral information corresponds to in the visual system of the target animal, so as to obtain the color-replaced first image, and cut the color-replaced first image according to the field angle corresponding to the visual system of the target animal, so as to obtain the second image;
or, alternatively,
for each pixel point in the first image cut according to the field angle corresponding to the visual system of the target animal, replace the color of the pixel point, according to the spectral information corresponding to the pixel point acquired by the multispectral sensor, with the color of the light that this spectral information corresponds to in the visual system of the target animal, so as to obtain the second image.
In the embodiment of the application, the image can be displayed with the color and field angle of the visual system of the target animal. This adds an image display mode, so that the image is not displayed only with the color and field angle of the human visual system, and the user can experience the observed scene with the color and field angle of the visual system of the target animal.
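Cutting according to the field angle can be sketched as a centre crop whose size follows the ratio of the animal's field angle to the camera's. The flat (small-angle) approximation and the function name are assumptions for illustration:

```python
import numpy as np

def crop_to_field_of_view(image, animal_fov_deg, camera_fov_deg):
    """Keep the central region of `image` whose angular extent matches the
    animal's field angle (linear approximation of angle-to-pixels)."""
    h, w = image.shape[:2]
    # An animal seeing wider than the camera cannot be widened from one
    # frame, so the ratio is clamped at 1.
    ratio = min(animal_fov_deg / camera_fov_deg, 1.0)
    ch, cw = int(h * ratio), int(w * ratio)
    top, left = (h - ch) // 2, (w - cw) // 2
    return image[top:top + ch, left:left + cw]
```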
In some possible implementations of the embodiment of the present application, the first determining module 1002 may be specifically configured to:
for each pixel point in the first image, replace the color of the pixel point, according to the spectral information corresponding to the pixel point acquired by the multispectral sensor, with the color of the light that this spectral information corresponds to in the visual system of the target animal, so as to obtain the color-replaced first image, and perform target processing on the color-replaced first image according to a target algorithm corresponding to the visual system of the target animal, so as to obtain the second image;
or, alternatively,
perform target processing on the first image according to the target algorithm corresponding to the visual system of the target animal to obtain the target-processed first image, and, for each pixel point in the target-processed first image, replace the color of the pixel point, according to the spectral information corresponding to the pixel point acquired by the multispectral sensor, with the color of the light that this spectral information corresponds to in the visual system of the target animal, so as to obtain the second image.
In some possible implementations of embodiments of the present application, the target algorithm may include an image blurring algorithm or an image distortion algorithm.
In the embodiment of the application, when the target algorithm is an image blurring algorithm, the image can be displayed with the color and resolution of the visual system of the target animal. This adds an image display mode, so that the image is not displayed only with the color and resolution of the human visual system, and the user can experience the observed scene with the color and resolution of the visual system of the target animal.
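As a stand-in for an animal-specific image blurring algorithm, the sketch below uses a separable box blur, where a larger kernel models lower visual acuity. The mapping from an animal's acuity to a kernel size is an assumption for illustration:

```python
import numpy as np

def box_blur(image, kernel_size):
    """Blur a 2-D image with a separable box kernel; an animal's lower
    resolution is modeled by a larger kernel_size."""
    k = np.ones(kernel_size) / kernel_size
    # Convolve along rows, then columns; mode="same" keeps the size.
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"),
                              1, image.astype(float))
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"),
                              0, out)
    return out
```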
When the target algorithm is an image distortion algorithm, the image can be displayed with the color and distortion effect of the visual system of the target animal. This adds an image display mode, so that the image is not displayed only with the colors of the human visual system, and the user can experience the observed scene with the color and distortion effect of the visual system of the target animal.
In some possible implementations of the embodiments of the present application, the image display apparatus 1000 provided in the embodiments of the present application may further include:
and the second determining module is used for determining an animal selected from the plurality of animals as the target animal or determining an animal identified from the acquired third image as the target animal.
In some possible implementations of the embodiments of the present application, the image display apparatus 1000 provided in the embodiments of the present application may further include:
and the storage module is used for storing the second image.
The image display device in the embodiments of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not particularly limited in this respect.
The image display device in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited in this respect.
The image display device provided in the embodiment of the present application can implement each process in the image display method embodiments of fig. 1 to 9, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 11, an electronic device 1100 is further provided in an embodiment of the present application, and includes a processor 1101, a memory 1102, and a program or an instruction stored in the memory 1102 and executable on the processor 1101, where the program or the instruction is executed by the processor 1101 to implement each process of the above-mentioned embodiment of the image display method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
In some possible implementations of embodiments of the present application, the processor 1101 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
In some possible implementations of embodiments of the present application, the memory 1102 may include read-only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, or other electrical, optical, or physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., a memory device) encoded with software comprising computer-executable instructions, and when the software is executed (e.g., by one or more processors), it is operable to perform the operations described with reference to the image display methods according to the embodiments of the application.
Fig. 12 is a hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1200 includes, but is not limited to: radio frequency unit 1201, network module 1202, audio output unit 1203, input unit 1204, sensors 1205, display unit 1206, user input unit 1207, interface unit 1208, memory 1209, and processor 1210.
Those skilled in the art will appreciate that the electronic device 1200 may further comprise a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 1210 via a power management system, so that charging, discharging, and power-consumption management are implemented via the power management system. The electronic device structure shown in Fig. 12 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or use a different arrangement of components, which is not described again here.
The processor 1210 is configured to acquire a first image, and to determine a second image corresponding to the visual system of the target animal according to the spectral information corresponding to the first image acquired by the multispectral sensor.
The display unit 1206 is configured to display the second image.
In the embodiment of the application, after the first image is acquired, a second image corresponding to the visual system of the target animal is determined according to the spectral information corresponding to the first image acquired by the multispectral sensor, and the second image is then displayed. Because the obtained second image corresponds to the visual system of the target animal, the image can be displayed as seen by the visual system of the target animal. This adds an image display mode, so that the image is not displayed only as seen by the human visual system, and the user can experience the scene as observed by the visual system of the target animal.
In some possible implementations of embodiments of the application, the processor 1210 may be specifically configured to:
for each pixel point in the first image, replace the color of the pixel point, according to the spectral information corresponding to the pixel point acquired by the multispectral sensor, with the color of the light that this spectral information corresponds to in the visual system of the target animal, so as to obtain the second image.
In the embodiment of the application, the image can thus be displayed in the colors of the visual system of the target animal. This adds an image display mode, so that the image is not displayed only in the colors perceived by humans, and the user can experience the observed scene in the colors of the visual system of the target animal.
In some possible implementations of embodiments of the application, the processor 1210 may be specifically configured to:
for each pixel point in the first image, replace the color of the pixel point, according to the spectral information corresponding to the pixel point acquired by the multispectral sensor, with the color of the light that this spectral information corresponds to in the visual system of the target animal, so as to obtain the color-replaced first image, and cut the color-replaced first image according to the field angle corresponding to the visual system of the target animal, so as to obtain the second image;
or, alternatively,
for each pixel point in the first image cut according to the field angle corresponding to the visual system of the target animal, replace the color of the pixel point, according to the spectral information corresponding to the pixel point acquired by the multispectral sensor, with the color of the light that this spectral information corresponds to in the visual system of the target animal, so as to obtain the second image.
In the embodiment of the application, the image can be displayed with the color and field angle of the visual system of the target animal. This adds an image display mode, so that the image is not displayed only with the color and field angle of the human visual system, and the user can experience the observed scene with the color and field angle of the visual system of the target animal.
In some possible implementations of embodiments of the application, the processor 1210 may be specifically configured to:
for each pixel point in the first image, replace the color of the pixel point, according to the spectral information corresponding to the pixel point acquired by the multispectral sensor, with the color of the light that this spectral information corresponds to in the visual system of the target animal, so as to obtain the color-replaced first image, and perform target processing on the color-replaced first image according to a target algorithm corresponding to the visual system of the target animal, so as to obtain the second image;
or, alternatively,
perform target processing on the first image according to the target algorithm corresponding to the visual system of the target animal to obtain the target-processed first image, and, for each pixel point in the target-processed first image, replace the color of the pixel point, according to the spectral information corresponding to the pixel point acquired by the multispectral sensor, with the color of the light that this spectral information corresponds to in the visual system of the target animal, so as to obtain the second image.
In some possible implementations of embodiments of the present application, the target algorithm may include an image blurring algorithm or an image distortion algorithm.
In the embodiment of the application, when the target algorithm is an image blurring algorithm, the image can be displayed with the color and resolution of the visual system of the target animal. This adds an image display mode, so that the image is not displayed only with the color and resolution of the human visual system, and the user can experience the observed scene with the color and resolution of the visual system of the target animal.
When the target algorithm is an image distortion algorithm, the image can be displayed with the color and distortion effect of the visual system of the target animal. This adds an image display mode, so that the image is not displayed only with the colors of the human visual system, and the user can experience the observed scene with the color and distortion effect of the visual system of the target animal.
In some possible implementations of embodiments of the present application, the processor 1210 may be further configured to:
determine an animal selected from a plurality of animals as the target animal; or determine an animal identified from an acquired third image as the target animal.
In some possible implementations of embodiments of the present application, the memory 1209 may be configured to:
store the second image.
It should be understood that, in the embodiment of the present application, the input unit 1204 may include a graphics processing unit (GPU) 12041 and a microphone 12042; the graphics processing unit 12041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1206 may include a display panel 12061, and the display panel 12061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1207 includes a touch panel 12071 and other input devices 12072. The touch panel 12071, also referred to as a touch screen, may include a touch detection device and a touch controller. Other input devices 12072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 1209 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 1210 may integrate an application processor, which mainly handles the operating system, user interface, and applications, and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor may not be integrated into the processor 1210.
The embodiment of the present application further provides a computer-readable storage medium, where a program or an instruction is stored on the computer-readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image display method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. Examples of the computer readable storage medium include non-transitory computer readable storage media such as ROM, RAM, magnetic or optical disks, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above image display method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. An image display method, characterized in that the method comprises:
acquiring a first image;
determining a second image corresponding to the visual system of the target animal according to the spectral information corresponding to the first image and acquired by the multispectral sensor;
and displaying the second image.
2. The method according to claim 1, wherein determining a second image corresponding to a visual system of the target animal based on spectral information corresponding to the first image acquired by a multispectral sensor comprises:
and for each pixel point in the first image, replacing the color of the pixel point with the color of light corresponding to the spectral information in the visual system according to the spectral information corresponding to the pixel point acquired by the multispectral sensor to obtain the second image.
3. The method according to claim 1, wherein determining a second image corresponding to a visual system of the target animal based on spectral information corresponding to the first image acquired by a multispectral sensor comprises:
for each pixel point in the first image, replacing the color of the pixel point with the color of light corresponding to the spectral information in the visual system according to the spectral information corresponding to the pixel point acquired by the multispectral sensor to obtain a color-replaced first image, and cutting the color-replaced first image according to the field angle corresponding to the visual system to obtain a second image;
or, alternatively,
and for each pixel point in the cut first image, replacing the color of the pixel point with the color of light corresponding to the spectral information in the visual system according to the spectral information corresponding to the pixel point acquired by the multispectral sensor, so as to obtain the second image.
4. The method according to claim 1, wherein determining a second image corresponding to a visual system of the target animal based on spectral information corresponding to the first image acquired by a multispectral sensor comprises:
for each pixel point in the first image, replacing the color of the pixel point with the color of light corresponding to the spectral information in the visual system according to the spectral information corresponding to the pixel point acquired by the multispectral sensor to obtain a color-replaced first image, and performing target processing on the color-replaced first image according to a target algorithm corresponding to the visual system to obtain a second image;
or, alternatively,
according to the target algorithm, performing target processing on the first image to obtain a target-processed first image, and replacing the color of each pixel point in the target-processed first image with the color of light corresponding to the spectral information in the visual system according to the spectral information corresponding to the pixel point, which is obtained by the multispectral sensor, to obtain a second image.
5. The method of claim 4, wherein the target algorithm comprises an image blurring algorithm or an image distortion algorithm.
6. The method of claim 1, wherein prior to said acquiring the first image, the method further comprises:
determining an animal selected from a plurality of animals as the target animal; or the like, or, alternatively,
and determining the animal identified from the acquired third image as the target animal.
7. An image display apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a first image;
the first determination module is used for determining a second image corresponding to the visual system of the target animal according to the spectral information corresponding to the first image acquired by the multispectral sensor;
and the display module is used for displaying the second image.
8. The apparatus of claim 7, wherein the first determining module is specifically configured to:
and for each pixel point in the first image, replacing the color of the pixel point with the color of light corresponding to the spectral information in the visual system according to the spectral information corresponding to the pixel point acquired by the multispectral sensor to obtain the second image.
9. The apparatus of claim 7, wherein the first determining module is specifically configured to:
for each pixel point in the first image, replacing the color of the pixel point with the color of light corresponding to the spectral information in the visual system according to the spectral information corresponding to the pixel point acquired by the multispectral sensor to obtain a color-replaced first image, and cutting the color-replaced first image according to the field angle corresponding to the visual system to obtain a second image;
or, alternatively,
and for each pixel point in the cut first image, replacing the color of the pixel point with the color of light corresponding to the spectral information in the visual system according to the spectral information corresponding to the pixel point acquired by the multispectral sensor, so as to obtain the second image.
10. The apparatus of claim 7, wherein the first determining module is specifically configured to:
for each pixel point in the first image, replacing the color of the pixel point with the color of light corresponding to the spectral information in the visual system according to the spectral information corresponding to the pixel point acquired by the multispectral sensor to obtain a color-replaced first image, and performing target processing on the color-replaced first image according to a target algorithm corresponding to the visual system to obtain a second image;
or, alternatively,
according to the target algorithm, performing target processing on the first image to obtain a target-processed first image, and replacing the color of each pixel point in the target-processed first image with the color of light corresponding to the spectral information in the visual system according to the spectral information corresponding to the pixel point, which is obtained by the multispectral sensor, to obtain a second image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110179146.7A CN112950455A (en) | 2021-02-09 | 2021-02-09 | Image display method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112950455A true CN112950455A (en) | 2021-06-11 |
Family
ID=76244937
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110179146.7A Pending CN112950455A (en) | 2021-02-09 | 2021-02-09 | Image display method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112950455A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100182598A1 (en) * | 2007-07-10 | 2010-07-22 | Choi Byung Ll | Digital filter spectrum sensor |
CN104811627A (en) * | 2015-05-21 | 2015-07-29 | 广东欧珀移动通信有限公司 | Method and device for photographing and previewing |
CN107341763A (en) * | 2017-06-30 | 2017-11-10 | 北京金山安全软件有限公司 | Image processing method and device, electronic equipment and storage medium |
US20180279967A1 (en) * | 2017-03-30 | 2018-10-04 | Expantrum Optoelectronics | Portable multispectral imaging device and method of reducing interference of displayed images thereof |
CN110363186A (en) * | 2019-08-20 | 2019-10-22 | 四川九洲电器集团有限责任公司 | A kind of method for detecting abnormality, device and computer storage medium, electronic equipment |
CN110830619A (en) * | 2019-10-28 | 2020-02-21 | 维沃移动通信有限公司 | Display method and electronic equipment |
CN111325824A (en) * | 2019-07-03 | 2020-06-23 | 杭州海康威视***技术有限公司 | Image data display method and device, electronic equipment and storage medium |
CN112153356A (en) * | 2020-09-16 | 2020-12-29 | Oppo广东移动通信有限公司 | Image parameter determination method, image sensor, device, electronic device and storage medium |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
JP7226851B2 (en) | Image processing method, apparatus and device | |
KR102126300B1 (en) | Method and apparatus for generating an all-in-focus image | |
CN109961453B (en) | Image processing method, device and equipment | |
CN111654635A (en) | Shooting parameter adjusting method and device and electronic equipment | |
US20230005254A1 (en) | Image detection method and apparatus, and electronic device | |
CN104125397B (en) | A kind of data processing method and electronic equipment | |
CN112532881A (en) | Image processing method and device and electronic equipment | |
CN113709519A (en) | Method and equipment for determining live broadcast shielding area | |
CN112037160A (en) | Image processing method, device and equipment | |
CN113794831B (en) | Video shooting method, device, electronic equipment and medium | |
CN113052923B (en) | Tone mapping method, tone mapping apparatus, electronic device, and storage medium | |
CN112437237B (en) | Shooting method and device | |
CN113284063A (en) | Image processing method, image processing apparatus, electronic device, and readable storage medium | |
CN111968605A (en) | Exposure adjusting method and device | |
JP2016144049A (en) | Image processing apparatus, image processing method, and program | |
CN112511890A (en) | Video image processing method and device and electronic equipment | |
CN112950455A (en) | Image display method and device | |
CN107005643A (en) | Image processing apparatus, image processing method and program | |
CN112529766B (en) | Image processing method and device and electronic equipment | |
CN111866476B (en) | Image shooting method and device and electronic equipment | |
CN113487497A (en) | Image processing method and device and electronic equipment | |
CN114207669A (en) | Human face illumination image generation device and method | |
CN112468794B (en) | Image processing method and device, electronic equipment and readable storage medium | |
CN112419218B (en) | Image processing method and device and electronic equipment | |
CN114302057B (en) | Image parameter determining method, device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||