CN110049337B - Compression processing method and system for capsule endoscope bayer image


Info

Publication number
CN110049337B
CN110049337B (application CN201910439009.5A)
Authority
CN
China
Prior art keywords
channel
original
reference point
value
missing
Prior art date
Legal status
Active
Application number
CN201910439009.5A
Other languages
Chinese (zh)
Other versions
CN110049337A (en
Inventor
袁文金
陈俊杰
Current Assignee
Ankon Technologies Co Ltd
Original Assignee
Ankon Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Ankon Technologies Co Ltd
Priority to CN201910439009.5A
Publication of CN110049337A
Application granted
Publication of CN110049337B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Endoscopes (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a compression processing method and system for a capsule endoscope bayer image. The method comprises the following steps: S1, collecting image data in an original bayer format; S2, supplementing the gray value missing at each pixel point of the bayer-format image data according to the gradient information of the data in each channel, so as to form an RGB image; and S3, compressing the RGB image data and outputting the result. The method and system retain image details, reduce the mosaic effect in edge regions, and improve image quality, so that a more comprehensive diagnosis can be made.

Description

Compression processing method and system for capsule endoscope bayer image
Technical Field
The invention relates to the field of medical equipment imaging, in particular to a compression processing method and a compression processing system for a capsule endoscope bayer image.
Background
A capsule endoscope integrates core devices such as a camera and a wireless transmission antenna into a capsule that can be swallowed by a human body. During an examination the capsule is swallowed, acquires digestive tract images inside the body, and synchronously transmits them outside the body so that a medical examination can be carried out on the acquired image data. In operation, the capsule endoscope needs to acquire as many digestive tract images as possible and as comprehensively as possible; to achieve this, the digestive tract images must be compressed to occupy as little storage space as possible while image quality is preserved, thereby saving transmission time, increasing the number of images that can be taken, and improving diagnostic quality.
The wireless capsule endoscope is powered by a battery, so a compression algorithm of low complexity is required to keep cost and power consumption low. Most compression algorithms currently in use convert the RGB color channels, for example to YCrCb, to remove redundant information between them and improve compression efficiency. Digestive tract images, however, have relatively uniform color, so there remains considerable color redundancy and room for further compression.
For example, patent publication No. CN102457722A, entitled "processing method and apparatus for BAYER image", interpolates the missing components of a BAYER image with a weighted average to obtain an RGB image. A weighted average, however, is equivalent to low-pass filtering: it loses detail and causes a mosaic phenomenon in the edge regions of the image.
Disclosure of Invention
In order to solve the above technical problems, an object of the present invention is to provide a method and a system for compressing a bayer image of a capsule endoscope.
In order to achieve one of the above objects, an embodiment of the present invention provides a method for compressing a bayer image of a capsule endoscope, the method including: s1, collecting image data in an original bayer format;
s2, supplementing the missing gray value of each pixel point in the image data in the bayer format according to the gradient information of the data in each channel to form an RGB image;
and S3, compressing the RGB image data and outputting the compressed RGB image data.
As a further improvement of an embodiment of the present invention, the step S2 specifically includes:
s21, expanding the original edge data in the image data in the bayer format;
s22, respectively taking an original red channel R and an original blue channel B in the original image data in the bayer format as reference points, and obtaining the gray value of the green channel G missing from each reference point according to the gray values of the reference points in the horizontal direction and the vertical direction and the gradient value representing the change of the gray value;
s23, respectively taking the original green channel G in the image data in the original bayer format as a reference point, and obtaining the gray values corresponding to the missing red channel R and blue channel B on each reference point according to the gray values of the reference point in the horizontal direction or the vertical direction;
and respectively taking the original red channel R and the original blue channel B in the image data in the original bayer format as reference points, and obtaining the gray value of the blue channel B missing on the original red channel R and the gray value of the red channel R missing on the original blue channel B according to the gray value of the reference points in the diagonal direction and the gradient value representing the change of the gray value.
As a further improvement of an embodiment of the present invention, the step S22 specifically includes:
taking the current position of any pixel point lacking the green channel G as a reference point, and acquiring, in the row or column direction, the channel values that are closest to the reference point and have the same channel color as the reference point, together with the values of the original green channel G lying between the reference point and those same-color channel values;
then, the gray value of the green channel G missing from any reference point is represented as G(i,j)
[Formula image GDA0002957676440000021 (expression for G(i,j)), not reproduced in this text]
valv=|G(i,j-1)+R/B(i,j)+G(i,j+1)|×2-R/B(i,j-2)-R/B(i,j+2)
valh=|G(i-1,j)+R/B(i,j)+G(i+1,j)|×2-R/B(i-2,j)-R/B(i+2,j)
[Formula images GDA0002957676440000031 and GDA0002957676440000032 (definitions of gradv and gradh), not reproduced in this text]
Wherein (i, j) represents the coordinates of the reference point; if the current reference point is the red channel R, R/B(*) represents values of the original red channel R at the indicated coordinates, and if the current reference point is the blue channel B, R/B(*) represents values of the original blue channel B; G(*) represents values of the green channel; round(x) indicates rounding, and mean(x) indicates sorting the values therein and taking the value at the middle position.
As a further improvement of an embodiment of the present invention, the step S23 "taking the original green channel G in the original image data in the bayer format as a reference point, and obtaining the gray values corresponding to the missing red channel R and blue channel B on each reference point according to the gray values of the reference point in the horizontal direction or the vertical direction" specifically includes:
respectively acquiring channel values of an original red channel R and an original blue channel B which are closest to the current reference point in the row or column direction of the original green channel G by taking the current position of the pixel point of any original green channel G as the reference point, and acquiring the channel values of the green channels G corresponding to the original red channel R and the original blue channel B;
the gray value of the red channel R and the blue channel B missing from the pixel point of any original green channel G is represented as R/B(i,j)
If the channel values of the original red channel R or the original blue channel B closest to the reference point are arranged in the horizontal direction, then
R/B(i,j)=(R/B(i,j-1)+R/B(i,j+1)+G(i,j)×2-G(i,j-1)-G(i,j+1))÷2,
and if the channel values of the original red channel R or the original blue channel B closest to the reference point are arranged in the vertical direction, then
R/B(i,j)=(R/B(i-1,j)+R/B(i+1,j)+G(i,j)×2-G(i-1,j)-G(i+1,j))÷2,
Wherein, when the red channel R missing at the current reference point is being obtained, R/B(*) represents values of the red channel R at the indicated coordinates, and when the blue channel B missing at the current reference point is being obtained, R/B(*) represents values of the blue channel B.
As a further improvement of an embodiment of the present invention, the step S23 "taking the original red channel R and the original blue channel B in the original image data in the bayer format as reference points, respectively, and obtaining the gray value of the blue channel B missing on the original red channel R and obtaining the gray value of the red channel R missing on the original blue channel B according to the gray value of the reference points in the diagonal direction and the gradient value representing the change of the gray value" specifically includes:
taking the current position of a pixel point of any original red channel R as a reference point, respectively acquiring a channel value of an original blue channel B which is closest to the current reference point in the diagonal direction of the reference point, and channel values of green channels G which respectively correspond to the current reference point and the original blue channel B;
taking the current position of the pixel point of any original blue channel B as a reference point, and respectively acquiring the channel value of an original red channel R which is closest to the current reference point in the diagonal direction of the pixel point, and the channel values of green channels G which respectively correspond to the current reference point and the original red channel R;
the gray value of the missing red channel R on any original blue channel B is represented as R(i,j)
[Formula image GDA0002957676440000041 (expression for R(i,j)), not reproduced in this text]
val1=R(i-1,j-1)+R(i+1,j+1)+2×G(i,j)-G(i-1,j-1)-G(i+1,j+1)
val2=R(i+1,j-1)+R(i-1,j+1)+2×G(i,j)-G(i+1,j-1)-G(i-1,j+1)
grad1=|R(i-1,j-1)+R(i+1,j+1)|+|G(i-1,j-1)-G(i,j)|+|G(i,j)-G(i+1,j+1)|,
grad2=|R(i+1,j-1)+R(i-1,j+1)|+|G(i+1,j-1)-G(i,j)|+|G(i,j)-G(i-1,j+1)|,
The missing blue channel B on any original red channel R has a gray value denoted B(i,j)
[Formula image GDA0002957676440000042 (expression for B(i,j)), not reproduced in this text]
val3=B(i-1,j-1)+B(i+1,j+1)+2×G(i,j)-G(i-1,j-1)-G(i+1,j+1)
val4=B(i+1,j-1)+B(i-1,j+1)+2×G(i,j)-G(i+1,j-1)-G(i-1,j+1)
grad3=|B(i-1,j-1)+B(i+1,j+1)|+|G(i-1,j-1)-G(i,j)|+|G(i,j)-G(i+1,j+1)|,
grad4=|B(i+1,j-1)+B(i-1,j+1)|+|G(i+1,j-1)-G(i,j)|+|G(i,j)-G(i-1,j+1)|,
Wherein (i, j) represents the coordinates of the reference point, and R(*), G(*) and B(*) represent the values of the red, green and blue channels, respectively, at the indicated coordinates.
In order to achieve the above object, according to another aspect of the present invention, there is provided a compression processing system for a capsule endoscope bayer image, the system including: the image data acquisition module is used for acquiring image data in an original bayer format;
the bayer-to-RGB module is used for supplementing the missing gray value in each pixel point in the image data in bayer format according to the gradient information of the data in each channel to form an RGB image;
and the data compression and output module is used for compressing the RGB image data and then outputting the RGB image data.
As a further improvement of an embodiment of the present invention, the image data acquisition module specifically includes:
the edge expansion module is used for expanding original edge data in the image data in the bayer format;
the first supplementing unit is used for respectively taking an original red channel R and an original blue channel B in the image data in the original bayer format as reference points, and obtaining the gray value of the green channel G missing from each reference point according to the gray values of the reference points in the horizontal direction and the vertical direction and the gradient value representing the change of the gray value;
the second supplementing unit is used for respectively taking an original green channel G in the image data in the original bayer format as a reference point, and obtaining gray values respectively corresponding to a red channel R and a blue channel B which are missing on each reference point according to the gray value of the reference point in the horizontal direction or the vertical direction;
and the third supplementing unit is used for respectively taking the original red channel R and the original blue channel B in the image data in the original bayer format as reference points, and obtaining the gray value of the blue channel B missing on the original red channel R and the gray value of the red channel R missing on the original blue channel B according to the gray value of the reference points in the diagonal direction and the gradient value representing the change of the gray value.
As a further improvement of an embodiment of the present invention, the first supplement unit is specifically configured to:
taking the current position of any pixel point lacking the green channel G as a reference point, and acquiring, in the row or column direction, the channel values that are closest to the reference point and have the same channel color as the reference point, together with the values of the original green channel G lying between the reference point and those same-color channel values;
then, the gray value of the green channel G missing from any reference point is represented as G(i,j)
[Formula image GDA0002957676440000061 (expression for G(i,j)), not reproduced in this text]
valv=|G(i,j-1)+R/B(i,j)+G(i,j+1)|×2-R/B(i,j-2)-R/B(i,j+2)
valh=|G(i-1,j)+R/B(i,j)+G(i+1,j)|×2-R/B(i-2,j)-R/B(i+2,j)
[Formula images GDA0002957676440000062 and GDA0002957676440000063 (definitions of gradv and gradh), not reproduced in this text]
Wherein (i, j) represents the coordinates of the reference point; if the current reference point is the red channel R, R/B(*) represents values of the original red channel R at the indicated coordinates, and if the current reference point is the blue channel B, R/B(*) represents values of the original blue channel B; G(*) represents values of the green channel; round(x) indicates rounding, and mean(x) indicates sorting the values therein and taking the value at the middle position.
As a further improvement of an embodiment of the present invention, the second supplementary unit is specifically configured to:
respectively acquiring channel values of an original red channel R and an original blue channel B which are closest to the current reference point in the row or column direction of the original green channel G by taking the current position of the pixel point of any original green channel G as the reference point, and acquiring the channel values of the green channels G corresponding to the original red channel R and the original blue channel B;
the gray value of the red channel R and the blue channel B missing from the pixel point of any original green channel G is represented as R/B(i,j)
If the channel values of the original red channel R or the original blue channel B closest to the reference point are arranged in the horizontal direction, then
R/B(i,j)=(R/B(i,j-1)+R/B(i,j+1)+G(i,j)×2-G(i,j-1)-G(i,j+1))÷2,
and if the channel values of the original red channel R or the original blue channel B closest to the reference point are arranged in the vertical direction, then
R/B(i,j)=(R/B(i-1,j)+R/B(i+1,j)+G(i,j)×2-G(i-1,j)-G(i+1,j))÷2,
Wherein, when the red channel R missing at the current reference point is being obtained, R/B(*) represents values of the red channel R at the indicated coordinates, and when the blue channel B missing at the current reference point is being obtained, R/B(*) represents values of the blue channel B.
As a further improvement of an embodiment of the present invention, the third supplementary unit is specifically configured to:
taking the current position of a pixel point of any original red channel R as a reference point, respectively acquiring a channel value of an original blue channel B which is closest to the current reference point in the diagonal direction of the reference point, and channel values of green channels G which respectively correspond to the current reference point and the original blue channel B;
and taking the current position of the pixel point of any original blue channel B as a reference point, respectively acquiring the channel value of an original red channel R which is closest to the current reference point in the diagonal direction of the pixel point, and the channel values of the green channels G which respectively correspond to the current reference point and the original red channel R;
the gray value of the missing red channel R on any original blue channel B is represented as R(i,j)
[Formula image GDA0002957676440000071 (expression for R(i,j)), not reproduced in this text]
val1=R(i-1,j-1)+R(i+1,j+1)+2×G(i,j)-G(i-1,j-1)-G(i+1,j+1)
val2=R(i+1,j-1)+R(i-1,j+1)+2×G(i,j)-G(i+1,j-1)-G(i-1,j+1)
grad1=|R(i-1,j-1)+R(i+1,j+1)|+|G(i-1,j-1)-G(i,j)|+|G(i,j)-G(i+1,j+1)|,
grad2=|R(i+1,j-1)+R(i-1,j+1)|+|G(i+1,j-1)-G(i,j)|+|G(i,j)-G(i-1,j+1)|,
The missing blue channel B on any original red channel R has a gray value denoted B(i,j)
[Formula image GDA0002957676440000072 (expression for B(i,j)), not reproduced in this text]
val3=B(i-1,j-1)+B(i+1,j+1)+2×G(i,j)-G(i-1,j-1)-G(i+1,j+1)
val4=B(i+1,j-1)+B(i-1,j+1)+2×G(i,j)-G(i+1,j-1)-G(i-1,j+1)
grad3=|B(i-1,j-1)+B(i+1,j+1)|+|G(i-1,j-1)-G(i,j)|+|G(i,j)-G(i+1,j+1)|,
grad4=|B(i+1,j-1)+B(i-1,j+1)|+|G(i+1,j-1)-G(i,j)|+|G(i,j)-G(i-1,j+1)|,
Wherein (i, j) represents the coordinates of the reference point, and R(*), G(*) and B(*) represent the values of the red, green and blue channels, respectively, at the indicated coordinates.
Compared with the prior art, the invention has the beneficial effects that: the capsule endoscope bayer image compression processing method and system can retain image details, reduce edge area mosaic effect, and improve image quality so as to carry out more comprehensive diagnosis.
Drawings
FIG. 1 is a schematic flow chart of a method for compressing a bayer image in a capsule endoscope according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a specific implementation of step S2 in fig. 1;
FIG. 3 is a schematic diagram of image data in a bayer format in one particular example of the invention;
fig. 4 is a schematic diagram of the image in the bayer format after the data edge is expanded in step S21 according to a specific example of the present invention;
fig. 5 is a schematic diagram of the step S22 after supplementing the missing gray-level value of the green channel G in one specific example of the present invention;
fig. 6 is a schematic diagram of the step S23 adopted in one embodiment of the present invention after supplementing the grayscale values of the missing red channel R and blue channel B in the original green channel G;
fig. 7 is a diagram illustrating a step S23 of supplementing the grayscale value of the blue channel B missing from the original red channel R and supplementing the grayscale value of the red channel R missing from the original blue channel B according to a specific example of the present invention;
fig. 8 is a block diagram of a system for compressing a bayer image in a capsule endoscope according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments shown in the drawings. These embodiments are not intended to limit the present invention, and structural, methodological, or functional changes made by those skilled in the art according to these embodiments are included in the scope of the present invention.
As shown in fig. 1, a first embodiment of the present invention provides a method for compressing a bayer image of a capsule endoscope, the method including:
s1, collecting image data in an original bayer format;
s2, supplementing the missing gray value of each pixel point in the image data in the bayer format according to the gradient information of the data in each channel to form an RGB image;
and S3, compressing the RGB image data and outputting the compressed RGB image data.
Referring to fig. 3, each cell of the image data represents a pixel point. In the bayer format each pixel point retains the gray value of only one of the three RGB channels, and because the gray values of different channels differ, the image data in bayer format shows relatively large discontinuities.
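For illustration only, the following minimal sketch (an assumption added for this description, not part of the patent text) records which single channel a given pixel position stores in the bayer arrangement implied by the pixel labels used in the figures (B11, G12, ..., R44, B55):

```python
def bayer_channel_at(i, j):
    """Channel stored at row i, column j (1-based, as in the figures) of the
    assumed Bayer mosaic: odd rows run B G B G ..., even rows run G R G R ..."""
    if i % 2 == 1:                        # odd rows: B G B G ...
        return 'B' if j % 2 == 1 else 'G'
    return 'G' if j % 2 == 1 else 'R'     # even rows: G R G R ...
```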
As for step S2, referring to fig. 2, in a specific implementation manner of the present invention, step S2 includes step S21, step S22, and step S23, which are executed successively.
And S21, expanding the original edge data in the image data in the bayer format.
And S22, respectively taking the original red channel R and the original blue channel B in the original image data in the bayer format as reference points, and obtaining the gray value of the green channel G missing from each reference point according to the gray values of the reference points in the horizontal direction and the vertical direction and the gradient value representing the change of the gray value.
S23, respectively taking the original green channel G in the image data in the original bayer format as a reference point, and obtaining the gray values corresponding to the missing red channel R and blue channel B on each reference point according to the gray values of the reference point in the horizontal direction or the vertical direction;
and respectively taking the original red channel R and the original blue channel B in the image data in the original bayer format as reference points, and obtaining the gray value of the blue channel B missing on the original red channel R and the gray value of the red channel R missing on the original blue channel B according to the gray value of the reference points in the diagonal direction and the gradient value representing the change of the gray value.
For step S21, there are many implementations of expanding the original edge data in the image data in the bayer format, and preferred implementations of the present invention mainly include two types.
In a first implementation manner, as shown in fig. 4, pixel points with a gray value of 0 are spliced to the original edge positions of the image data in the bayer format to expand the image data in the bayer format; in a specific implementation of the present invention, the extension requirements for the edge data are three columns at the left edge, two columns at the right edge, three rows at the top edge, and three rows at the bottom edge.
In another implementation mode, original edge data in the image data in the bayer format is copied and then spliced to the original edge position so as to expand the image data in the bayer format; specifically, taking left edge expansion as an example, the three columns of data whose first entries are B11, G12 and B13 are copied and spliced in their entirety onto the left edge side to expand the left edge; likewise, taking right edge expansion as an example, the two columns of data whose first entries are B17 and G18 are copied and spliced in their entirety onto the right edge side to expand the right edge; the other edge extensions are not described in detail. It should be noted that when the edges are extended in this way, the extension must follow the arrangement rule of the B, G, R data in the original bayer-format image data so that the data remain consistent, which is not further described herein.
It should be noted that, in other embodiments of the present invention, the extension amounts may be correspondingly increased or decreased; the expansion schemes for adding or removing rows and columns follow directly from the two implementations above and are not further described herein.
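As a rough sketch only (the numpy-based implementation, the function name and the default margins are illustrative assumptions), the first expansion scheme with the margins of the example above could be written as follows; the second scheme would instead splice copies of the original edge rows and columns back onto the edges while respecting the 2-pixel-periodic B, G, R arrangement.

```python
import numpy as np

def expand_with_zeros(bayer, top=3, bottom=3, left=3, right=2):
    """First scheme: splice pixels with gray value 0 onto the original edges
    (three columns left, two columns right, three rows top and bottom)."""
    return np.pad(bayer, ((top, bottom), (left, right)),
                  mode='constant', constant_values=0)
```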
For step S22, the method specifically includes: taking the current position of any pixel point lacking the green channel G as a reference point, and acquiring, in the row or column direction, the channel values that are closest to the reference point and have the same channel color as the reference point, together with the values of the original green channel G lying between the reference point and those same-color channel values;
then, the gray value of the green channel G missing from any reference point is represented as G(i,j)
[Formula image GDA0002957676440000101 (expression for G(i,j)), not reproduced in this text]
valv=|G(i,j-1)+R/B(i,j)+G(i,j+1)|×2-R/B(i,j-2)-R/B(i,j+2)
valh=|G(i-1,j)+R/B(i,j)+G(i+1,j)|×2-R/B(i-2,j)-R/B(i+2,j)
[Formula images GDA0002957676440000111 and GDA0002957676440000112 (definitions of gradv and gradh), not reproduced in this text]
Wherein valv and valh represent the new gray values calculated in the horizontal and vertical directions respectively, gradv and gradh represent the gradient values in the horizontal and vertical directions respectively, and (i, j) represents the coordinates of the reference point; if the current reference point is the red channel R, R/B(*) represents values of the original red channel R at the indicated coordinates, and if the current reference point is the blue channel B, R/B(*) represents values of the original blue channel B; G(*) represents values of the green channel; round(x) indicates rounding, and mean(x) indicates sorting the values therein and taking the value at the middle position.
For ease of understanding, the present invention is described with reference to a specific example shown in fig. 5, in which a specific description is given by taking the example of the green channel G44 missing on the original red channel R44 and the green channel G55 missing on the original blue channel B55.
Correspondingly, the specific process of obtaining the gray value of the green channel G44 missing on the original red channel R44 is as follows:
[Formula image GDA0002957676440000113 (piecewise expression for G44), not reproduced in this text]
valv=|G43+R44+G45|×2-R42-R46,
valh=|G34+R44+G54|×2-R24-R64,
[Formula images GDA0002957676440000114 and GDA0002957676440000115 (gradv and gradh for G44), not reproduced in this text]
the specific process of obtaining the gray value of the missing green channel G55 on the original blue channel B55 is as follows:
[Formula image GDA0002957676440000121 (piecewise expression for G55), not reproduced in this text]
valv=|G54+B55+G56|×2-B53-B57,
valh=|G45+B55+G65|×2-B35-B75,
[Formula images GDA0002957676440000122 and GDA0002957676440000123 (gradv and gradh for G55), not reproduced in this text]
As described above, the gray value of the G channel missing at R-channel and B-channel positions is calculated along the direction in which the gradient change is smaller, so that the continuity of the gradient change is maintained; details are therefore preserved and no mosaic effect is introduced into the image.
It should be noted that, in step S22 and the corresponding examples of the above embodiments, the calculation is performed in the 3 × 3 neighborhood where the reference point is located, and in other embodiments of the present invention, the reference point neighborhood region may be expanded, for example, expanded to the 5 × 5 neighborhood.
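By way of illustration, the sketch below computes valv and valh exactly as in the formulas quoted above; the gradient expressions and the final selection rule correspond to formula images that are not reproduced in this text, so the versions used here (absolute first- and second-order differences, choosing the direction with the smaller gradient, averaging otherwise, with a 1/4 scaling) are hedged assumptions that merely follow the stated idea of interpolating along the direction of smaller gradient change.

```python
def interp_green(c, i, j):
    """Sketch: estimate the missing G value at an R or B pixel (i, j) of the
    edge-expanded Bayer plane c (a 2-D numpy array, 0-based indices).
    valv/valh follow the patent text; gradv/gradh and the selection rule are
    assumptions standing in for the formula images not reproduced here."""
    valv = abs(c[i, j - 1] + c[i, j] + c[i, j + 1]) * 2 - c[i, j - 2] - c[i, j + 2]
    valh = abs(c[i - 1, j] + c[i, j] + c[i + 1, j]) * 2 - c[i - 2, j] - c[i + 2, j]
    # assumed gradients: change of G along each direction plus the second-order
    # change of the reference channel (not the patent's exact expressions)
    gradv = abs(c[i, j - 1] - c[i, j + 1]) + abs(2 * c[i, j] - c[i, j - 2] - c[i, j + 2])
    gradh = abs(c[i - 1, j] - c[i + 1, j]) + abs(2 * c[i, j] - c[i - 2, j] - c[i + 2, j])
    if gradv < gradh:
        return round(valv / 4)
    if gradh < gradv:
        return round(valh / 4)
    return round((valv + valh) / 8)
```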
For step S23, the implementation process thereof specifically includes step S231 and step S232, wherein it should be noted that the implementation orders of step S231 and step S232 may be exchanged, and the exchange of the steps does not affect the final result.
Step S231 includes: respectively taking an original green channel G in the image data in the original bayer format as a reference point, and obtaining gray values corresponding to a red channel R and a blue channel B which are missing on each reference point according to the gray value of the reference point in the horizontal direction or the vertical direction; preferably, in an embodiment of the present invention, the step S231 specifically includes: respectively acquiring channel values of an original red channel R and an original blue channel B which are closest to the current reference point in the row or column direction of the original green channel G by taking the current position of the pixel point of any original green channel G as the reference point, and acquiring the channel values of the green channels G corresponding to the original red channel R and the original blue channel B;
the gray value of the red channel R and the blue channel B missing from the pixel point of any original green channel G is represented as R/B(i,j)
If the channel values of the original red channel R or the original blue channel B closest to the reference point are arranged in the horizontal direction, then
R/B(i,j)=(R/B(i,j-1)+R/B(i,j+1)+G(i,j)×2-G(i,j-1)-G(i,j+1))÷2,
and if the channel values of the original red channel R or the original blue channel B closest to the reference point are arranged in the vertical direction, then
R/B(i,j)=(R/B(i-1,j)+R/B(i+1,j)+G(i,j)×2-G(i-1,j)-G(i+1,j))÷2,
Wherein, when the red channel R missing at the current reference point is being obtained, R/B(*) represents values of the red channel R at the indicated coordinates, and when the blue channel B missing at the current reference point is being obtained, R/B(*) represents values of the blue channel B.
For convenience of understanding, the present invention is described with reference to fig. 6, and in this example, the original green channel G45 is taken as a reference point, and the grayscale value of the red channel R45 that is missing thereon and the grayscale value of the blue channel B45 are taken as examples for specific description.
Correspondingly, the original red channel R closest to G45 is arranged in the horizontal direction
R45=(R44+R46+G45×2-G44-G46)÷2;
The original blue channel B closest to G45 is arranged in the vertical direction
B45=(B35+B55+G45×2-G35-G55)÷2.
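For reference, a small sketch (the function name and numpy-style 0-based indexing are assumptions) that applies the two formulas above; chan holds the known R (or B) samples of the bayer plane and G the green plane completed in the previous step. For the example of fig. 6, R45 and B45 would be obtained with horizontal set to True and False respectively (using 0-based indices for the same positions).

```python
def interp_rb_at_green(chan, G, i, j, horizontal):
    """Estimate the missing R (or B) value at a green pixel (i, j), following
    the horizontal and vertical formulas quoted above."""
    if horizontal:
        return (chan[i, j - 1] + chan[i, j + 1]
                + 2 * G[i, j] - G[i, j - 1] - G[i, j + 1]) / 2
    return (chan[i - 1, j] + chan[i + 1, j]
            + 2 * G[i, j] - G[i - 1, j] - G[i + 1, j]) / 2
```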
Step S232 includes: respectively taking an original red channel R and an original blue channel B in the image data in the original bayer format as reference points, and obtaining a gray value of a blue channel B missing on the original red channel R and a gray value of a red channel R missing on the original blue channel B according to a gray value of the reference points in the diagonal direction and a gradient value representing the change of the gray value; preferably, in an embodiment of the present invention, the step S232 specifically includes: taking the current position of a pixel point of any original red channel R as a reference point, respectively acquiring a channel value of an original blue channel B which is closest to the current reference point in the diagonal direction of the reference point, and channel values of green channels G which respectively correspond to the current reference point and the original blue channel B;
taking the current position of the pixel point of any original blue channel B as a reference point, and respectively acquiring the channel value of an original red channel R which is closest to the current reference point in the diagonal direction of the pixel point, and the channel values of green channels G which respectively correspond to the current reference point and the original red channel R;
the gray value of the missing red channel R on any original blue channel B is represented as R(i,j)
[Formula image GDA0002957676440000141 (expression for R(i,j)), not reproduced in this text]
val1=R(i-1,j-1)+R(i+1,j+1)+2×G(i,j)-G(i-1,j-1)-G(i+1,j+1)
val2=R(i+1,j-1)+R(i-1,j+1)+2×G(i,j)-G(i+1,j-1)-G(i-1,j+1)
grad1=|R(i-1,j-1)+R(i+1,j+1)|+|G(i-1,j-1)-G(i,j)|+|G(i,j)-G(i+1,j+1)|,
grad2=|R(i+1,j-1)+R(i-1,j+1)|+|G(i+1,j-1)-G(i,j)|+|G(i,j)-G(i-1,j+1)|,
The gray value of the missing blue channel B on any original red channel R is represented as B(i,j)
[Formula image GDA0002957676440000142 (expression for B(i,j)), not reproduced in this text]
val3=B(i-1,j-1)+B(i+1,j+1)+2×G(i,j)-G(i-1,j-1)-G(i+1,j+1)
val4=B(i+1,j-1)+B(i-1,j+1)+2×G(i,j)-G(i+1,j-1)-G(i-1,j+1)
grad3=|B(i-1,j-1)+B(i+1,j+1)|+|G(i-1,j-1)-G(i,j)|+|G(i,j)-G(i+1,j+1)|,
grad4=|B(i+1,j-1)+B(i-1,j+1)|+|G(i+1,j-1)-G(i,j)|+|G(i,j)-G(i-1,j+1)|,
Wherein val1, val2, val3 and val4 respectively represent the new gray values calculated along the diagonal directions, grad1, grad2, grad3 and grad4 respectively represent the gradient values along the diagonal directions, (i, j) represents the coordinates of the reference point, and R(*), G(*) and B(*) represent the values of the red, green and blue channels, respectively, at the indicated coordinates.
For convenience of understanding, referring to fig. 7, a specific example is described for reference, in which the original red channel R44 is used as a reference point to obtain the gray-level value of the blue channel B44 missing thereon, and the original blue channel B55 is used as a reference point to obtain the gray-level value of the red channel R55 missing thereon.
Correspondingly, the specific process of obtaining the gray value of the missing red channel R55 on the original blue channel B55 is as follows:
[Formula image GDA0002957676440000151 (piecewise expression for R55), not reproduced in this text]
val1=R44+R66+2×G55-G44-G66,
val2=R64+R46+2×G55-G64-G46,
grad1=|R44+R66|+|G44-G55|+|G55-G66|,
grad2=|R64+R46|+|G64-G55|+|G55-G46|,
the specific process of obtaining the gray value of the missing blue channel B44 on the original red channel R44 is as follows:
[Formula image GDA0002957676440000152 (piecewise expression for B44), not reproduced in this text]
val3=B33+B55+2×G44-G33-G55,
val4=B53+B35+2×G44-G53-G35,
grad3=|B33+B55|+|G33-G44|+|G44-G55|,
grad4=|B53+B35|+|G53-G44|+|G44-G35|,
as described above, the gray value of the missing R channel on the B channel and the gray value of the missing B channel on the R channel are calculated in the direction in which the gradient change is small, so that the continuity of the gradient change can be further maintained, and thus the details are maintained, and the mosaic effect of the image is not caused.
It should be noted that, in step S23 and the corresponding examples of the above embodiments, the calculation is performed in the 3 × 3 neighborhood where the reference point is located, and in other embodiments of the present invention, the reference point neighborhood region may be expanded, for example, expanded to the 5 × 5 neighborhood.
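Analogously to the green-channel sketch above, val1/val2 and grad1/grad2 in the sketch below follow the formulas quoted in the text, while the final selection rule (and its 1/2 scaling) again stands in for a formula image that is not reproduced here and is therefore only an assumption: the diagonal with the smaller gradient is used, and the two estimates are averaged when the gradients are equal.

```python
def interp_rb_diagonal(chan, G, i, j):
    """Sketch: estimate the missing R value at a B pixel (or the missing B value
    at an R pixel) at (i, j). chan holds the known samples of the channel being
    reconstructed and G the completed green plane (both 2-D numpy arrays)."""
    val1 = chan[i - 1, j - 1] + chan[i + 1, j + 1] + 2 * G[i, j] - G[i - 1, j - 1] - G[i + 1, j + 1]
    val2 = chan[i + 1, j - 1] + chan[i - 1, j + 1] + 2 * G[i, j] - G[i + 1, j - 1] - G[i - 1, j + 1]
    grad1 = abs(chan[i - 1, j - 1] + chan[i + 1, j + 1]) + abs(G[i - 1, j - 1] - G[i, j]) + abs(G[i, j] - G[i + 1, j + 1])
    grad2 = abs(chan[i + 1, j - 1] + chan[i - 1, j + 1]) + abs(G[i + 1, j - 1] - G[i, j]) + abs(G[i, j] - G[i - 1, j + 1])
    if grad1 < grad2:
        return round(val1 / 2)
    if grad2 < grad1:
        return round(val2 / 2)
    return round((val1 + val2) / 4)
```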
In step S3, in an embodiment of the present invention, the RGB image data may be compressed by JPEG2000 lossless compression, JPEG compression, or the like. In the specific embodiment of the invention, JPEG is adopted to compress the processed RGB image data.
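As a minimal illustration (the use of the Pillow library and the quality setting are assumptions; the patent only states that JPEG, or lossless JPEG2000, compression is applied to the reconstructed RGB data), the final step might be sketched as:

```python
import numpy as np
from PIL import Image  # Pillow; an illustrative choice, not specified by the patent

def compress_rgb(rgb, path, quality=90):
    """Clamp the reconstructed RGB array to 8 bits and write it out as JPEG."""
    Image.fromarray(np.clip(rgb, 0, 255).astype(np.uint8)).save(
        path, format="JPEG", quality=quality)
```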
Further, the result of the compression processing is sent to an external device for operations such as storage, display, output, and the like, and is used for subsequent retrieval, which is convenient for diagnosis and is not described herein again.
As shown in fig. 8, the present invention provides a system for processing capsule endoscope images, the system including: an image data acquisition module 100, a bayer-to-RGB module 200, and a data compression and output module 300.
The image data acquisition module 100 is used for acquiring original image data in a bayer format.
The bayer to RGB module 200 is configured to supplement the missing gray value in each pixel point in the image data in bayer format according to the gradient information of the data in each channel to form an RGB image.
The data compression and output module 300 is configured to compress the RGB image data and then output the RGB image data.
In a specific implementation manner of the present invention, the bayer-to-RGB module 200 specifically includes: an edge extension module 201, a first supplementing unit 203, a second supplementing unit 205 and a third supplementing unit 207.
The edge extension module 201 is configured to extend original edge data in the image data in the bayer format.
The first supplementing unit 203 is configured to obtain, by using the original red channel R and the original blue channel B in the original image data in the bayer format as reference points, a gray value of the green channel G missing from each reference point according to gray values of the reference points in the horizontal direction and the vertical direction and a gradient value representing a change of the gray value.
The second supplementing unit 205 is configured to use the original green channel G in the original bayer pattern image data as a reference point, and obtain gray values corresponding to the missing red channel R and blue channel B at each reference point according to the gray value of the reference point in the horizontal direction or the vertical direction.
The third supplementing unit 207 is configured to obtain a gray value of the blue channel B missing on the original red channel R and obtain a gray value of the red channel R missing on the original blue channel B according to a gray value of the reference point in the diagonal direction and a gradient value representing a change of the gray value, with the original red channel R and the original blue channel B in the original bayer-format image data as reference points, respectively.
Preferably, the edge extension module 201 can extend the original edge data in the image data in the bayer format in two ways.
In a first implementation manner, pixel points with a gray value of 0 are spliced to the original edge position of the image data in the bayer format to expand the image data in the bayer format; in another implementation, original edge data in the image data in the bayer format is copied and then spliced to an original edge position, so as to expand the image data in the bayer format.
Preferably, the first supplementing unit 203 is specifically configured to: take the current position of any pixel point lacking the green channel G as a reference point, and acquire, in the row or column direction, the channel values that are closest to the reference point and have the same channel color as the reference point, together with the values of the original green channel G lying between the reference point and those same-color channel values;
then, the gray value of the green channel G missing from any reference point is represented as G(i,j)
[Formula image GDA0002957676440000171 (expression for G(i,j)), not reproduced in this text]
valv=|G(i,j-1)+R/B(i,j)+G(i,j+1)|×2-R/B(i,j-2)-R/B(i,j+2)
valh=|G(i-1,j)+R/B(i,j)+G(i+1,j)|×2-R/B(i-2,j)-R/B(i+2,j)
[Formula images GDA0002957676440000172 and GDA0002957676440000173 (definitions of gradv and gradh), not reproduced in this text]
Wherein valv and valh represent the new gray values calculated in the horizontal and vertical directions respectively, gradv and gradh represent the gradient values in the horizontal and vertical directions respectively, and (i, j) represents the coordinates of the reference point; if the current reference point is the red channel R, R/B(*) represents values of the original red channel R at the indicated coordinates, and if the current reference point is the blue channel B, R/B(*) represents values of the original blue channel B; G(*) represents values of the green channel; round(x) indicates rounding, and mean(x) indicates sorting the values therein and taking the value at the middle position.
Preferably, the second supplementing unit 205 is specifically configured to: respectively acquiring channel values of an original red channel R and an original blue channel B which are closest to the current reference point in the row or column direction of the original green channel G by taking the current position of the pixel point of any original green channel G as the reference point, and acquiring the channel values of the green channels G corresponding to the original red channel R and the original blue channel B;
the gray values of the red channel R and the blue channel B missing from the pixel point of any original green channel G are represented as R/B(i,j)
If the channel values of the original red channel R or the original blue channel B closest to the reference point are arranged in the horizontal direction, then
R/B(i,j)=(R/B(i,j-1)+R/B(i,j+1)+G(i,j)×2-G(i,j-1)-G(i,j+1))÷2,
and if the channel values of the original red channel R or the original blue channel B closest to the reference point are arranged in the vertical direction, then
R/B(i,j)=(R/B(i-1,j)+R/B(i+1,j)+G(i,j)×2-G(i-1,j)-G(i+1,j))÷2,
Wherein, when the red channel R missing at the current reference point is being obtained, R/B(*) represents values of the red channel R at the indicated coordinates, and when the blue channel B missing at the current reference point is being obtained, R/B(*) represents values of the blue channel B.
Preferably, the third supplementing unit 207 is specifically configured to: taking the current position of a pixel point of any original red channel R as a reference point, respectively acquiring a channel value of an original blue channel B which is closest to the current reference point in the diagonal direction of the reference point, and channel values of green channels G which respectively correspond to the current reference point and the original blue channel B; taking the current position of the pixel point of any original blue channel B as a reference point, and respectively acquiring the channel value of an original red channel R which is closest to the current reference point in the diagonal direction of the pixel point, and the channel values of green channels G which respectively correspond to the current reference point and the original red channel R;
the gray value of the missing red channel R on any original blue channel B is represented as R(i,j)
[Formula image GDA0002957676440000191 (expression for R(i,j)), not reproduced in this text]
val1=R(i-1,j-1)+R(i+1,j+1)+2×G(i,j)-G(i-1,j-1)-G(i+1,j+1)
val2=R(i+1,j-1)+R(i-1,j+1)+2×G(i,j)-G(i+1,j-1)-G(i-1,j+1)
grad1=|R(i-1,j-1)+R(i+1,j+1)|+|G(i-1,j-1)-G(i,j)|+|G(i,j)-G(i+1,j+1)|,
grad2=|R(i+1,j-1)+R(i-1,j+1)|+|G(i+1,j-1)-G(i,j)|+|G(i,j)-G(i-1,j+1)|,
The missing blue channel B on any original red channel R has a gray value denoted B(i,j)
[Formula image GDA0002957676440000192 (expression for B(i,j)), not reproduced in this text]
val3=B(i-1,j-1)+B(i+1,j+1)+2×G(i,j)-G(i-1,j-1)-G(i+1,j+1)
val4=B(i+1,j-1)+B(i-1,j+1)+2×G(i,j)-G(i+1,j-1)-G(i-1,j+1)
grad3=|B(i-1,j-1)+B(i+1,j+1)|+|G(i-1,j-1)-G(i,j)|+|G(i,j)-G(i+1,j+1)|,
grad4=|B(i+1,j-1)+B(i-1,j+1)|+|G(i+1,j-1)-G(i,j)|+|G(i,j)-G(i-1,j+1)|,
Wherein val1, val2, val3 and val4 respectively represent the new gray values calculated along the diagonal directions, grad1, grad2, grad3 and grad4 respectively represent the gradient values along the diagonal directions, (i, j) represents the coordinates of the reference point, and R(*), G(*) and B(*) represent the values of the red, green and blue channels, respectively, at the indicated coordinates.
Preferably, the data compression and output module 300 can perform compression processing on the RGB image data by JPEG2000 lossless compression, JPEG compression, and the like. In the embodiment of the present invention, the data compression and output module 300 compresses the processed RGB image data by using JPEG.
Further, the data compression and output module 300 is further configured to send the result of the compression processing to an external device for operations such as storage, display, and output, and for subsequent retrieval, which is convenient for diagnosis and is not described herein again.
In summary, the method and system for compressing the bayer image of the capsule endoscope rearrange the image data in bayer format, and increase the continuity of the image by using the correlation between the color channels, thereby improving the data compression efficiency, increasing the service time of the battery, and obtaining more images of the digestive tract to perform more comprehensive diagnosis.
For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, the functionality of the various modules may be implemented in the same one or more software and/or hardware implementations of the invention.
The above-described embodiments of the apparatus are merely illustrative, and the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
It should be understood that although the present description refers to embodiments, not every embodiment contains only a single technical solution, and such description is for clarity only, and those skilled in the art should make the description as a whole, and the technical solutions in the embodiments can also be combined appropriately to form other embodiments understood by those skilled in the art.
The above-listed detailed description is only a specific description of a possible embodiment of the present invention, and they are not intended to limit the scope of the present invention, and equivalent embodiments or modifications made without departing from the technical spirit of the present invention should be included in the scope of the present invention.

Claims (6)

1. A compression processing method of a capsule endoscope bayer image is characterized by comprising the following steps:
s1, collecting image data in an original bayer format;
s2, supplementing the missing gray value of each pixel point in the image data in the bayer format according to the gradient information of the data in each channel to form an RGB image;
s3, compressing the RGB image data and outputting the compressed RGB image data;
the step S2 specifically includes:
s21, expanding the original edge data in the image data in the bayer format;
s22, respectively taking an original red channel R and an original blue channel B in the original image data in the bayer format as reference points, and obtaining the gray value of the green channel G missing from each reference point according to the gray values of the reference points in the horizontal direction and the vertical direction and the gradient value representing the change of the gray value;
s23, respectively taking the original green channel G in the image data in the original bayer format as a reference point, and obtaining the gray values corresponding to the missing red channel R and blue channel B on each reference point according to the gray values of the reference point in the horizontal direction or the vertical direction;
respectively taking an original red channel R and an original blue channel B in the image data in the original bayer format as reference points, and obtaining the gray value of the blue channel B missing on the original red channel R and the gray value of the red channel R missing on the original blue channel B according to the gray value of the reference points in the diagonal direction and the gradient value representing the change of the gray value;
wherein, the step S22 specifically includes:
taking the current position of any pixel point lacking the green channel G as a reference point, and acquiring, in the row or column direction, the channel values that are closest to the reference point and have the same channel color as the reference point, together with the values of the original green channel G lying between the reference point and those same-color channel values;
then, the gray value of the green channel G missing from any reference point is represented as G(i,j)
[Formula image FDA0002957676430000011 (expression for G(i,j)), not reproduced in this text]
valv=|G(i,j-1)+R/B(i,j)+G(i,j+1)|×2-R/B(i,j-2)-R/B(i,j+2)
valh=|G(i-1,j)+R/B(i,j)+G(i+1,j)|×2-R/B(i-2,j)-R/B(i+2,j)
[Formula images FDA0002957676430000021 and FDA0002957676430000022 (definitions of gradv and gradh), not reproduced in this text]
Wherein (i, j) represents the coordinates of the reference point; if the current reference point is the red channel R, R/B(*) represents values of the original red channel R at the indicated coordinates, and if the current reference point is the blue channel B, R/B(*) represents values of the original blue channel B; G(*) represents values of the green channel; round(x) indicates rounding, and mean(x) indicates sorting the values therein and taking the value at the middle position.
2. The method for compressing a bayer image for a capsule endoscope according to claim 1, wherein the step S23 "taking an original green channel G in the original bayer format image data as a reference point, and obtaining gray values corresponding to a red channel R and a blue channel B missing from each reference point according to gray values of the reference point in a horizontal direction or a vertical direction" specifically includes:
respectively acquiring channel values of an original red channel R and an original blue channel B which are closest to the current reference point in the row or column direction of the original green channel G by taking the current position of the pixel point of any original green channel G as the reference point, and acquiring the channel values of the green channels G corresponding to the original red channel R and the original blue channel B;
the gray value of the red channel R and the blue channel B missing from the pixel point of any original green channel G is represented as R/B(i,j)
If the channel values of the original red channel R or the original blue channel B closest to the reference point are arranged in the horizontal direction, then
R/B(i,j)=(R/B(i,j-1)+R/B(i,j+1)+G(i,j)×2-G(i,j-1)-G(i,j+1))÷2,
and if the channel values of the original red channel R or the original blue channel B closest to the reference point are arranged in the vertical direction, then
R/B(i,j)=(R/B(i-1,j)+R/B(i+1,j)+G(i,j)×2-G(i-1,j)-G(i+1,j))÷2,
Wherein, when the red channel R missing at the current reference point is being obtained, R/B(*) represents values of the red channel R at the indicated coordinates, and when the blue channel B missing at the current reference point is being obtained, R/B(*) represents values of the blue channel B.
3. The method for compressing a bayer image for a capsule endoscope according to claim 1, wherein the step S23 "obtaining a gray value of a blue channel B missing from an original red channel R and obtaining a gray value of a red channel R missing from an original blue channel B based on a gray value of an original red channel R and an original blue channel B in image data of an original bayer format in a diagonal direction and a gradient value representing a change in the gray value" includes:
taking the current position of a pixel point of any original red channel R as a reference point, respectively acquiring a channel value of an original blue channel B which is closest to the current reference point in the diagonal direction of the reference point, and channel values of green channels G which respectively correspond to the current reference point and the original blue channel B;
taking the current position of the pixel point of any original blue channel B as a reference point, and respectively acquiring the channel value of an original red channel R which is closest to the current reference point in the diagonal direction of the pixel point, and the channel values of green channels G which respectively correspond to the current reference point and the original red channel R;
the gray value of the missing red channel R on any original blue channel B is represented as R(i,j)
[Formula image FDA0002957676430000031 (expression for R(i,j)), not reproduced in this text]
val1=R(i-1,j-1)+R(i+1,j+1)+2×G(i,j)-G(i-1,j-1)-G(i+1,j+1)
val2=R(i+1,j-1)+R(i-1,j+1)+2×G(i,j)-G(i+1,j-1)-G(i-1,j+1)
grad1=|R(i-1,j-1)+R(i+1,j+1)|+|G(i-1,j-1)-G(i,j)|+|G(i,j)-G(i+1,j+1)|,
grad2=|R(i+1,j-1)+R(i-1,j+1)|+|G(i+1,j-1)-G(i,j)|+|G(i,j)-G(i-1,j+1)|,
The missing blue channel B on any original red channel R has a gray value denoted B(i,j)
[Formula image FDA0002957676430000041 (expression for B(i,j)), not reproduced in this text]
val3=B(i-1,j-1)+B(i+1,j+1)+2×G(i,j)-G(i-1,j-1)-G(i+1,j+1)
val4=B(i+1,j-1)+B(i-1,j+1)+2×G(i,j)-G(i+1,j-1)-G(i-1,j+1)
grad3=|B(i-1,j-1)+B(i+1,j+1)|+|G(i-1,j-1)-G(i,j)|+|G(i,j)-G(i+1,j+1)|,
grad4=|B(i+1,j-1)+B(i-1,j+1)|+|G(i+1,j-1)-G(i,j)|+|G(i,j)-G(i-1,j+1)|,
Wherein (i, j) represents the coordinates of the reference point, and R(*), G(*) and B(*) represent the values of the red, green and blue channels, respectively, at the indicated coordinates.
4. A system for compressing a bayer image of a capsule endoscope, the system comprising:
the image data acquisition module is used for acquiring image data in an original bayer format;
the bayer-to-RGB module is used for supplementing the missing gray value in each pixel point in the image data in bayer format according to the gradient information of the data in each channel to form an RGB image;
the data compression and output module is used for compressing the RGB image data and then outputting the RGB image data;
the bayer-to-RGB module specifically comprises:
the edge expansion module is used for expanding original edge data in the image data in the bayer format;
the first supplementing unit is used for respectively taking an original red channel R and an original blue channel B in the image data in the original bayer format as reference points, and obtaining the gray value of the green channel G missing from each reference point according to the gray values of the reference points in the horizontal direction and the vertical direction and the gradient value representing the change of the gray value;
the second supplementing unit is used for respectively taking an original green channel G in the image data in the original bayer format as a reference point, and obtaining gray values respectively corresponding to a red channel R and a blue channel B which are missing on each reference point according to the gray value of the reference point in the horizontal direction or the vertical direction;
a third supplementing unit, configured to obtain, by using an original red channel R and an original blue channel B in the original bayer-format image data as reference points, a grayscale value of the blue channel B missing on the original red channel R and a grayscale value of the red channel R missing on the original blue channel B according to a grayscale value of the reference points in a diagonal direction and a gradient value representing a change in the grayscale value;
wherein the first supplementing unit is specifically configured to:
taking the current position of any pixel point lacking the green channel G as a reference point, and acquiring, in the row or column direction, the channel value of the same color that is closest to the reference point, together with the original green channel G values lying between the reference point and that closest same-color channel value;
then, the gray value of the green channel G missing from any reference point is represented as G(i,j)
(formula image FDA0002957676430000051, giving G(i,j) in terms of the quantities defined below; not reproduced here)
valv=|G(i,j-1)+R/B(i,j)+G(i,j+1)|×2-R/B(i,j-2)-R/B(i,j+2)
valh=|G(i-1,j)+R/B(i,j)+G(i+1,j)|×2-R/B(i-2,j)-R/B(i+2,j)
(formula image FDA0002957676430000052, not reproduced here)
(formula image FDA0002957676430000053, not reproduced here)
Wherein (i, j) represents the coordinate of the reference point; if the current reference point is the red channel R, R/B(*) denotes the value of the original red channel R at the corresponding coordinate, and if the current reference point is the blue channel B, R/B(*) denotes the value of the original blue channel B; G(*) denotes the value of the green channel at the corresponding coordinate; round(x) indicates rounding, and mean(x) indicates sorting the values therein and taking the value at the middle position.
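Here again the gradient expressions and the final selection for G(i,j) are given only as formula images, so the Python sketch below is an assumed reading: valv and valh follow the claim text (their |...| is equivalent to plain parentheses for non-negative pixel values, and each equals four times a directional estimate of G), the gradients are taken as second differences along each direction, and the smoother direction is chosen; the mean(x) median operation mentioned in the claim is not reproduced because its exact placement is inside the image. The helper name interp_g_at_rb is illustrative.

```python
def interp_g_at_rb(x, i, j):
    """Assumed reading of the claim: estimate the missing G value at an original R or B pixel.

    x is the edge-expanded raw mosaic in an int dtype: at an R/B site the
    4-neighbours are original G samples and the samples two steps away have
    the same colour as the centre pixel, so a single array supplies every term.
    """
    valv = (x[i, j - 1] + x[i, j] + x[i, j + 1]) * 2 - x[i, j - 2] - x[i, j + 2]
    valh = (x[i - 1, j] + x[i, j] + x[i + 1, j]) * 2 - x[i - 2, j] - x[i + 2, j]
    # assumed second-difference gradients along the two directions
    gradv = abs(x[i, j - 1] - x[i, j + 1]) + abs(2 * x[i, j] - x[i, j - 2] - x[i, j + 2])
    gradh = abs(x[i - 1, j] - x[i + 1, j]) + abs(2 * x[i, j] - x[i - 2, j] - x[i + 2, j])
    if gradv < gradh:        # assumed selection: keep the smoother direction
        est = valv / 4
    elif gradh < gradv:
        est = valh / 4
    else:
        est = (valv + valh) / 8
    return int(min(max(round(float(est)), 0), 255))
```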
5. The system for compressing a bayer image for a capsule endoscope according to claim 4, wherein the second supplementing unit is specifically configured to:
taking the current position of any pixel point of the original green channel G as the reference point, respectively acquiring the channel values of the original red channel R and the original blue channel B that are closest to the reference point in its row or column direction, and acquiring the channel values of the green channel G at the positions corresponding to those original red channel R and original blue channel B samples;
the gray values of the red channel R and the blue channel B missing at the pixel point of any original green channel G are each represented as R/B(i,j);
If the channel values of the original red channel R or the original blue channel B closest to the reference point are arranged in the horizontal direction, then
R/B(i,j)=(R/B(i,j-1)+R/B(i,j+1)+G(i,j)×2-G(i,j-1)-G(i,j+1))÷2,
If the channel values of the original red channel R or the original blue channel B closest to the reference point are arranged in the vertical direction, then
R/B(i,j)=(R/B(i-1,j)+R/B(i+1,j)+G(i,j)×2-G(i-1,j)-G(i+1,j))÷2,
Wherein, if the missing red channel R at the current reference point is being obtained, R/B(*) denotes the value of the red channel R at the corresponding coordinate, and if the missing blue channel B at the current reference point is being obtained, R/B(*) denotes the value of the blue channel B at the corresponding coordinate.
6. The system for compressing a bayer image for a capsule endoscope according to claim 4, wherein the third supplementing unit is specifically configured to:
taking the current position of a pixel point of any original red channel R as a reference point, respectively acquiring a channel value of an original blue channel B which is closest to the current reference point in the diagonal direction of the reference point, and channel values of green channels G which respectively correspond to the current reference point and the original blue channel B;
and taking the current position of the pixel point of any original blue channel B as a reference point, respectively obtaining the channel value of an original red channel R which is closest to the current reference point in the diagonal direction of the original blue channel B, and the channel values of green channels G which respectively correspond to the current reference point and the original red channel R;
the gray value of the missing red channel R on any original blue channel B is represented as R(i,j)
(formula image FDA0002957676430000061, giving R(i,j) in terms of the val1, val2, grad1, grad2 defined below; not reproduced here)
val1=R(i-1,j-1)+R(i+1,j+1)+2×G(i,j)-G(i-1,j-1)-G(i+1,j+1)
val2=R(i+1,j-1)+R(i-1,j+1)+2×G(i,j)-G(i+1,j-1)-G(i-1,j+1)
grad1=|R(i-1,j-1)+R(i+1,j+1)|+|G(i-1,j-1)-G(i,j)|+|G(i,j)-G(i+1,j+1)|,
grad2=|R(i+1,j-1)+R(i-1,j+1)|+|G(i+1,j-1)-G(i,j)|+|G(i,j)-G(i-1,j+1)|,
The missing blue channel B on any original red channel R has a gray value denoted B(i,j)
(formula image FDA0002957676430000071, giving B(i,j) in terms of the val3, val4, grad3, grad4 defined below; not reproduced here)
val3=B(i-1,j-1)+B(i+1,j+1)+2×G(i,j)-G(i-1,j-1)-G(i+1,j+1)
val4=B(i+1,j-1)+B(i-1,j+1)+2×G(i,j)-G(i+1,j-1)-G(i-1,j+1)
grad3=|B(i-1,j-1)+B(i+1,j+1)|+|G(i-1,j-1)-G(i,j)|+|G(i,j)-G(i+1,j+1)|,
grad4=|B(i+1,j-1)+B(i-1,j+1)|+|G(i+1,j-1)-G(i,j)|+|G(i,j)-G(i-1,j+1)|,
Wherein (i, j) represents the coordinate of the reference point; R(*) denotes the value of the red channel at the corresponding coordinate, G(*) the value of the green channel, and B(*) the value of the blue channel.
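Reading the system claims as a whole, the sketch below shows how the three modules could compose end to end. It is a structural illustration only, assuming an RGGB layout, a replicated border of width 2 as the edge expansion, the per-pixel helpers sketched earlier for the supplementing units, and zlib purely as a stand-in for the compression-and-output module, whose actual codec the claims do not specify.

```python
import zlib
import numpy as np

def process_capsule_frame(bayer):
    """Structural sketch: image acquisition -> bayer-to-RGB -> compression/output.

    Assumptions not fixed by the claims: RGGB layout, replicate border of
    width 2 as the edge expansion, the earlier interp_g_at_rb sketch for the
    first supplementing pass, zlib as a placeholder for the compression step.
    """
    pad = 2
    x = np.pad(bayer.astype(np.int32), pad, mode="edge")    # edge expansion module
    h, w = bayer.shape
    rgb = np.zeros((h, w, 3), dtype=np.int32)

    # first supplementing pass: complete the green plane, keep original samples
    for i in range(h):
        for j in range(w):
            ii, jj = i + pad, j + pad                        # index into the padded mosaic
            if i % 2 == 0 and j % 2 == 0:                    # original R sample (RGGB)
                rgb[i, j, 0] = x[ii, jj]
                rgb[i, j, 1] = interp_g_at_rb(x, ii, jj)
            elif i % 2 == 1 and j % 2 == 1:                  # original B sample
                rgb[i, j, 2] = x[ii, jj]
                rgb[i, j, 1] = interp_g_at_rb(x, ii, jj)
            else:                                            # original G sample
                rgb[i, j, 1] = x[ii, jj]

    # the second pass (R/B at G sites) and third pass (diagonal R at B, B at R)
    # would walk the image again with the now-complete green plane, calling
    # interp_rb_at_green and interp_r_at_blue / its B-at-R counterpart

    rgb = np.clip(rgb, 0, 255).astype(np.uint8)              # bayer-to-RGB module output
    return zlib.compress(rgb.tobytes())                      # compression-and-output module
```

A real implementation would vectorise these per-pixel loops and substitute whichever codec the capsule system actually uses for the zlib placeholder.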
CN201910439009.5A 2019-05-24 2019-05-24 Compression processing method and system for capsule endoscope bayer image Active CN110049337B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910439009.5A CN110049337B (en) 2019-05-24 2019-05-24 Compression processing method and system for capsule endoscope bayer image

Publications (2)

Publication Number Publication Date
CN110049337A CN110049337A (en) 2019-07-23
CN110049337B true CN110049337B (en) 2021-05-25

Family

ID=67283469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910439009.5A Active CN110049337B (en) 2019-05-24 2019-05-24 Compression processing method and system for capsule endoscope bayer image

Country Status (1)

Country Link
CN (1) CN110049337B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114202486B (en) * 2022-02-16 2022-05-20 安翰科技(武汉)股份有限公司 Mosaic removal method and system for capsule endoscope image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1799492A (en) * 2005-12-02 2006-07-12 清华大学 Quasi-lossless image compression and decompression method of wireless endoscope system
CN102457722A (en) * 2010-10-26 2012-05-16 珠海全志科技股份有限公司 Processing method and device for Bayer image
CN105577981A (en) * 2015-12-22 2016-05-11 深圳大学 Edge self-adaptive color restoration method and system
CN108156461A (en) * 2017-12-28 2018-06-12 上海通途半导体科技有限公司 A kind of Bayer method for compressing image and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6099603B2 (en) * 2014-08-04 2017-03-22 富士フイルム株式会社 MEDICAL IMAGE PROCESSING DEVICE, ITS OPERATION METHOD, AND ENDOSCOPE SYSTEM

Also Published As

Publication number Publication date
CN110049337A (en) 2019-07-23

Similar Documents

Publication Publication Date Title
US6236433B1 (en) Scaling algorithm for efficient color representation/recovery in video
US8395657B2 (en) Method and system for stitching two or more images
CN110149520B (en) Capsule endoscope bayer image YUV lossless compression processing method and system
CN102598651B (en) Motion video processing unit and method, the camera head of motion video processing unit is installed
JPH09284798A (en) Signal processor
WO2023284401A1 (en) Image beautification processing method and apparatus, storage medium, and electronic device
JP4368835B2 (en) Image processing apparatus, imaging apparatus, and image processing system
CN110049337B (en) Compression processing method and system for capsule endoscope bayer image
KR20030004143A (en) Digital Camera With Electronic Zooming Function
CN110139039B (en) Compression processing method and system for capsule endoscope bayer image
CN112584075B (en) Image transmission method and system based on image resolution
CN101729884B (en) Image acquiring device and image preprocessing method
CN108769583A (en) A kind of superfine electric scope high definition interpolating module and method based on FPGA
KR101878891B1 (en) Image processing method of capsule endoscope, capsule endoscope apparatus, receiver interworking with capsule endoscope, image processing method of the receiver, and capsule endoscope system
JP2010251882A (en) Image capturing apparatus and image reproducing device
CN106780429B (en) Method for extracting key frame of WCE video time sequence redundant image data based on perception color space and key corner
CN114189689B (en) Image compression processing method, device, electronic equipment and storage medium
CN107146269B (en) Image filling method and system
CN114189690A (en) Image compression processing method and device, electronic equipment and storage medium
WO2023206098A1 (en) Light field data transmission method, light field communication device and system
CN113313632A (en) Image reconstruction method, system and related equipment
JP3795672B2 (en) Image processing system
JPH02272972A (en) Still picture transmitter
JP5650133B6 (en) Method and apparatus for reducing the size of image data
JP5650133B2 (en) Method and apparatus for reducing the size of image data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant