CN115834792B - Video data processing method and system based on artificial intelligence - Google Patents

Video data processing method and system based on artificial intelligence

Info

Publication number
CN115834792B
Authority
CN
China
Prior art keywords
complementary
gray
gray value
value
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310146591.2A
Other languages
Chinese (zh)
Other versions
CN115834792A (en)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Lopulo Technology Co ltd
Original Assignee
Hunan Lopulo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Lopulo Technology Co ltd
Priority to CN202310146591.2A
Publication of CN115834792A
Application granted
Publication of CN115834792B
Legal status: Active

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02D: Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image encryption, in particular to a video data processing method and system based on artificial intelligence. The method comprises the following steps: acquiring an initial gray histogram of an image to be processed, and obtaining a target gray value for each gray value whose frequency is not 0 according to the frequency corresponding to each gray value in the initial gray histogram and the maximum and minimum gray values of the pixel points in the image to be processed, so as to obtain a stretched gray histogram; obtaining a complementary parameter for each gray value according to the frequency corresponding to each gray value in the stretched gray histogram and the average frequency of all gray values; and obtaining optimal gray value sets based on the complementary parameters, and performing a complementary operation on the frequencies corresponding to the gray values in each optimal gray value set according to these frequencies and the average frequency of all gray values, so as to obtain an encrypted image. The invention reduces the risk of leakage of video data.

Description

Video data processing method and system based on artificial intelligence
Technical Field
The invention relates to the technical field of image encryption, in particular to a video data processing method and system based on artificial intelligence.
Background
With the advent of the information age, various industries have begun to use artificial intelligence technology to store data as video and to perform anomaly analysis on it, relying on the strong computing power of artificial intelligence to accurately identify the changes and anomalies present in the video. However, such video data may be tampered with or stolen before it reaches the artificial intelligence terminal, so encryption protection during video data transmission has become a research hotspot. At present, data transmission commonly uses entropy coding; for video data whose gray values are distributed between 0 and 255, entropy coding achieves good encryption efficiency. However, entropy coding exposes the statistical characteristics of the data, and the original data may be deduced by inverse coding from those statistical characteristics. How to process the video data uniformly so that it is not leaked is therefore a problem to be solved.
Disclosure of Invention
In order to solve the problem that existing methods cannot break the statistical characteristics of the data when encrypting video data, which leaves the video data at risk of leakage, the invention aims to provide an artificial intelligence-based video data processing method and system. The adopted technical scheme is as follows:
In a first aspect, the present invention provides an artificial intelligence based video data processing method comprising the steps of:
acquiring video data to be processed, and marking any frame of gray level image in the video data as an image to be processed;
acquiring an initial gray level histogram corresponding to the image to be processed, acquiring a target gray level value corresponding to each gray level value whose frequency is not 0 in the initial gray level histogram according to the frequency corresponding to each gray level value in the initial gray level histogram, the maximum gray level value of the pixel point in the image to be processed and the minimum gray level value of the pixel point in the image to be processed, and acquiring a stretched gray level histogram based on the target gray level value; obtaining complementary parameters of each gray value in the stretched gray histogram according to the frequency corresponding to each gray value in the stretched gray histogram and the average frequency of all gray values;
and obtaining each optimal gray value set based on the complementary parameters, and carrying out complementary operation on the frequencies corresponding to the gray values in each optimal gray value set according to the frequencies corresponding to the gray values in each optimal gray value set and the average frequencies of all gray values to obtain an encrypted image.
In a second aspect, the present invention provides an artificial intelligence based video data processing system, comprising a memory and a processor, the processor executing a computer program stored in the memory to implement the above mentioned artificial intelligence based video data processing method.
Preferably, the obtaining the complementary parameter of each gray value in the stretched gray histogram according to the frequency corresponding to each gray value in the stretched gray histogram and the average frequency of all gray values includes:
respectively calculating the difference value between the frequency corresponding to each gray value and the average frequency of all gray values, and marking the difference value as a first difference value; and taking the ratio of the first difference value to the average frequency of all the gray values as the complementary parameter of each gray value.
Preferably, the obtaining each optimal gray value set based on the complementary parameters includes:
sorting all complementary parameters in descending order to generate a complementary parameter sequence;
selecting the 1 st complementary parameter and the last 1 complementary parameter in the complementary parameter sequence as initial values, calculating the sum of the two initial values to be used as complementary judgment values of the 1 st complementary parameter and the last 1 complementary parameter, if the absolute value of the complementary judgment values of the 1 st complementary parameter and the last 1 complementary parameter is smaller than a statistical characteristic threshold value and the complementary judgment values of the 1 st complementary parameter and the last 1 complementary parameter are smaller than 0, traversing downwards from the gray value 255, calculating the complementary judgment values of the 1 st complementary parameter and the complementary parameters of all gray values until the complementary judgment values of the 1 st complementary parameter and the complementary parameters of the previous gray value are smaller than the complementary judgment values of the 1 st complementary parameter and the complementary parameters of the next gray value, taking the gray value corresponding to the 1 st complementary parameter and the corresponding previous gray value as an optimal gray value set, and deleting the complementary parameters of all gray values in the optimal gray value set in the complementary parameter sequence; if the absolute value of the complementary judgment value of the 1 st complementary parameter and the last 1 complementary parameter is smaller than the statistical characteristic threshold value, and the complementary judgment value of the 1 st complementary parameter and the last 1 complementary parameter is larger than or equal to 0, traversing upwards from the gray value 0 until the complementary judgment value of the 1 st complementary parameter and the complementary judgment value of the complementary parameter of the previous gray value is smaller than the complementary judgment value of the 1 st complementary parameter and the complementary judgment value of the complementary parameter of the subsequent gray value, and taking the gray value corresponding to the 1 st complementary parameter and the corresponding previous gray value as an optimal gray value set; and by analogy, updating the initial value, traversing all complementary parameters in the complementary parameter sequence to obtain a plurality of optimal gray value sets, and deleting the complementary parameters of each gray value in the optimal gray value sets in the complementary parameter sequence.
Preferably, if complementary parameters remain in the complementary parameter sequence after the complementary parameters of each gray value in the optimal gray value sets are deleted, the remaining complementary parameters in the complementary parameter sequence are ordered from big to small and a parameter sequence to be matched is constructed; according to the complementary judgment values of the 1st complementary parameter and the other complementary parameters in the parameter sequence to be matched, the optimal gray values complementary to the gray value corresponding to the 1st complementary parameter are obtained, and the gray value corresponding to the 1st complementary parameter and each optimal gray value complementary to it form an optimal gray value set, the other complementary parameters being traversed sequentially forwards from the last complementary parameter in the parameter sequence to be matched; and similarly, the gray values corresponding to all complementary parameters in the parameter sequence to be matched are traversed to obtain each optimal gray value set.
Preferably, obtaining an optimal gray value complementary to the gray value corresponding to the 1 st complementary parameter according to the complementary determination value of the 1 st complementary parameter and other complementary parameters in the parameter sequence to be matched, includes:
recording the parameter sequence to be matched as $s_1, s_2, \ldots, s_n$, and recording the complementary determination value of the 1st complementary parameter and the last $y$ complementary parameters in the parameter sequence to be matched as $D_y = s_1 + s_n + s_{n-1} + \cdots + s_{n-y+1}$;
if $s_1 + s_n \geq 0$, then calculating $D_y$ for $y = 1, 2, \ldots$ until $|D_y| < |D_{y+1}|$ is met, and taking the gray values corresponding to the last $y$ complementary parameters in the parameter sequence to be matched as the optimal gray values complementary to the gray value corresponding to the 1st complementary parameter, wherein $|\cdot|$ is the absolute value, $s_1$ is the 1st complementary parameter in the parameter sequence to be matched, $s_n$ is the last complementary parameter in the parameter sequence to be matched, $n$ is the number of complementary parameters in the parameter sequence to be matched, $s_2$ is the 2nd complementary parameter in the parameter sequence to be matched, $s_3$ is the 3rd complementary parameter in the parameter sequence to be matched, $s_{n-y+1}$ is the $y$-th complementary parameter from the end of the parameter sequence to be matched, $D_y$ is the complementary determination value of the 1st complementary parameter and the last $y$ complementary parameters in the parameter sequence to be matched, and $D_{y+1}$ is the complementary determination value of the 1st complementary parameter and the last $y+1$ complementary parameters in the parameter sequence to be matched;
if $s_1 + s_n < 0$, then calculating $D'_y = s_n + s_1 + s_2 + \cdots + s_y$ until $|D'_y| < |D'_{y+1}|$ is met, obtaining the optimal gray values complementary to the gray value corresponding to the 1st complementary parameter, wherein $D'_y$ is the complementary determination value of the first $y$ complementary parameters in the parameter sequence to be matched and the last complementary parameter in the parameter sequence to be matched.
Preferably, the obtaining the target gray value corresponding to each gray value whose frequency is not 0 in the initial gray histogram according to the frequency corresponding to each gray value in the initial gray histogram, the maximum gray value of the pixel point in the image to be processed, and the minimum gray value of the pixel point in the image to be processed includes:
calculating the target gray value corresponding to the i-th gray value whose frequency is not 0 in the initial gray histogram by adopting the following formula:

$$x_i' = \left\lfloor \frac{(x_i - x_{\min}) \times 255}{x_{\max} - x_{\min}} \right\rfloor$$

wherein $x_i'$ is the target gray value corresponding to the i-th gray value whose frequency is not 0, $x_i$ is the i-th gray value whose frequency is not 0, $x_{\max}$ is the maximum gray value of the pixel points in the image to be processed, $x_{\min}$ is the minimum gray value of the pixel points in the image to be processed, and $\lfloor\cdot\rfloor$ denotes rounding down.
Preferably, the obtaining the stretched gray level histogram based on the target gray level values includes: replacing each gray value whose frequency is not 0 in the image to be processed with its corresponding target gray value, reconstructing a gray histogram based on the gray values of the pixel points in the image to be processed after the replacement is completed, and marking this gray histogram as the stretched gray histogram.
Preferably, the performing a complementary operation on the frequencies corresponding to the gray values in the optimal gray value sets according to the frequencies corresponding to the gray values in the optimal gray value sets and the average frequency of all the gray values to obtain the encrypted image includes:
if the number of gray values in the optimal gray value set is equal to 2, marking one gray value in the optimal gray value set as a first gray value, and marking the other gray value in the optimal gray value set as a second gray value; calculating the sum of the frequency corresponding to the first gray value before complementation and the frequency corresponding to the second gray value before complementation, marking the sum as a frequency index, taking the difference value of the frequency index and the average frequency of all gray values as the frequency corresponding to the first gray value after complementation, and taking the average frequency of all gray values as the frequency corresponding to the second gray value after complementation;
If the number of gray values in the optimal gray value set is greater than 2, marking the gray value with the maximum number of the corresponding optimal complementary gray values in the optimal gray value set as a third gray value, marking all other gray values except the third gray value in the optimal gray value set as a fourth gray value, taking the average frequency of all the gray values as the frequency corresponding to the third gray value after complementation, calculating the product of the average frequency of all the gray values and the number of the fourth gray value, marking the product as a first product, and taking the difference value of the frequency corresponding to the third gray value before complementation and the first product as the frequency corresponding to the fourth gray value after complementation;
an encrypted image is obtained based on the frequency corresponding to each gray value after complementation.
The invention has at least the following beneficial effects:
1. According to the distribution of the initial gray histogram corresponding to the image to be processed, a plurality of optimal gray value sets are obtained, and the frequencies corresponding to the gray values in each optimal gray value set are complemented based on a frequency complementation method, eliminating the statistical characteristics of the image data and generating an encrypted image. The existing method of setting a threshold and evenly redistributing the frequencies above it suffers from subjective threshold selection and high computational complexity; in the present invention the frequency corresponding to each gray value undergoes the complementary operation only once, which reduces the operation cost compared with the existing method, avoids the influence of human subjective factors on the image encryption result, achieves the purposes of eliminating the statistical characteristics of the image data and encrypting it, and reduces the risk of leakage of the video data.
2. The invention obtains optimal gray value sets and performs the complementary operation on the frequencies corresponding to the gray values in each set so as to eliminate the statistical characteristics of the image data. Considering that the gray values of the pixel points in the image to be processed may be concentrated in a certain gray interval, and that histogram equalization alone can only distribute the gray values of the image uniformly between 0 and 255 without weakening the statistical characteristics or adequately protecting the information in the image, the target gray value corresponding to each gray value whose frequency is not 0 is obtained according to the frequency corresponding to each gray value in the image to be processed and the maximum and minimum gray values of the pixel points, and the initial gray histogram is then stretched. When the frequency corresponding to a gray value of a pixel point in the image to be processed is close to the average frequency of all gray values, the statistical characteristics of the image data are weaker; therefore the complementary parameter of each gray value is obtained according to the frequency corresponding to each gray value in the stretched gray histogram and the average frequency of all gray values, and the optimal gray value sets are then obtained based on the complementary parameters. In this way the statistical characteristics of the image data can be broken, and the risks of tampering with and leakage of the video data are effectively reduced.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an artificial intelligence based video data processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an initial gray level histogram;
FIG. 3 is a schematic diagram of the stretched gray level histogram.
Detailed Description
In order to further describe the technical means and effects adopted by the present invention to achieve the preset purpose, the following detailed description is given below of an artificial intelligence-based video data processing method according to the present invention with reference to the accompanying drawings and the preferred embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of an artificial intelligence-based video data processing method and system.
Video data processing method embodiment based on artificial intelligence:
the embodiment provides an artificial intelligence-based video data processing method, as shown in fig. 1, which includes the following steps:
step S1, obtaining video data to be processed, and marking any frame of gray level image in the video data as the image to be processed.
The specific scenario addressed by this embodiment is as follows: during video data transmission, the data needs to be encrypted. Traditional entropy coding does not change the statistical characteristics of the image data, so important information can be intercepted during transmission; a method is therefore needed that processes the video data uniformly so that it no longer exhibits statistical characteristics. In this embodiment, according to the statistical characteristics of each channel of the image data, the channel images are uniformly mapped into a gray distribution, and the gray values of the pixel points are transformed by a function according to their frequencies, so as to eliminate the statistical characteristics of the channel images and achieve the purpose of encrypting the image data.
The video data consists of consecutive multi-frame RGB images; during transmission, each channel of each RGB frame is encoded and the encoded data is converted into a digital signal. An RGB image consists of three channel images. The gray-level frequency distribution of different images exhibits statistical characteristics in the gray histogram, and these statistical characteristics reflect image information, so the encryption method must eliminate the statistical information of the frequencies in order to protect the data. Because each image contains different data, the gray distribution of an image channel may be regional; for example, in the gray image of a certain channel, almost no pixel points may have gray values in the range 0-40, so the dark part of that channel carries very little information. Histogram equalization alone can distribute the gray values of the image uniformly between 0 and 255, but it cannot weaken the statistical characteristics, and the information in the image is not well protected.
In this embodiment, the video data to be processed is obtained by a video data acquisition module. Because the video data to be processed consists of multiple frames of RGB images, each RGB frame needs to be split, that is, the R-channel image, G-channel image and B-channel image of each frame are extracted separately. This embodiment will next be described taking one RGB frame of the video data to be processed as the image to be encrypted. As another embodiment, each frame of gray image in the video data to be processed may be obtained directly and encrypted, that is, without splitting each frame.
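A minimal sketch of the per-frame channel split described above (the NumPy array representation, the RGB channel order and the function name are illustrative assumptions, not part of the patent):

```python
import numpy as np

def split_rgb_frame(frame: np.ndarray):
    """Split one H x W x 3 RGB frame into its R, G and B channel gray images."""
    assert frame.ndim == 3 and frame.shape[2] == 3, "expected an H x W x 3 RGB frame"
    r = frame[:, :, 0]  # R-channel gray image
    g = frame[:, :, 1]  # G-channel gray image
    b = frame[:, :, 2]  # B-channel gray image
    return r, g, b

# Example: a synthetic 4 x 4 RGB frame stands in for one frame of the video to be processed.
frame = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
r, g, b = split_rgb_frame(frame)
```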
Step S2, obtaining an initial gray level histogram corresponding to the image to be processed, obtaining a target gray level value corresponding to each gray level value with a frequency other than 0 in the initial gray level histogram according to the frequency corresponding to each gray level value in the initial gray level histogram, the maximum gray level value of the pixel point in the image to be processed and the minimum gray level value of the pixel point in the image to be processed, and obtaining a stretched gray level histogram based on the target gray level value; and obtaining complementary parameters of each gray value in the stretched gray histogram according to the frequency corresponding to each gray value in the stretched gray histogram and the average frequency of all gray values.
Conventional histogram equalization maps the histogram distribution with a cumulative distribution function. Because the proportion of each gray value must be calculated, if the proportion of some gray value is extremely small, the mapping computed from the cumulative distribution function may merge gray levels, that is, change the image data. For example, when the proportion of a certain gray value is 0.0001, the mapped values computed from the cumulative distribution function for the gray value $g$ and the gray value $g+1$, where $g$ denotes a gray value, differ by less than the minimum gray-value unit 1, so the two gray levels are merged and part of the original image data is lost. Moreover, compared with the original image, the result of histogram equalization only changes the gray distribution of the image and does not eliminate its statistical characteristics. The purpose of this embodiment is to encrypt the image: although changing the gray distribution provides some confidentiality, the image information itself is not changed. Therefore, on the basis of equalization, this embodiment calculates the frequency distribution of each gray value and applies an adaptive frequency transformation to different gray levels so as to eliminate the statistical characteristics.
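As a brief numerical check of the level-merging effect described above, assume the standard equalization mapping $T(g) = \lfloor 255 \cdot \mathrm{CDF}(g) \rfloor$ (the exact mapping is not given in the text and is assumed here). If the proportion of gray value $g+1$ is 0.0001, then

$$255 \cdot \mathrm{CDF}(g+1) - 255 \cdot \mathrm{CDF}(g) = 255 \times 0.0001 = 0.0255 < 1,$$

so the two mapped values differ by less than one gray-level unit and, in general, the gray levels $g$ and $g+1$ collapse to the same level after rounding.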
Any channel image corresponding to the image to be encrypted is recorded as the image to be processed; this embodiment takes one image to be processed as an example, and the other channel images corresponding to the image to be encrypted, as well as each channel image of the other RGB frames, can be encrypted with the same method. The gray value of each pixel point in the image to be processed is counted, and a gray histogram is constructed based on the gray values of all pixel points and recorded as the initial gray histogram, as shown in FIG. 2, where the horizontal axis is the gray value, ranging from 0 to 255, and the vertical axis is the frequency corresponding to each gray value. Owing to the randomness of the image, some gray values do not appear at all, so their frequency is 0; most of these lie in the extremely dark and extremely bright regions. The gray distribution of the pixel points in an image generally does not span the whole range 0-255: the gray values are relatively concentrated, and the highlight and extremely dark regions carry little information. Meanwhile, in order to increase the difference between the encrypted image and the original image, stretching can increase the contrast of the image and bring out its detail information. The initial gray histogram is therefore stretched so that the gray levels are distributed more uniformly in the image, and a formula is constructed to adjust each gray value whose frequency is not 0 in the initial gray histogram to obtain the corresponding target gray value. The target gray value corresponding to the i-th gray value whose frequency is not 0 is:
$$x_i' = \left\lfloor \frac{(x_i - x_{\min}) \times 255}{x_{\max} - x_{\min}} \right\rfloor$$

wherein $x_i'$ is the target gray value corresponding to the i-th gray value whose frequency is not 0, $x_i$ is the i-th gray value whose frequency is not 0, $x_{\max}$ is the maximum gray value of the pixel points in the image to be processed, $x_{\min}$ is the minimum gray value of the pixel points in the image to be processed, and $\lfloor\cdot\rfloor$ denotes rounding down.
The formula constructed in this embodiment distributes the target gray values over the range 0-255; stretching the gray distribution increases the contrast of the image and highlights the image information. With the above formula, the target gray value corresponding to the i-th gray value whose frequency is not 0 can be obtained. For example: if $x_i$ is 50, $x_{\min}$ is 40 and $x_{\max}$ is 180, then $x_i'$ is 18; if $x_i$ is 60, $x_{\min}$ is 40 and $x_{\max}$ is 200, then $x_i'$ is 31; if $x_i$ is 150, $x_{\min}$ is 50 and $x_{\max}$ is 180, then $x_i'$ is 192; if $x_i$ is 180, $x_{\min}$ is 30 and $x_{\max}$ is 200, then $x_i'$ is 255. Each gray value whose frequency is not 0 in the image to be processed is replaced with its corresponding target gray value, the image after replacement is recorded as the stretched gray image, and a gray histogram is reconstructed based on the gray values of all pixel points in the stretched gray image; the histogram constructed at this point is recorded as the stretched gray histogram, as shown in FIG. 3. Histogram stretching changes the gray values of the pixel points, but the frequency distribution is the same as that of the original image, so the information of the image is still retained. Complementary parameters are therefore determined according to the frequency corresponding to each target gray value, and target gray values with higher frequencies are complemented towards target gray values with lower frequencies according to these parameters, so that the statistical characteristics are purposefully eliminated without changing the total gray distribution.
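A minimal sketch of the stretching step; the linear min-max stretch with rounding down follows the formula as reconstructed above, so it should be read as an assumption rather than a verbatim copy of the patent's formula:

```python
import numpy as np

def stretch_gray_image(img: np.ndarray) -> np.ndarray:
    """Map each occurring gray value x to floor(255 * (x - x_min) / (x_max - x_min))."""
    img = img.astype(np.int32)
    x_min, x_max = int(img.min()), int(img.max())
    if x_max == x_min:                                   # flat image: nothing to stretch
        return img.astype(np.uint8)
    stretched = (img - x_min) * 255 // (x_max - x_min)   # integer division acts as floor
    return stretched.astype(np.uint8)

def gray_histogram(img: np.ndarray) -> np.ndarray:
    """Frequency of each of the 256 gray values (here, the stretched gray histogram)."""
    return np.bincount(img.ravel(), minlength=256)
```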
The conventional way of eliminating statistical characteristics is to set a frequency threshold: if the frequency corresponding to a gray value is higher than the threshold, the pixel points above the threshold are distributed evenly to the other gray values, and the steps are repeated, which requires a subjectively chosen threshold and repeated redistribution calculations. Therefore, this embodiment proposes a targeted frequency complementation method: by analyzing the frequency distribution in the histogram, the frequencies corresponding to different gray values are selected for optimal complementation, the criterion of optimality being that the two gray frequencies can eliminate the statistical characteristics after complementation. The closer the frequency of each gray value is to the average frequency of all gray values, the weaker the statistical characteristics, that is, the less image information is exposed. The average frequency of all gray values is therefore calculated as

$$\bar{f} = \frac{M \times N}{256}$$

wherein $\bar{f}$ is the average frequency of all gray values, $M$ is the total number of rows of pixel points in the image to be processed, and $N$ is the total number of columns of pixel points in the image to be processed.
When the frequency corresponding to each gray value is similar to the average frequency of all gray values, the statistical characteristic is reduced. For the j-th gray value: calculating the difference value between the frequency corresponding to the gray value and the average frequency of all the gray values, marking the difference value as a first difference value, and taking the ratio of the first difference value to the average frequency of all the gray values as a complementary parameter of the gray values; the calculation formula of the complementary parameter corresponding to the jth gray value is as follows:
$$c_j = \frac{f_j - \bar{f}}{\bar{f}}$$

wherein $f_j$ is the frequency corresponding to the j-th gray value, $\bar{f}$ is the average frequency of all gray values, and $c_j$ is the complementary parameter of the j-th gray value.
When the frequency corresponding to the j-th gray value is greater than the average frequency of all gray values, the complementary parameter $c_j$ is greater than 0, and the larger the frequency corresponding to the j-th gray value, the larger the value of the complementary parameter; when the frequency corresponding to the j-th gray value is smaller than or equal to the average frequency of all gray values, the complementary parameter $c_j$ is less than or equal to 0, and the smaller the frequency corresponding to the j-th gray value, the closer the value of the complementary parameter is to -1.
By adopting the method, the complementary parameter of each gray value in the stretched gray histogram can be obtained.
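A short sketch of the average frequency and the complementary parameters as defined above (the 256-level denominator in the average frequency follows the reconstruction given here and is therefore an assumption):

```python
import numpy as np

def complementary_parameters(hist: np.ndarray):
    """Return (average frequency, complementary parameter c_j for each of the 256 gray values)."""
    total_pixels = hist.sum()        # M * N
    f_bar = total_pixels / 256.0     # average frequency of all gray values
    c = (hist - f_bar) / f_bar       # c_j = (f_j - f_bar) / f_bar, lies in (-1, +inf)
    return f_bar, c
```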
And step S3, obtaining each optimal gray value set based on the complementary parameters, and carrying out complementary operation on the frequencies corresponding to the gray values in each optimal gray value set according to the frequencies corresponding to the gray values in each optimal gray value set and the average frequencies of all gray values to obtain an encrypted image.
In this embodiment, in step S2, complementary parameters of each gray value in the stretched gray histogram are obtained, all complementary parameters are ordered in order from large to small, a complementary parameter sequence is generated, and the maximum value and the minimum value of all complementary parameters in the complementary parameter sequence are distributed at two ends of the sequence respectively.
The specific acquisition process of the complementary gray value pairs is as follows: a statistical characteristic threshold $T$ is set; in this embodiment the value of $T$ is 0.1, and in a specific application the implementer can set it according to the specific situation. The 1st complementary parameter $c^{(1)}$ and the last complementary parameter $c^{(K)}$ in the complementary parameter sequence are selected as initial values, and $D_{1,K} = c^{(1)} + c^{(K)}$ is calculated, where $D_{1,k}$ denotes the complementary determination value of the 1st complementary parameter and the k-th complementary parameter in the complementary parameter sequence and $K$ is the total number of complementary parameters in the complementary parameter sequence. If $|D_{1,K}| < T$, the frequencies of the two gray values can be made complementary, and the optimal gray value set needs to be found. When $D_{1,K} < 0$, the absolute value of the difference between the minimum frequency and the average frequency is larger than the absolute value of the difference between the maximum frequency and the average frequency, and the optimal gray complementary pair is selected as follows: starting from gray value 255 and traversing downwards, the complementary determination values are calculated until $|c^{(1)} + c_b| < |c^{(1)} + c_{b-1}|$, where $c^{(1)} + c_b$ is the complementary determination value of the 1st complementary parameter $c^{(1)}$ in the complementary parameter sequence and the complementary parameter $c_b$ of gray value $b$, $c^{(1)} + c_{b-1}$ is the complementary determination value of the 1st complementary parameter $c^{(1)}$ and the complementary parameter $c_{b-1}$ of gray value $b-1$, and $|\cdot|$ is the absolute value; the gray value corresponding to the 1st complementary parameter $c^{(1)}$ and the gray value $b$ then form an optimal gray value set, and the 1st complementary parameter $c^{(1)}$ and the complementary parameter of gray value $b$ are deleted from the complementary parameter sequence. For example, if $|c^{(1)} + c_{238}| < |c^{(1)} + c_{237}|$, then the gray value corresponding to the 1st complementary parameter $c^{(1)}$ and gray value 238 form an optimal gray value set. When $|D_{1,K}| < T$ and $D_{1,K} \geq 0$, the absolute value of the difference between the minimum frequency and the average frequency is smaller than or equal to the absolute value of the difference between the maximum frequency and the average frequency, and the optimal gray complementary pair is selected as follows: starting from gray value 0 and traversing upwards until $|c_a + c^{(K)}| < |c_{a+1} + c^{(K)}|$, where $c_a + c^{(K)}$ is the complementary determination value of the complementary parameter $c_a$ of gray value $a$ and the last (K-th) complementary parameter in the complementary parameter sequence, and $c_{a+1} + c^{(K)}$ is the complementary determination value of the complementary parameter $c_{a+1}$ of gray value $a+1$ and the last (K-th) complementary parameter; the gray value $a$ and the gray value corresponding to the K-th complementary parameter then form an optimal gray value set, and the K-th complementary parameter and the complementary parameter of gray value $a$ are deleted from the complementary parameter sequence. For example, the gray value corresponding to the K-th complementary parameter in the complementary parameter sequence and gray value 15 may form an optimal gray value set. By adopting this method, the gray values corresponding to all complementary parameters in the complementary parameter sequence are traversed and the optimal gray value sets are searched, obtaining a plurality of optimal gray value sets.

If the number of gray values in an optimal gray value set is equal to 2, one gray value in the set is recorded as the first gray value and the other as the second gray value; the sum of the frequency corresponding to the first gray value before complementation and the frequency corresponding to the second gray value before complementation is calculated and recorded as the frequency index, the difference between the frequency index and the average frequency of all gray values is taken as the frequency corresponding to the first gray value after complementation, and the average frequency of all gray values is taken as the frequency corresponding to the second gray value after complementation. For the p-th optimal gray value set, the two gray values in the set being $u_p$ and $v_p$, the frequencies corresponding to $u_p$ and $v_p$ after complementation are respectively:

$$f'_{u_p} = f_{u_p} + f_{v_p} - \bar{f}, \qquad f'_{v_p} = \bar{f}$$

wherein $f'_{u_p}$ is the frequency corresponding to gray value $u_p$ after complementation, $f'_{v_p}$ is the frequency corresponding to gray value $v_p$ after complementation, $f_{u_p}$ is the frequency corresponding to gray value $u_p$ before complementation, $f_{v_p}$ is the frequency corresponding to gray value $v_p$ before complementation, and $\bar{f}$ is the average frequency of all gray values.
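The following sketch illustrates the pairwise part of the matching in a simplified form: each remaining above-average gray value is greedily paired with the below-average gray value whose complementary parameter brings their sum closest to zero, and the pair is accepted only if the absolute sum is below the threshold T. This greedy nearest-sum pairing is a simplification of the traversal order described above, and the value T = 0.1 follows the embodiment; treat it as an illustrative approximation rather than the patent's exact procedure.

```python
import numpy as np

def pair_and_complement(hist: np.ndarray, T: float = 0.1):
    """Greedy pairwise complementation of gray-value frequencies (simplified sketch)."""
    hist = hist.astype(np.int64)
    f_bar = hist.sum() / 256.0
    c = (hist - f_bar) / f_bar                       # complementary parameters
    order = np.argsort(-c)                           # gray values, descending by c
    used = np.zeros(256, dtype=bool)
    pairs = []
    for g_hi in order:
        if used[g_hi] or c[g_hi] <= 0:
            continue
        # candidate partners: unused gray values with below-average frequency
        candidates = [g for g in range(256) if not used[g] and g != g_hi and c[g] < 0]
        if not candidates:
            break
        g_lo = min(candidates, key=lambda g: abs(c[g_hi] + c[g]))
        if abs(c[g_hi] + c[g_lo]) < T:               # frequencies can be made complementary
            used[g_hi] = used[g_lo] = True
            pairs.append((int(g_hi), int(g_lo)))
    # complementary operation on each 2-element optimal gray value set
    new_hist = hist.astype(np.float64).copy()
    for g_hi, g_lo in pairs:
        new_hist[g_hi] = hist[g_hi] + hist[g_lo] - f_bar   # first gray value
        new_hist[g_lo] = f_bar                             # second gray value
    return pairs, new_hist
```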
Because the gray distribution of the pixel points in the image is random, there may be gray values whose frequencies differ greatly from the average frequency, so the complementary determination value of the 1st and the last complementary parameters is not necessarily below the threshold, that is, $|c^{(1)} + c^{(K)}|$ may be greater than or equal to $T$. If, after the complementary parameters corresponding to the optimal gray value sets are deleted, no complementary parameters remain in the complementary parameter sequence, no subsequent processing is carried out. If, after the complementary parameters corresponding to the optimal gray value sets are deleted, some complementary parameters remain in the complementary parameter sequence, the remaining complementary parameters are sorted in descending order to construct the parameter sequence to be matched, the data in the parameter sequence to be matched being the remaining complementary parameters. The 1st complementary parameter $s_1$ and the last complementary parameter $s_n$ in the parameter sequence to be matched are selected, where $n$ is the total number of complementary parameters in the parameter sequence to be matched. If $s_1 + s_n \geq 0$, the complementary determination values $D_y = s_1 + s_n + s_{n-1} + \cdots + s_{n-y+1}$ of the 1st complementary parameter and the last $y$ complementary parameters are calculated for $y = 1, 2, \ldots$ until $|D_y| < |D_{y+1}|$ is met, wherein $s_2$ is the 2nd complementary parameter in the parameter sequence to be matched, $s_3$ is the 3rd complementary parameter in the parameter sequence to be matched, $s_{n-y+1}$ is the $y$-th complementary parameter from the end of the parameter sequence to be matched, $D_y$ is the complementary determination value of the 1st complementary parameter and the last $y$ complementary parameters in the parameter sequence to be matched, and $D_{y+1}$ is the complementary determination value of the 1st complementary parameter and the last $y+1$ complementary parameters in the parameter sequence to be matched; the gray value corresponding to the 1st complementary parameter in the parameter sequence to be matched and the gray values corresponding to the last $y$ complementary parameters then form an optimal gray value set, that is, a one-to-many relationship. If the number of gray values in an optimal gray value set is greater than 2, the gray value with the largest number of corresponding optimal complementary gray values in the set is recorded as the third gray value, and all gray values in the set other than the third gray value are recorded as fourth gray values; the average frequency of all gray values is taken as the frequency corresponding to each fourth gray value after complementation, the product of the average frequency of all gray values and the number of fourth gray values is calculated and recorded as the first product, and the difference between the frequency corresponding to the third gray value before complementation and the first product is taken as the frequency corresponding to the third gray value after complementation. For such an optimal gray value set, the frequency corresponding to the gray value of the 1st complementary parameter in the parameter sequence to be matched after complementation is

$$f'_{s_1} = f_{s_1} - y \times \bar{f}$$

wherein $f'_{s_1}$ is the frequency corresponding to the gray value of the 1st complementary parameter in the parameter sequence to be matched after complementation, and the frequencies corresponding to the gray values of the last $y$ complementary parameters in the parameter sequence to be matched after complementation are all equal, namely the average frequency $\bar{f}$ of all gray values. If $s_1 + s_n < 0$, the complementary determination values $D'_y = s_n + s_1 + s_2 + \cdots + s_y$ of the first $y$ complementary parameters in the parameter sequence to be matched and the last complementary parameter in the parameter sequence to be matched are calculated, and the above method is repeated to determine the optimal gray value set and obtain the frequency corresponding to each gray value in the set after complementation.

Since the average frequency of all gray values is the constant value $\bar{f}$, the frequencies after complementation necessarily lie within the frequency range with weak statistical characteristics, so every gray value can find the optimal gray value(s) complementary to it, and the frequency corresponding to each complemented gray value is obtained. By complementing the frequencies of each optimal gray value set with this method, the frequencies of the gray values in each optimal gray value set are brought close to the average frequency $\bar{f}$, that is, the statistical characteristics of the gray values of the pixel points in the image are eliminated.
So far, the frequency corresponding to each gray value after complementation has been obtained, and the original image is processed according to these frequencies. The specific processing is as follows: for any optimal gray value set containing 2 gray values, the absolute value of the difference between the frequency corresponding to each gray value before complementation and the frequency corresponding to it after complementation is calculated and taken as the complementation number of that gray value; a number of pixel points of that gray value equal to the complementation number is randomly selected, and the gray values of the selected pixel points are set equal to its optimal complementary gray value. For any "one-to-many" optimal gray value set, that is, a set containing more than 2 gray values, the number of pixel points of every gray value in the set except the third gray value is made equal to the average frequency. In this embodiment, by performing statistical characteristic analysis on the frequency of each gray value, optimal gray value sets are found so that every gray value can find the gray value(s) optimally complementary to it; compared with the existing methods for eliminating statistical characteristics, the method provided by this embodiment avoids repeated threshold-redistribution calculations and performs only one transformation operation on each gray value, which reduces the amount and cost of calculation.
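A minimal sketch of how the complemented frequency of a two-element set could be imposed on the pixels, by moving randomly chosen pixels of the over-represented gray value to its complementary gray value; the choice of which member of the pair donates pixels, and the 2-D single-channel image layout, are assumptions made for illustration:

```python
import numpy as np

def apply_pair_complementation(img: np.ndarray, g_from: int, g_to: int,
                               n_move: int, rng=None) -> np.ndarray:
    """Randomly re-label n_move pixels of gray value g_from as g_to (the complementation number)."""
    rng = np.random.default_rng() if rng is None else rng
    out = img.copy()
    ys, xs = np.nonzero(out == g_from)      # positions of the donor gray value in the 2-D image
    n_move = int(min(n_move, ys.size))      # cannot move more pixels than exist
    if n_move == 0:
        return out
    idx = rng.choice(ys.size, size=n_move, replace=False)
    out[ys[idx], xs[idx]] = g_to
    return out
```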
With the method provided by this embodiment, the image data is encrypted by extracting and eliminating its statistical characteristics, and an encrypted image can be obtained for each channel of the image to be encrypted.
The encrypted image is generated uniquely from the original image, so it is unique; if the original image changes, the frequencies corresponding to the gray values change, and the new image constructed through frequency analysis changes accordingly, that is, the method provided by this embodiment is sensitive to the image. The generated encrypted image eliminates the statistical characteristics through frequency complementation, so the possibility of it being cracked is greatly reduced, and the encrypted image obtained in this embodiment has a good encryption effect.
The original images of the three channels of the image to be encrypted are recorded as $R$, $G$ and $B$, and the encrypted images of the three channels of the image to be encrypted are recorded as $R'$, $G'$ and $B'$. The keys are generated by performing an exclusive-or operation between the three original channel images $R$, $G$, $B$ and the corresponding encrypted images $R'$, $G'$, $B'$, namely:

$$K_R = R \oplus R', \quad K_G = G \oplus G', \quad K_B = B \oplus B'$$

wherein $R'$ is the encrypted image of the R channel of the image to be encrypted, $G'$ is the encrypted image of the G channel of the image to be encrypted, $B'$ is the encrypted image of the B channel of the image to be encrypted, and $\oplus$ is the exclusive-or operator.
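A minimal sketch of the per-channel key generation by exclusive-or, following the formulas above (variable and function names are illustrative):

```python
import numpy as np

def generate_keys(r: np.ndarray, g: np.ndarray, b: np.ndarray,
                  r_enc: np.ndarray, g_enc: np.ndarray, b_enc: np.ndarray):
    """Key of each channel = original channel image XOR encrypted channel image."""
    k_r = np.bitwise_xor(r, r_enc)
    k_g = np.bitwise_xor(g, g_enc)
    k_b = np.bitwise_xor(b, b_enc)
    return k_r, k_g, k_b
```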
Since the exclusive-or operation is reversible and the encrypted images $R'$, $G'$, $B'$ are unique, this embodiment performs the exclusive-or operation between the original images and the encrypted images to obtain the corresponding keys $K_R$, $K_G$, $K_B$. In this embodiment, the ciphertext images $R'$, $G'$, $B'$ and the keys $K_R$, $K_G$, $K_B$ are obtained by splitting the image, analyzing the frequencies and encrypting; the keys and the ciphertext are transmitted and stored, the ciphertext is decrypted with the keys, and anomaly detection is performed by the artificial intelligence.
After the images are encrypted to generate the keys and the ciphertext, the frames are assembled into a video, which is transmitted and stored on the server. When the artificial intelligence needs to perform anomaly analysis on the video, a data download request is sent to the server; after receiving the request, the server transmits the encrypted video and the keys to the anomaly analysis system. After receiving the encrypted video, the analysis system splits the video into images frame by frame and decrypts them. For the image to be encrypted, the decryption process is:

$$R = R' \oplus K_R, \quad G = G' \oplus K_G, \quad B = B' \oplus K_B$$

yielding the original images $R$, $G$, $B$ of the three channels of the image to be encrypted, which are combined to obtain the original RGB image. With this method all original RGB images can be obtained; all the original RGB images are combined to obtain the original video data, and the original video data is sent to the artificial intelligence anomaly analysis module for anomaly analysis.
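A matching sketch of the decryption and recombination step (again with illustrative names; the channel stacking order is an assumption):

```python
import numpy as np

def decrypt_frame(r_enc, g_enc, b_enc, k_r, k_g, k_b) -> np.ndarray:
    """Recover the original RGB frame: original channel = encrypted channel XOR key."""
    r = np.bitwise_xor(r_enc, k_r)
    g = np.bitwise_xor(g_enc, k_g)
    b = np.bitwise_xor(b_enc, k_b)
    return np.dstack([r, g, b])   # recombine the three channels into one RGB image
```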
According to the method, a plurality of optimal gray value sets are obtained according to the distribution of the initial gray histogram corresponding to the image to be processed, and the frequencies corresponding to the gray values in each optimal gray value set are complemented based on a frequency complementation method, eliminating the statistical characteristics of the image data and generating an encrypted image. The existing method of setting a threshold and evenly redistributing the frequencies above it suffers from subjective threshold selection and high computational complexity, whereas here the frequency corresponding to each gray value undergoes the complementary operation only once; compared with the existing method this reduces the operation cost, avoids the influence of human subjective factors on the image encryption result, achieves the purposes of eliminating the statistical characteristics of the image data and encrypting it, and reduces the risk of leakage of the video data. An optimal gray value set is obtained so that the frequencies corresponding to the gray values in it can be complemented to eliminate the statistical characteristics of the image data. Considering that the gray values of the pixel points in the image to be processed may be concentrated in a certain gray interval, and that histogram equalization alone can only distribute the gray values of the image uniformly between 0 and 255 without weakening the statistical characteristics or adequately protecting the information in the image, the target gray value corresponding to each gray value whose frequency is not 0 in the image to be processed is obtained according to the frequency corresponding to each gray value and the maximum and minimum gray values of the pixel points, and the initial gray histogram is then stretched. When the frequency corresponding to a gray value of a pixel point in the image to be processed is close to the average frequency of all gray values, the statistical characteristics of the image data are low; therefore the complementary parameter of each gray value is obtained according to the frequency corresponding to each gray value in the stretched gray histogram and the average frequency of all gray values, and the optimal gray value sets are then obtained based on the complementary parameters. In this way the statistical characteristics of the image data can be broken, and the risks of tampering with and leakage of the video data are effectively reduced.
Video data processing system embodiments based on artificial intelligence:
the video data processing system based on artificial intelligence of the embodiment comprises a memory and a processor, wherein the processor executes a computer program stored in the memory to realize the video data processing method based on artificial intelligence.
Since the artificial intelligence-based video data processing method has already been described in the method embodiment above, it is not described again here.

Claims (8)

1. A method for processing video data based on artificial intelligence, the method comprising the steps of:
acquiring video data to be processed, and marking any frame of gray level image in the video data as an image to be processed;
acquiring an initial gray level histogram corresponding to the image to be processed, acquiring a target gray level value corresponding to each gray level value whose frequency is not 0 in the initial gray level histogram according to the frequency corresponding to each gray level value in the initial gray level histogram, the maximum gray level value of the pixel point in the image to be processed and the minimum gray level value of the pixel point in the image to be processed, and acquiring a stretched gray level histogram based on the target gray level value; obtaining complementary parameters of each gray value in the stretched gray histogram according to the frequency corresponding to each gray value in the stretched gray histogram and the average frequency of all gray values;
Obtaining each optimal gray value set based on the complementary parameters, and carrying out complementary operation on the frequencies corresponding to the gray values in each optimal gray value set according to the frequencies corresponding to the gray values in each optimal gray value set and the average frequencies of all gray values to obtain an encrypted image;
and obtaining complementary parameters of each gray value in the stretched gray histogram according to the frequency corresponding to each gray value in the stretched gray histogram and the average frequency of all gray values, wherein the complementary parameters comprise:
respectively calculating the difference value between the frequency corresponding to each gray value and the average frequency of all gray values, and marking the difference value as a first difference value; and taking the ratio of the first difference value to the average frequency of all the gray values as the complementary parameter of each gray value.
2. The artificial intelligence based video data processing method of claim 1, wherein the obtaining each optimal gray value set based on the complementary parameters comprises:
sorting all complementary parameters in descending order to generate a complementary parameter sequence;
selecting the 1 st complementary parameter and the last 1 complementary parameter in the complementary parameter sequence as initial values, calculating the sum of the two initial values to be used as complementary judgment values of the 1 st complementary parameter and the last 1 complementary parameter, if the absolute value of the complementary judgment values of the 1 st complementary parameter and the last 1 complementary parameter is smaller than a statistical characteristic threshold value and the complementary judgment values of the 1 st complementary parameter and the last 1 complementary parameter are smaller than 0, traversing downwards from the gray value 255, calculating the complementary judgment values of the 1 st complementary parameter and the complementary parameters of all gray values until the complementary judgment values of the 1 st complementary parameter and the complementary parameters of the previous gray value are smaller than the complementary judgment values of the 1 st complementary parameter and the complementary parameters of the next gray value, taking the gray value corresponding to the 1 st complementary parameter and the corresponding previous gray value as an optimal gray value set, and deleting the complementary parameters of all gray values in the optimal gray value set in the complementary parameter sequence; if the absolute value of the complementary judgment value of the 1 st complementary parameter and the last 1 complementary parameter is smaller than the statistical characteristic threshold value, and the complementary judgment value of the 1 st complementary parameter and the last 1 complementary parameter is larger than or equal to 0, traversing upwards from the gray value 0 until the complementary judgment value of the 1 st complementary parameter and the complementary judgment value of the complementary parameter of the previous gray value is smaller than the complementary judgment value of the 1 st complementary parameter and the complementary judgment value of the complementary parameter of the subsequent gray value, and taking the gray value corresponding to the 1 st complementary parameter and the corresponding previous gray value as an optimal gray value set; and by analogy, updating the initial value, traversing all complementary parameters in the complementary parameter sequence to obtain a plurality of optimal gray value sets, and deleting the complementary parameters of each gray value in the optimal gray value sets in the complementary parameter sequence.
3. The artificial intelligence based video data processing method according to claim 2, wherein, if complementary parameters remain in the complementary parameter sequence after the complementary parameters of the gray values in the optimal gray value sets have been deleted, the remaining complementary parameters are sorted in descending order to construct a parameter sequence to be matched; the optimal gray values complementary to the gray value corresponding to the 1st complementary parameter are obtained according to the complementary judgment values of the 1st complementary parameter and the other complementary parameters in the parameter sequence to be matched, the other complementary parameters being traversed sequentially forward starting from the last complementary parameter in the parameter sequence to be matched, and the gray value corresponding to the 1st complementary parameter together with each optimal gray value complementary to it form an optimal gray value set; and similarly, the gray values corresponding to all complementary parameters in the parameter sequence to be matched are traversed to obtain each optimal gray value set.
4. The artificial intelligence based video data processing method according to claim 3, wherein obtaining the optimal gray value complementary to the gray value corresponding to the 1st complementary parameter, according to the complementary judgment values of the 1st complementary parameter and the other complementary parameters in the parameter sequence to be matched, comprises:
denoting by $b_1$, $b_2$, $b_3$ and $b_n$ the 1st, 2nd, 3rd and last complementary parameters in the parameter sequence to be matched, by $n$ the number of complementary parameters in the parameter sequence to be matched, by $b_{n-k}$ the last-but-$k$ complementary parameter, by $D(b_1, b_{n-k+1})$ the complementary judgment value of the 1st complementary parameter and the last-but-$(k-1)$ complementary parameter, by $D(b_1, b_{n-k})$ the complementary judgment value of the 1st complementary parameter and the last-but-$k$ complementary parameter, and by $|\cdot|$ the absolute value;
if the first condition of the claim is satisfied (the inequality is given in the source only as an image), calculating the complementary judgment values of $b_1$ with the complementary parameters taken in turn from the end of the parameter sequence to be matched until the stopping condition of the claim is met, and taking the gray values corresponding to the complementary parameters traversed in this way as the optimal gray values complementary to the gray value corresponding to the 1st complementary parameter;
if the second condition of the claim is satisfied (likewise given only as an image), calculating the complementary judgment value $D(b_1, b_j)$ of the 1st complementary parameter and the $j$-th complementary parameter in the parameter sequence to be matched to obtain the optimal gray value complementary to the gray value corresponding to the 1st complementary parameter.
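Since the inequalities of this claim are not reproduced in the text, the sketch below illustrates only one plausible reading of the claim 3/claim 4 matching step: it assumes the complementary judgment value of $b_1$ and a group of tail parameters is simply their sum, and that tail parameters keep being absorbed while each additional one reduces the absolute judgment value. All names are ours, and the exact conditions may differ from the patent's.

```python
def match_head_with_tail(seq):
    """Match the largest complementary parameter with one or more tail parameters.

    seq: list of (gray_value, parameter) pairs sorted by parameter, descending.
    Returns the matched gray values (head first) and the residual judgment value.
    Illustrative reading only; not the patent's exact formulas.
    """
    head_gray, head_param = seq[0]
    matched, total = [head_gray], head_param
    k = len(seq) - 1
    while k >= 1:
        candidate = total + seq[k][1]
        if abs(candidate) < abs(total):   # absorbing this tail parameter improves the cancellation
            total = candidate
            matched.append(seq[k][0])
            k -= 1
        else:                             # stopping condition: no further improvement
            break
    return matched, total
```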
5. The method for processing video data based on artificial intelligence according to claim 1, wherein the obtaining the target gray value corresponding to each gray value with a frequency other than 0 in the initial gray histogram according to the frequency corresponding to each gray value in the initial gray histogram, the maximum gray value of the pixel point in the image to be processed, and the minimum gray value of the pixel point in the image to be processed comprises:
Calculating a target gray value corresponding to the ith gray value with the frequency of non-0 in the initial gray histogram by adopting the following formula:
$$g_i' = \left\lfloor \frac{g_i - g_{\min}}{g_{\max} - g_{\min}} \times 255 \right\rfloor$$
wherein $g_i'$ is the target gray value corresponding to the $i$-th gray value whose frequency is not 0, $g_i$ is the $i$-th gray value whose frequency is not 0, $g_{\max}$ is the maximum gray value of the pixel points in the image to be processed, $g_{\min}$ is the minimum gray value of the pixel points in the image to be processed, and $\lfloor\cdot\rfloor$ denotes rounding down; the factor 255 (stretching onto the full 8-bit range) is the natural reading of the formula, which is given in the source only as an image.
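Under the reading above (a plain linear stretch of the occupied gray range onto [0, 255], rounded down), a minimal sketch:

```python
import numpy as np

def stretch_gray_values(gray_image):
    """Map each gray value g to floor((g - g_min) / (g_max - g_min) * 255)."""
    g = gray_image.astype(np.float64)
    g_min, g_max = g.min(), g.max()
    if g_max == g_min:                 # degenerate image: only one gray value, nothing to stretch
        return gray_image.copy()
    stretched = np.floor((g - g_min) / (g_max - g_min) * 255.0)
    return stretched.astype(np.uint8)
```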
6. The artificial intelligence based video data processing method of claim 1, wherein the obtaining a stretched gray histogram based on the target gray value comprises: replacing, in the image to be processed, each gray value whose frequency is not 0 with its corresponding target gray value, reconstructing a gray histogram based on the gray values of the pixel points in the image to be processed after the replacement is completed, and recording this histogram as the stretched gray histogram.
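A short sketch of this step, reusing the hypothetical stretch_gray_values function above: it rewrites the pixel gray values and rebuilds the 256-bin histogram.

```python
import numpy as np

def stretched_histogram(gray_image):
    """Replace pixel gray values with their target values and rebuild the histogram."""
    stretched = stretch_gray_values(gray_image)           # sketch defined above
    hist, _ = np.histogram(stretched, bins=256, range=(0, 256))
    return stretched, hist
```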
7. The artificial intelligence based video data processing method according to claim 1, wherein carrying out the complementary operation on the frequencies corresponding to the gray values in each optimal gray value set, according to the frequencies corresponding to the gray values in each optimal gray value set and the average frequency of all gray values, to obtain the encrypted image comprises:
If the number of gray values in the optimal gray value set is equal to 2, marking one gray value in the optimal gray value set as a first gray value, and marking the other gray value in the optimal gray value set as a second gray value; calculating the sum of the frequency corresponding to the first gray value before complementation and the frequency corresponding to the second gray value before complementation, marking the sum as a frequency index, taking the difference value of the frequency index and the average frequency of all gray values as the frequency corresponding to the first gray value after complementation, and taking the average frequency of all gray values as the frequency corresponding to the second gray value after complementation;
if the number of gray values in the optimal gray value set is greater than 2, marking the gray value with the maximum number of the corresponding optimal complementary gray values in the optimal gray value set as a third gray value, marking all other gray values except the third gray value in the optimal gray value set as a fourth gray value, taking the average frequency of all the gray values as the frequency corresponding to the third gray value after complementation, calculating the product of the average frequency of all the gray values and the number of the fourth gray value, marking the product as a first product, and taking the difference value of the frequency corresponding to the third gray value before complementation and the first product as the frequency corresponding to the fourth gray value after complementation;
And processing the original image based on the frequency of each gray value after complementation: taking the difference between the frequency of a gray value before and after complementation as the number of pixels to be complemented, randomly selecting that number of pixels having the gray value to be complemented, and setting the gray value of each selected pixel equal to its optimal complementary gray value, thereby obtaining the encrypted image.
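As a rough illustration of the two-value case of this claim (the larger-set case and the full bookkeeping follow the same pattern), the sketch below moves pixels between the two gray values of an optimal set so that the second gray value ends up at the average frequency; rounding the average frequency to an integer pixel count is our assumption, and the names are ours.

```python
import numpy as np

def complement_pair(gray_image, hist, mean_freq, g1, g2, rng=None):
    """Rebalance pixels between gray values g1 and g2 of an optimal set.

    After the operation g2 holds (approximately) the average frequency and g1
    keeps the remainder of their combined frequency. Illustrative only.
    """
    rng = rng if rng is not None else np.random.default_rng()
    target_g2 = int(round(mean_freq))        # desired frequency of g2 after complementation
    move = int(hist[g2]) - target_g2         # pixels to move; the sign gives the direction
    out = gray_image.copy()
    if move > 0:                             # g2 has too many pixels: recolor some of them as g1
        src, dst, count = g2, g1, move
    else:                                    # g2 has too few pixels: take some pixels from g1
        src, dst, count = g1, g2, -move
    ys, xs = np.nonzero(out == src)
    picked = rng.choice(len(ys), size=min(count, len(ys)), replace=False)
    out[ys[picked], xs[picked]] = dst
    return out
```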
8. An artificial intelligence based video data processing system comprising a memory and a processor, wherein the processor executes a computer program stored in the memory to implement the artificial intelligence based video data processing method of any one of claims 1 to 7.
CN202310146591.2A 2023-02-22 2023-02-22 Video data processing method and system based on artificial intelligence Active CN115834792B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310146591.2A CN115834792B (en) 2023-02-22 2023-02-22 Video data processing method and system based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN115834792A CN115834792A (en) 2023-03-21
CN115834792B true CN115834792B (en) 2023-05-12

Family

ID=85522025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310146591.2A Active CN115834792B (en) 2023-02-22 2023-02-22 Video data processing method and system based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN115834792B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116033088B (en) * 2023-03-27 2023-06-16 山东爱特云翔计算机有限公司 Safe transmission method and system for video big data

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101586956A (en) * 2009-06-18 2009-11-25 上海交通大学 River water level monitoring method based on monocular camera
CN101751564A (en) * 2010-02-04 2010-06-23 华南理工大学 Intravenous grain extraction method based on maximal intra-neighbor difference vector diagram
CN102306375A (en) * 2011-08-31 2012-01-04 北京航空航天大学 Segmentation method for synthetic aperture radar (SAR) and visible light pixel-level fused image
WO2021093648A1 (en) * 2019-11-11 2021-05-20 阿里巴巴集团控股有限公司 Watermark information embedding method and apparatus
CN114943848A (en) * 2022-07-25 2022-08-26 南通德晋昌光电科技有限公司 Crack identification method in nickel screen laser cladding process

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7532755B2 (en) * 2004-01-22 2009-05-12 Lexmark International, Inc. Image classification using concentration ratio
CA2571666A1 (en) * 2006-12-12 2008-06-12 Diversinet Corp. Secure identity and personal information storage and transfer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant