CN111583357A - Object motion image capturing and synthesizing method based on MATLAB system - Google Patents


Info

Publication number
CN111583357A
CN111583357A (application CN202010429896.0A)
Authority
CN
China
Prior art keywords
frame
object motion
pixel point
image
threshold value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010429896.0A
Other languages
Chinese (zh)
Inventor
钱雅楠
陈吉
周欣
苟毅
高铭
罗佳
沈昌洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Institute of Engineering
Original Assignee
Chongqing Institute of Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Institute of Engineering filed Critical Chongqing Institute of Engineering
Priority to CN202010429896.0A priority Critical patent/CN111583357A/en
Publication of CN111583357A publication Critical patent/CN111583357A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of image processing and relates to an object motion image capturing and synthesizing method based on the MATLAB system, which comprises the following steps: feature detection is performed on three known consecutive frames of an object motion image; given a threshold T, binarization is performed by threshold decision; the object motion image is then captured; an inter-frame gray-level difference algorithm is applied to each pixel point of the three consecutive frames and all pixel points are classified to obtain the foreground-target pixel point sets; the foreground-target pixel point sets are then removed; finally, the background pixel point sets are reconstructed and fused to obtain the composite image. The invention addresses the problems that existing picture synthesis methods generally rely on off-the-shelf software, offer limited precision, and give users no insight into the steps and methods by which pictures are synthesized.

Description

Object motion image capturing and synthesizing method based on MATLAB system
Technical Field
The invention belongs to the technical field of image processing and particularly relates to an object motion image capturing and synthesizing method based on the MATLAB system.
Background
Image synthesis mainly exploits semantic information, such as brightness, contour and texture, shared among different objects in an image; after the objects are combined, simple interactive verification yields a meaningful image that conforms to human perception.
Disclosure of Invention
The purpose of the invention is to provide an object motion image capturing and synthesizing method based on the MATLAB system, solving the problems that existing picture synthesis methods generally rely on off-the-shelf software, offer limited precision, and give users no insight into the steps and methods of picture synthesis.
To achieve this technical purpose, the invention adopts the following technical scheme:
An object motion image capturing and synthesizing method based on the MATLAB system, the capturing and synthesizing method comprising the following steps:
Step 1: perform feature detection on three known consecutive frames of the object motion image: the images are imported into the MATLAB system, grayed with the rgb2gray tool according to a gray-value weighted-average method, and the edge points are determined by a gradient method;
Step 2: give a threshold T for the gray values of the pixel points of the object motion images grayed in step 1, and perform binarization by threshold decision:
when the gray value of a pixel point of the object motion image is smaller than the threshold T, the pixel point is set to black; otherwise, the pixel point is set to white;
Step 3: capture the object motion images binarized in step 2;
Step 4: label the three consecutive frames processed in steps 1 to 3 as frame t-1, frame t and frame t+1; take the consecutive frames t-1 and t as the first group and frames t and t+1 as the second group; apply the inter-frame gray-level difference algorithm to each pair of corresponding pixel points of the two images in the first group and store the absolute values of the differences in array {D1}; apply the same algorithm to each pair of corresponding pixel points in the second group and store the absolute values in array {D2};
Step 5: let the pixel point sets {Bi,j} and {Oi,j} be the background-image and foreground-target pixel point sets of frame t, respectively, and compare the values in arrays {D1} and {D2} with the threshold T:
when, for a point (i, j), the corresponding absolute inter-frame differences in both {D1} and {D2} are larger than the threshold T, the point is in motion across the consecutive frames t-1, t and t+1 and belongs to the foreground-target pixel point set {Oi,j};
Step 6: classify all pixel points of frame t by the method of step 5 to obtain the complete foreground-target pixel point set {Oi,j} of frame t, then remove it; the remaining points form the background pixel point set {Bi,j} of frame t;
Step 7: repeat the operation on the captured frame t-1 and frame t+1 images to obtain the background pixel point set {Bi-1,j-1} of frame t-1 and the background pixel point set {Bi+1,j+1} of frame t+1, and combine them with the previously obtained background pixel point set {Bi,j};
Step 8: reconstruct and fuse the background pixel point sets {Bi-1,j-1}, {Bi,j} and {Bi+1,j+1} with the reshape function tool in the MATLAB system to obtain the composite image.
Further, the gray-value weighted-average method in step 1 assigns the weights 0.2989 R + 0.5870 G + 0.1140 B, and the gradient operator used in step 1 is the Sobel operator.
Further, the threshold given by the threshold decision method in step 2 is chosen by manual threshold selection: the threshold T is set to the threshold computed by the Sobel operator multiplied by 0.5, which serves as the binarization threshold.
Further, step 2 also includes noise reduction and foreground dilation of the object motion picture: noise reduction uses a Kalman filter whose prediction equation is X(k|k-1) = A·X(k-1|k-1) + B·u(k), and foreground dilation makes the object motion appear smoother and more distinct.
Further, the object motion image capture in step 3 comprises the following sub-steps:
Step 3.1: on the basis of the object motion image processed in step 2, thin the edges and remove discrete points; first, bright structures connected to the image border are suppressed;
Step 3.2: on the basis of step 3.1, use the opening operation to remove bright details smaller than the structuring element, attenuate narrow parts and eliminate thin protrusions;
Step 3.3: on the basis of step 3.2, use the closing operation to remove dark details smaller than the structuring element, join narrow breaks and fill contour gaps, so as to determine the maximum connected region;
Step 3.4: on the basis of step 3.3, fill the maximum connected region, apply edge segmentation, and call the bwperim function tool in the MATLAB system to segment the object motion image, find the moving object and extract the object's motion edge;
Step 3.5: on the basis of step 3.4, compute the object's centre point from the coordinates of the edge-segmented object, store it in the object motion trajectory array, and plot the object's motion trajectory curve from the centre points stored in the array.
With the technical scheme of the invention, based on the MATLAB system and calling existing algorithms, the moving object in the object motion images can be captured by feature detection, binarization, and comparison and classification of the pixel points, and three consecutive frames of object motion images can be fused through the pixel point sets, giving a good picture-synthesis effect. Each step and method of capturing and synthesizing the object motion images is set out in detail and the algorithms called are made explicit, so that users can thoroughly understand the steps and methods of picture synthesis.
The invention thus solves the problems that existing picture synthesis methods generally rely on off-the-shelf software, offer limited precision, and give users no insight into the steps and methods of picture synthesis.
Compared with the prior art, the invention has the following advantages:
1. Based on the MATLAB system and calling existing algorithms, the moving object in the object motion images can be captured by processing, comparing and classifying the pixel points of the images, and three consecutive frames can be fused through the pixel point sets, giving a good picture-synthesis effect;
2. The steps and methods of capturing and synthesizing the object motion images are set out in detail and the algorithms called are made explicit, so that users can thoroughly understand the principles, steps and methods of picture synthesis.
Drawings
The invention is further illustrated by the non-limiting examples given in the accompanying drawings:
FIG. 1 is a flow chart of the general steps of an embodiment of the object motion image capturing and synthesizing method based on the MATLAB system according to the present invention;
FIG. 2 is a schematic flow chart of step 3 of an embodiment of the method according to the present invention;
FIG. 3 is a schematic flow chart of step 2 of an embodiment of the method according to the present invention.
Detailed Description
In order that those skilled in the art can better understand the present invention, the following technical solutions are further described with reference to the accompanying drawings and examples.
As shown in Figs. 1 to 3, the capturing and synthesizing method of the present invention for object motion images based on the MATLAB system comprises the following steps:
Step 1: perform feature detection on three known consecutive frames of the object motion image: the images are imported into the MATLAB system, grayed with the rgb2gray tool according to a gray-value weighted-average method, and the edge points are determined by a gradient method;
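Step 1 can be sketched as a Python/NumPy analogue of the MATLAB operations (a hedged sketch: rgb2gray and the gradient computation are mirrored with hand-rolled equivalents, and the weights are those of the preferred embodiment):

```python
import numpy as np

def to_gray(rgb):
    # Weighted average used by MATLAB's rgb2gray: 0.2989 R + 0.5870 G + 0.1140 B
    return rgb[..., 0] * 0.2989 + rgb[..., 1] * 0.5870 + rgb[..., 2] * 0.1140

def sobel_magnitude(gray):
    # 3x3 Sobel kernels for horizontal and vertical gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = gray[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)  # gradient magnitude; large values mark edge points
```

Pixels whose gradient magnitude is large are the candidate edge points referred to in the description.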
Step 2: give a threshold T for the gray values of the pixel points of the object motion images grayed in step 1, and perform binarization by threshold decision:
when the gray value of a pixel point of the object motion image is smaller than the threshold T, the pixel point is set to black; otherwise, the pixel point is set to white;
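The threshold decision reduces to one comparison; a minimal Python stand-in (0/1 represent black/white, and sending pixels exactly equal to T to white is an assumption the description leaves open):

```python
import numpy as np

def binarize(gray, T):
    # Pixels below the threshold T become black (0); the rest become white (1).
    out = np.zeros(gray.shape, dtype=np.uint8)
    out[gray >= T] = 1
    return out
```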
Step 3: capture the object motion images binarized in step 2;
Step 4: label the three consecutive frames processed in steps 1 to 3 as frame t-1, frame t and frame t+1; take the consecutive frames t-1 and t as the first group and frames t and t+1 as the second group; apply the inter-frame gray-level difference algorithm to each pair of corresponding pixel points of the two images in the first group and store the absolute values of the differences in array {D1}; apply the same algorithm to each pair of corresponding pixel points in the second group and store the absolute values in array {D2};
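Step 4 can be sketched as follows (Python stand-in; the frame arrays are hypothetical grayscale images, and {D1}, {D2} are kept as full-size arrays rather than flat lists):

```python
import numpy as np

def frame_differences(f_prev, f_curr, f_next):
    # Inter-frame gray-level differences for the two groups:
    # D1 = |frame t - frame t-1|, D2 = |frame t+1 - frame t|
    d1 = np.abs(f_curr.astype(float) - f_prev.astype(float))
    d2 = np.abs(f_next.astype(float) - f_curr.astype(float))
    return d1, d2
```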
Step 5: let the pixel point sets {Bi,j} and {Oi,j} be the background-image and foreground-target pixel point sets of frame t, respectively, and compare the values in arrays {D1} and {D2} with the threshold T:
when, for a point (i, j), the corresponding absolute inter-frame differences in both {D1} and {D2} are larger than the threshold T, the point is in motion across the consecutive frames t-1, t and t+1 and belongs to the foreground-target pixel point set {Oi,j};
Step 6: classify all pixel points of frame t by the method of step 5 to obtain the complete foreground-target pixel point set {Oi,j} of frame t, then remove it; the remaining points form the background pixel point set {Bi,j} of frame t;
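Steps 5 and 6 reduce to one vectorized comparison; a hedged Python sketch (representing the sets {O} and {B} as coordinate sets is an illustrative choice):

```python
import numpy as np

def split_foreground_background(d1, d2, T):
    # A point moving in both frame pairs (both absolute differences exceed T)
    # belongs to the foreground set {O}; all remaining points form {B}.
    moving = (d1 > T) & (d2 > T)
    o_set = {(int(i), int(j)) for i, j in zip(*np.nonzero(moving))}
    b_set = {(int(i), int(j)) for i, j in zip(*np.nonzero(~moving))}
    return o_set, b_set
```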
Step 7: repeat the operation on the captured frame t-1 and frame t+1 images to obtain the background pixel point set {Bi-1,j-1} of frame t-1 and the background pixel point set {Bi+1,j+1} of frame t+1, and combine them with the previously obtained background pixel point set {Bi,j};
Step 8: reconstruct and fuse the background pixel point sets {Bi-1,j-1}, {Bi,j} and {Bi+1,j+1} with the reshape function tool in the MATLAB system to obtain the composite image.
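Steps 7 and 8 can be illustrated as below. This is one hedged reading of the reconstruct-and-fuse step, since the description does not fix a fusion formula: background pixels from the three frames are averaged where they overlap, and the flattened result is restored to image shape, mirroring MATLAB's reshape tool.

```python
import numpy as np

def fuse_background(frames, background_masks):
    # Accumulate background pixel values from each frame, average where more
    # than one frame contributes, and reshape the flat vector back to an image.
    h, w = frames[0].shape
    acc = np.zeros(h * w)
    cnt = np.zeros(h * w)
    for frame, mask in zip(frames, background_masks):
        fm = mask.ravel().astype(bool)
        acc[fm] += frame.ravel().astype(float)[fm]
        cnt[fm] += 1
    cnt[cnt == 0] = 1            # pixels unseen by any background set stay 0
    return (acc / cnt).reshape(h, w)  # composite background image
```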
Preferably, the gray-value weighted-average method in step 1 assigns the weights 0.2989 R + 0.5870 G + 0.1140 B, and the gradient operator used in step 1 is the Sobel operator.
Preferably, the threshold given by the threshold decision method in step 2 is chosen by manual threshold selection: the threshold T is set to the threshold computed by the Sobel operator multiplied by 0.5, which serves as the binarization threshold.
Preferably, step 2 also includes noise reduction and foreground dilation of the object motion picture: noise reduction uses a Kalman filter whose prediction equation is X(k|k-1) = A·X(k-1|k-1) + B·u(k), and foreground dilation makes the object motion appear smoother and more distinct.
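The Kalman relation in the preceding paragraph, X(k|k-1) = A·X(k-1|k-1) + B·u(k), is the state-prediction step of the filter; a minimal sketch (the matrices A, B and the control input u are illustrative placeholders):

```python
import numpy as np

def kalman_predict(x_prev, A, B, u):
    # State prediction: X(k|k-1) = A * X(k-1|k-1) + B * u(k)
    return A @ x_prev + B @ u
```

For example, with a constant-velocity model A = [[1, 1], [0, 1]] the position component of the state is advanced by the velocity component at each prediction.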
Preferably, the object motion image capture in step 3 comprises the following sub-steps:
Step 3.1: on the basis of the object motion image processed in step 2, thin the edges and remove discrete points; first, bright structures connected to the image border are suppressed;
Step 3.2: on the basis of step 3.1, use the opening operation to remove bright details smaller than the structuring element, attenuate narrow parts and eliminate thin protrusions;
Step 3.3: on the basis of step 3.2, use the closing operation to remove dark details smaller than the structuring element, join narrow breaks and fill contour gaps, so as to determine the maximum connected region;
Step 3.4: on the basis of step 3.3, fill the maximum connected region, apply edge segmentation, and call the bwperim function tool in the MATLAB system to segment the object motion image, find the moving object and extract the object's motion edge;
Step 3.5: on the basis of step 3.4, compute the object's centre point from the coordinates of the edge-segmented object, store it in the object motion trajectory array, and plot the object's motion trajectory curve from the centre points stored in the array.
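The sub-steps above rest on binary morphology; a hedged Python sketch with hand-rolled square structuring elements (in MATLAB these would be imopen, imclose and a regionprops-style centroid; the 3x3 element size is an assumption):

```python
import numpy as np

def dilate(img, k=3):
    # Binary dilation with a k x k square structuring element
    h, w = img.shape
    p = k // 2
    padded = np.pad(img, p)
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def erode(img, k=3):
    # Binary erosion; the border is padded with 1s so it is not eroded away
    h, w = img.shape
    p = k // 2
    padded = np.pad(img, p, constant_values=1)
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def opening(img, k=3):
    # Opening removes bright details smaller than the structuring element (step 3.2)
    return dilate(erode(img, k), k)

def closing(img, k=3):
    # Closing joins narrow breaks and fills small dark gaps (step 3.3)
    return erode(dilate(img, k), k)

def centroid(mask):
    # Object centre point (step 3.5) from the coordinates of foreground pixels
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())
```

Collecting the centroid of each processed frame into a list reproduces the motion trajectory array described in step 3.5.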
With the technical scheme of the invention, based on the MATLAB system and calling existing algorithms, the moving object in the object motion images can be captured by feature detection, binarization, and comparison and classification of the pixel points, and three consecutive frames of object motion images can be fused through the pixel point sets, giving a good picture-synthesis effect. Each step and method of capturing and synthesizing the object motion images is set out in detail and the algorithms called are made explicit, so that users can thoroughly understand the steps and methods of picture synthesis.
The invention thus solves the problems that existing picture synthesis methods generally rely on off-the-shelf software, offer limited precision, and give users no insight into the steps and methods of picture synthesis.
The foregoing embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the invention; accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present invention.

Claims (5)

1. An object motion image capturing and synthesizing method based on the MATLAB system, characterized in that the capturing and synthesizing method comprises the following steps:
Step 1: perform feature detection on three known consecutive frames of the object motion image: the images are imported into the MATLAB system, grayed with the rgb2gray tool according to a gray-value weighted-average method, and the edge points are determined by a gradient method;
Step 2: give a threshold T for the gray values of the pixel points of the object motion images grayed in step 1, and perform binarization by threshold decision:
when the gray value of a pixel point of the object motion image is smaller than the threshold T, the pixel point is set to black; otherwise, the pixel point is set to white;
Step 3: capture the object motion images binarized in step 2;
Step 4: label the three consecutive frames processed in steps 1 to 3 as frame t-1, frame t and frame t+1; take the consecutive frames t-1 and t as the first group and frames t and t+1 as the second group; apply the inter-frame gray-level difference algorithm to each pair of corresponding pixel points of the two images in the first group and store the absolute values of the differences in array {D1}; apply the same algorithm to each pair of corresponding pixel points in the second group and store the absolute values in array {D2};
Step 5: let the pixel point sets {Bi,j} and {Oi,j} be the background-image and foreground-target pixel point sets of frame t, respectively, and compare the values in arrays {D1} and {D2} with the threshold T:
when, for a point (i, j), the corresponding absolute inter-frame differences in both {D1} and {D2} are larger than the threshold T, the point is in motion across the consecutive frames t-1, t and t+1 and belongs to the foreground-target pixel point set {Oi,j};
Step 6: classify all pixel points of frame t by the method of step 5 to obtain the complete foreground-target pixel point set {Oi,j} of frame t, then remove it; the remaining points form the background pixel point set {Bi,j} of frame t;
Step 7: repeat the operation on the captured frame t-1 and frame t+1 images to obtain the background pixel point set {Bi-1,j-1} of frame t-1 and the background pixel point set {Bi+1,j+1} of frame t+1, and combine them with the previously obtained background pixel point set {Bi,j};
Step 8: reconstruct and fuse the background pixel point sets {Bi-1,j-1}, {Bi,j} and {Bi+1,j+1} with the reshape function tool in the MATLAB system to obtain the composite image.
2. The MATLAB-system-based object motion image capturing and synthesizing method of claim 1, characterized in that the gray-value weighted-average method in step 1 assigns the weights 0.2989 R + 0.5870 G + 0.1140 B, and the gradient operator used in step 1 is the Sobel operator.
3. The MATLAB-system-based object motion image capturing and synthesizing method according to any one of claims 1-2, characterized in that the threshold given by the threshold decision method in step 2 is chosen by manual threshold selection: the threshold T is set to the threshold computed by the Sobel operator multiplied by 0.5, which serves as the binarization threshold.
4. The MATLAB-system-based object motion image capturing and synthesizing method of claim 1, characterized in that step 2 further comprises noise reduction and foreground dilation of the object motion picture, wherein noise reduction uses a Kalman filter whose prediction equation is X(k|k-1) = A·X(k-1|k-1) + B·u(k), and the foreground dilation makes the object motion appear smoother and more distinct.
5. The MATLAB-system-based object motion image capturing and synthesizing method of claim 1, characterized in that the object motion image capture in step 3 comprises the following sub-steps:
Step 3.1: on the basis of the object motion image processed in step 2, thin the edges and remove discrete points; first, bright structures connected to the image border are suppressed;
Step 3.2: on the basis of step 3.1, use the opening operation to remove bright details smaller than the structuring element, attenuate narrow parts and eliminate thin protrusions;
Step 3.3: on the basis of step 3.2, use the closing operation to remove dark details smaller than the structuring element, join narrow breaks and fill contour gaps, so as to determine the maximum connected region;
Step 3.4: on the basis of step 3.3, fill the maximum connected region, apply edge segmentation, and call the bwperim function tool in the MATLAB system to segment the object motion image, find the moving object and extract the object's motion edge;
Step 3.5: on the basis of step 3.4, compute the object's centre point from the coordinates of the edge-segmented object, store it in the object motion trajectory array, and plot the object's motion trajectory curve from the centre points stored in the array.
CN202010429896.0A 2020-05-20 2020-05-20 Object motion image capturing and synthesizing method based on MATLAB system Pending CN111583357A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010429896.0A CN111583357A (en) 2020-05-20 2020-05-20 Object motion image capturing and synthesizing method based on MATLAB system


Publications (1)

Publication Number Publication Date
CN111583357A true CN111583357A (en) 2020-08-25

Family

ID=72126734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010429896.0A Pending CN111583357A (en) 2020-05-20 2020-05-20 Object motion image capturing and synthesizing method based on MATLAB system

Country Status (1)

Country Link
CN (1) CN111583357A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113091759A (en) * 2021-03-11 2021-07-09 安克创新科技股份有限公司 Pose processing and map building method and device
CN116434126A (en) * 2023-06-13 2023-07-14 清华大学 Method and device for detecting micro-vibration speed of crops
CN116448754A (en) * 2023-06-13 2023-07-18 清华大学 Crop lodging resistance measurement method and device, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000082145A (en) * 1998-01-07 2000-03-21 Toshiba Corp Object extraction device
CN1756312A (en) * 2004-09-30 2006-04-05 中国科学院计算技术研究所 A kind of image synthesizing method with sport foreground
CN103077530A (en) * 2012-09-27 2013-05-01 北京工业大学 Moving object detection method based on improved mixing gauss and image cutting
CN106997598A (en) * 2017-01-06 2017-08-01 陕西科技大学 The moving target detecting method merged based on RPCA with three-frame difference
CN107346536A (en) * 2017-07-04 2017-11-14 广东工业大学 A kind of method and apparatus of image co-registration
CN107833242A (en) * 2017-10-30 2018-03-23 南京理工大学 One kind is based on marginal information and improves VIBE moving target detecting methods
CN108133188A (en) * 2017-12-22 2018-06-08 武汉理工大学 A kind of Activity recognition method based on motion history image and convolutional neural networks
CN110415268A (en) * 2019-06-24 2019-11-05 台州宏达电力建设有限公司 A kind of moving region foreground image algorithm combined based on background differential technique and frame difference method
US20200065981A1 (en) * 2018-08-24 2020-02-27 Incorporated National University Iwate University Moving object detection apparatus and moving object detection method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU Xiujin et al., "Research on Moving Target Detection and Tracking Based on Image Fusion", Mechanical Engineering & Automation *
ZHANG Hua et al., "Research on Moving Target Detection Based on MATLAB", Journal of Jiamusi University (Natural Science Edition) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113091759A (en) * 2021-03-11 2021-07-09 安克创新科技股份有限公司 Pose processing and map building method and device
CN113091759B (en) * 2021-03-11 2023-02-28 安克创新科技股份有限公司 Pose processing and map building method and device
CN116434126A (en) * 2023-06-13 2023-07-14 清华大学 Method and device for detecting micro-vibration speed of crops
CN116448754A (en) * 2023-06-13 2023-07-18 清华大学 Crop lodging resistance measurement method and device, electronic equipment and storage medium
CN116448754B (en) * 2023-06-13 2023-09-19 清华大学 Crop lodging resistance measurement method and device, electronic equipment and storage medium
CN116434126B (en) * 2023-06-13 2023-09-19 清华大学 Method and device for detecting micro-vibration speed of crops

Similar Documents

Publication Publication Date Title
US8280165B2 (en) System and method for segmenting foreground and background in a video
EP1800259B1 (en) Image segmentation method and system
US7796822B2 (en) Foreground/background segmentation in digital images
US7860311B2 (en) Video object segmentation method applied for rainy situations
KR101670282B1 (en) Video matting based on foreground-background constraint propagation
CN102567727B (en) Method and device for replacing background target
Wang et al. Automatic natural video matting with depth
US8773548B2 (en) Image selection device and image selecting method
CN111583357A (en) Object motion image capturing and synthesizing method based on MATLAB system
CN108447068B (en) Ternary diagram automatic generation method and foreground extraction method using ternary diagram
CN107204006A (en) A kind of static target detection method based on double background difference
CN111563908B (en) Image processing method and related device
JP2020129276A (en) Image processing device, image processing method, and program
KR101906796B1 (en) Device and method for image analyzing based on deep learning
CN107392879B (en) A kind of low-light (level) monitoring image Enhancement Method based on reference frame
CN104966266A (en) Method and system to automatically blur body part
JP2002522836A (en) Still image generation method and apparatus
CN112288780B (en) Multi-feature dynamically weighted target tracking algorithm
JP5286215B2 (en) Outline extracting apparatus, outline extracting method, and outline extracting program
KR101600617B1 (en) Method for detecting human in image frame
CN106951831B (en) Pedestrian detection tracking method based on depth camera
CN112532938B (en) Video monitoring system based on big data technology
CN116263942A (en) Method for adjusting image contrast, storage medium and computer program product
Sung et al. Feature based ghost removal in high dynamic range imaging
CN116029895B (en) AI virtual background implementation method, system and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200825