This repository has been archived by the owner on Aug 10, 2022. It is now read-only.

Added a barebones GUI file selector #21

Open · wants to merge 13 commits into main
64 changes: 50 additions & 14 deletions opticalFlow/farnebackOpticalFlow.py
@@ -5,17 +5,43 @@
Right now, in the result visualization, the intensity of a pixel's motion will change both its color and its brightness.
Brighter pixels have more motion.
The output visualization is stored in the same location as the input video with the name <input_vid_filename>_FB_FLOW.mp4

The idea is that perhaps the data about how certain pixels/features are moving across the screen could be used to figure out how the player camera / aim was changing.
"""

from tkinter import *
from tkinter import filedialog

Why do you import all and then this separately?
Can you not just use tkinter.filedialog? Do you even need the import *?

Author

Hey, sorry about that, I got lazy about my package imports.
All we need for the GUI file selector to work properly is: from tkinter import (Tk, Button, filedialog) and from tkinter.messagebox import showinfo. I can update the code and generate a new pull request later today when I have some time, if you're cool with that.

Thanks for the review.
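For reference, a minimal sketch of the trimmed-down imports described above, applied to the file-browser block from this diff (illustrative only, not the committed version):

```python
from tkinter import Tk, Button, filedialog
from tkinter.messagebox import showinfo

window = Tk()
window.geometry('300x150')
window.title('Select a Video File')

def get_file_path():
    # Open a dialog, remember the chosen video path, and confirm it to the user
    global file_path
    file_path = filedialog.askopenfilename(
        title="Select a Video File",
        filetypes=(("mp4", "*.mp4"), ("mov files", "*.mov"),
                   ("wmv", "*.wmv"), ("avi", "*.avi")))
    showinfo(title='Selected File', message=file_path)

Button(window, text='Open a File', command=get_file_path).pack()
window.mainloop()
```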

Member
@krnbrz Nov 4, 2021

Fix the imports and it should be good to go

Author

Sounds good, I fixed the package imports. Everything should be set now.

from tkinter.messagebox import showinfo
import numpy as np
import cv2 as cv

# GUI FILE BROWSER------------------------------------------------------------

window = Tk()
window.geometry('300x150') # sets the size of the GUI window
window.title('Select a Video File') # creates a title for the window

# function allowing you to find/select video in GUI
def get_file_path():
global file_path
# Open and return file path
file_path = filedialog.askopenfilename(title = "Select a Video File", filetypes = (("mp4", "*.mp4"), ("mov files", "*.mov") ,("wmv", "*.wmv"), ("avi", "*.avi")))
showinfo(title='Selected File', message=file_path)

# function allowing you to select the output path in the GUI
def output():
global outpath
outpath = filedialog.asksaveasfilename(filetypes=[("mp4", '*.mp4')])
window.destroy()

# Creating a button to search for the input file and to select the output destination and file name
b1 = Button(window, text = 'Open a File', command = get_file_path).pack()
b2 = Button(window, text = 'Save File Name', command = output).pack()
window.mainloop()

# PARAMETERS--------------------------------

# path to input video file
vidpath = r""
vidpath = file_path

# do you want to save the output video?
savevid = True
@@ -44,11 +70,15 @@
# create black result image
hsv_img = np.zeros_like(old_frame)
hsv_img[...,1] = 255
# get features from first frame
print(f"\nRunning farneback Optical Flow on: {vidpath}")

# if saving video
if savevid:
# path to save output video
savepath = vidpath.split('.')[0] + '_FB_FLOW' + '.mp4'
filename = outpath
savepath = filename + '_FB_FLOW' + '.mp4'
print(f"Saving Output video to: {savepath}")

# get shape of video frames
height, width, channels = old_frame.shape
@@ -57,33 +87,36 @@
fourcc = cv.VideoWriter_fourcc(*'mp4v')
videoOut = cv.VideoWriter(savepath, fourcc, fps, (width, height))


# PROCESS VIDEO ---------------------------
while(True):
# get frame and convert to grayscale
_, new_frame = cap.read()
new_frame_gray = cv.cvtColor(new_frame, cv.COLOR_BGR2GRAY)
if _:
new_frame_gray = cv.cvtColor(new_frame, cv.COLOR_BGR2GRAY)

# do Farneback optical flow
flow = cv.calcOpticalFlowFarneback(old_frame_gray, new_frame_gray, None, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags)
flow = cv.calcOpticalFlowFarneback(old_frame_gray, new_frame_gray, None, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags)

# conversion
mag, ang = cv.cartToPolar(flow[...,0], flow[...,1])
mag, ang = cv.cartToPolar(flow[...,0], flow[...,1])

# draw onto the result image - color is determined by direction, brightness is by magnitude of motion
#hsv_img[...,0] = ang*180/np.pi/2
#hsv_img[...,2] = cv.normalize(mag, None, 0, 255, cv.NORM_MINMAX)

# color and brightness by magnitude
hsv_img[...,0] = cv.normalize(mag, None, 0, 255, cv.NORM_MINMAX)
hsv_img[...,1] = cv.normalize(mag, None, 0, 255, cv.NORM_MINMAX)
hsv_img[...,2] = cv.normalize(mag, None, 0, 255, cv.NORM_MINMAX)
hsv_img[...,0] = cv.normalize(mag, None, 0, 255, cv.NORM_MINMAX)
hsv_img[...,1] = cv.normalize(mag, None, 0, 255, cv.NORM_MINMAX)
hsv_img[...,2] = cv.normalize(mag, None, 0, 255, cv.NORM_MINMAX)

bgr_img = cv.cvtColor(hsv_img, cv.COLOR_HSV2BGR)
bgr_img = cv.cvtColor(hsv_img, cv.COLOR_HSV2BGR)

# show the image and break out if ESC pressed
cv.imshow('Farneback Optical Flow', bgr_img)
k = cv.waitKey(30) & 0xff
if k == 27:
cv.imshow('Farneback Optical Flow', bgr_img)
k = cv.waitKey(30) & 0xff
else:
k == 27
break

# write frames to new output video
@@ -95,4 +128,7 @@

# cleanup
videoOut.release()
cv.destroyAllWindows()
cv.destroyAllWindows()

# after video is finished
print('\nComplete!\n')
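A side note on the read loop above: cap.read() returns a success flag plus a frame, and that flag is what signals the end of the video. A minimal sketch of how the check is commonly written (variable names here are illustrative, not the PR's):

```python
import cv2 as cv

cap = cv.VideoCapture(file_path)  # file_path comes from the GUI selector above

while True:
    ok, frame = cap.read()
    if not ok:  # no more frames (or a read error): stop processing
        break

    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    cv.imshow('frame', gray)

    if cv.waitKey(30) & 0xff == 27:  # ESC exits early
        break

cap.release()
cv.destroyAllWindows()
```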
34 changes: 31 additions & 3 deletions opticalFlow/lucasKanadeOpticalFlow.py
@@ -10,13 +10,41 @@
The idea is that perhaps the data about how certain pixels/features are moving across the screen could be used to figure out how the player camera / aim was changing.
"""

from tkinter import *
from tkinter import filedialog
import cv2 as cv
import numpy as np
from tkinter.messagebox import showinfo

# GUI FILE BROWSER------------------------------------------------------------

window = Tk()
window.geometry('300x150') # sets the size of the GUI window
window.title('Select a Video File') # creates a title for the window

# function allowing you to find/select video in GUI
def get_file_path():
global file_path
# Open and return file path
file_path = filedialog.askopenfilename(title = "Select a Video File", filetypes = (("mp4", "*.mp4"), ("mov files", "*.mov") ,("wmv", "*.wmv"), ("avi", "*.avi")))
showinfo(title='Selected File', message=file_path)

# function allowing you to select the output path in the GUI
def output():
global outpath
outpath = filedialog.asksaveasfilename(filetypes=[("mp4", '*.mp4')])
window.destroy()

# Creating a button to search for the input file and to select the output destination and file name
b1 = Button(window, text = 'Open a File', command = get_file_path).pack()
b2 = Button(window, text = 'Save File Name', command = output).pack()
window.mainloop()


# PARAMETERS------------------------------------------------------------------

# path to input video file
vidpath = r""
vidpath = file_path

# do you want to save the video?
savevid = True
@@ -83,8 +111,8 @@
# if saving video
if savevid:
# path to save output video
pathparts = vidpath.split('.')
savepath = '.'+ vidpath.split('.')[-2] + '_LK_FLOW' + '.mp4'
filename = outpath
savepath = filename + '_LK_FLOW' + '.mp4'
print(f"Saving Output video to: {savepath}")

# get shape of video frames
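Note that the save-path lines in both scripts append the '_FB_FLOW' / '_LK_FLOW' tag directly to whatever asksaveasfilename returns, which assumes the user typed a name without an extension. One way the same suffixing could be done more defensively with pathlib (a sketch, not part of this PR):

```python
from pathlib import Path

def flow_output_path(outpath, tag):
    # Append e.g. '_LK_FLOW' to the chosen name and force a .mp4 extension,
    # even if the user already typed something like 'clip.mp4'
    p = Path(outpath)
    return str(p.with_name(p.stem + tag + '.mp4'))

# flow_output_path('runs/match1.mp4', '_LK_FLOW') -> 'runs/match1_LK_FLOW.mp4'
```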
80 changes: 80 additions & 0 deletions yolov3_testing/coco.names
@@ -0,0 +1,80 @@
person
bicycle
car
motorbike
aeroplane
bus
train
truck
boat
traffic light
fire hydrant
stop sign
parking meter
bench
bird
cat
dog
horse
sheep
cow
elephant
bear
zebra
giraffe
backpack
umbrella
handbag
tie
suitcase
frisbee
skis
snowboard
sports ball
kite
baseball bat
baseball glove
skateboard
surfboard
tennis racket
bottle
wine glass
cup
fork
knife
spoon
bowl
banana
apple
sandwich
orange
broccoli
carrot
hot dog
pizza
donut
cake
chair
sofa
pottedplant
bed
diningtable
toilet
tvmonitor
laptop
mouse
remote
keyboard
cell phone
microwave
oven
toaster
sink
refrigerator
book
clock
vase
scissors
teddy bear
hair drier
toothbrush
Binary file added yolov3_testing/csgo_tracking.mp4
Binary file not shown.
8 changes: 8 additions & 0 deletions yolov3_testing/readme.md
@@ -0,0 +1,8 @@
Testing the premade "yolov3" CNN model on CSGO footage.

It does a pretty good job at tracking human objects in the environment, but it also picks up and IDs some of the background as well.

Thanks to @CaptnBaguette for bringing it up in the data-classification channel in the Discord.

I did not see any code posted for it, so I went ahead and made a very basic example.
Everything you need to run the script is included EXCEPT for the model weights. Those can be retrieved from: https://pjreddie.com/darknet/yolo/ (it's the 76MB one).
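For anyone reproducing this, a minimal sketch of loading the network with OpenCV's DNN module and running it on the first frame of the included clip. The file names yolov3.cfg and yolov3.weights are assumptions based on the standard Darknet release, not files listed in this diff:

```python
import cv2 as cv

# class labels shipped with this PR
with open('coco.names') as f:
    classes = [line.strip() for line in f if line.strip()]

# weights come from https://pjreddie.com/darknet/yolo/ (the 76 MB yolov3.weights)
net = cv.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')

cap = cv.VideoCapture('csgo_tracking.mp4')
ok, frame = cap.read()
cap.release()
if ok:
    blob = cv.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    # each detection row is [cx, cy, w, h, objectness, 80 class scores]
    best = max((float(det[5:].max()) for out in outputs for det in out), default=0.0)
    print(f'{len(classes)} classes, {len(outputs)} output layers, '
          f'best class score on first frame: {best:.2f}')
```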