Added scene list generation and export to .csv file.
Breakthrough committed Jun 10, 2014
1 parent 2bddaaf commit 52be473
Showing 2 changed files with 136 additions and 20 deletions.
26 changes: 15 additions & 11 deletions README.md
@@ -13,10 +13,10 @@ Note that PySceneDetect is currently in alpha (see Current Status below for details)
Download & Requirements
----------------------------------------------------------

You can download the latest release of [PySceneDetect from here](https://github.com/Breakthrough/PySceneDetect/releases). To run PySceneDetect, you will need:
You can download [PySceneDetect from here](https://github.com/Breakthrough/PySceneDetect/releases); to run it, you will need:

- [Python 2 / 3](https://www.python.org/) (tested on 2.7.X, untested but should work on 3.X)
- OpenCV-Python Bindings (can usually be found in Linux package repos already, Windows users can find [prebuilt binaries for Python 2.7 here](http://www.lfd.uci.edu/~gohlke/pythonlibs/#opencv))
- OpenCV Python Module (usually available in Linux package repos; Windows users can find [prebuilt binaries for Python 2.7 here](http://www.lfd.uci.edu/~gohlke/pythonlibs/#opencv))
- [Numpy](http://sourceforge.net/projects/numpy/)

To ensure you have all the requirements, open a `python` interpreter, and ensure you can `import numpy` and `import cv2` without any errors.
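
If you would rather script that check than type it interactively, a minimal sketch (both modules expose a standard `__version__` attribute):

```python
# Quick requirements check for PySceneDetect (runs on Python 2.7 or 3.x).
import sys

try:
    import cv2    # OpenCV Python module
    import numpy  # NumPy
except ImportError as err:
    sys.exit('Missing requirement: %s' % err)

print('OpenCV version: %s' % cv2.__version__)
print('NumPy version:  %s' % numpy.__version__)
```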
@@ -29,9 +29,9 @@ To run PySceneDetect, you can invoke `python scenedetect.py` or `./scenedetect.py`

./scenedetect.py --help

To perform threshold-based analysis with the default parameters, on a video file named `myvideo.mp4`:
To perform threshold-based analysis with the default parameters on a video named `myvideo.mp4`, saving the list of detected scenes to `myvideo_scenes.csv` (the scenes are also printed to the terminal):

./scenedetect.py --input myvideo.mp4
./scenedetect.py --input myvideo.mp4 --output myvideo_scenes.csv
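
The output file is plain comma-separated text; as a rough sketch of reading it back into Python (the column names match the `scene,timecode(ms),frame` header the script writes):

```python
# Minimal sketch: load the scene list written by scenedetect.py.
import csv

with open('myvideo_scenes.csv') as scene_file:
    for row in csv.DictReader(scene_file):
        print('Scene %s starts at %.0f ms (frame %s)' % (
            row['scene'], float(row['timecode(ms)']), row['frame']))
```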

To perform threshold-based analysis with a threshold intensity of 16 and a match percent of 90:

@@ -44,22 +44,26 @@ Detailed descriptions of the above parameters, as well as their default values,
You can download the file `testvideo.mp4` as well as the expected output `testvideo-results.txt` [from here](https://github.com/Breakthrough/PySceneDetect/tree/resources/tests).


Current Status / Known Issues
Current Status
----------------------------------------------------------

As of version `0.1.0-alpha`, although fade in/outs are detected in videos, they are not interpolated into scenes. In addition, the results are displayed to `stdout`, and not in any particular timecode format. These issues will be addressed in the following version, before moving towards content-aware scene detection.
See [the Releases page](https://github.com/Breakthrough/PySceneDetect/releases) for a list of all versions, changes, and download links. The latest stable release of PySceneDetect is `v0.2.0-alpha`.

### Immediate Work
### Current Features

- analyzes passed video file for changes in intensity/content (currently based on mean pixel value/brightness)
- detects fade-in and fade-out based on user-defined threshold
- exports list of scenes to .CSV file (both timecodes and frame numbers)

### In Process

- allow specification of an output file
- export timecodes in multiple formats to match popular applications
  - `mkvmerge` format: `HH:MM:SS.nnnnn`, comma-separated (see the sketch after this list)
- interpolate between fade in/outs to determine approximate scene cut time
- adaptive or user-defined bias for fade in/out interpolation
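
Timecode export is not part of this commit; as a minimal sketch (the helper name `ms_to_timecode` is hypothetical), the millisecond values the scene list already stores could be formatted into an `mkvmerge`-style `HH:MM:SS` timecode, shown here to millisecond precision:

```python
def ms_to_timecode(position_ms):
    """Format a position in milliseconds as HH:MM:SS.nnn."""
    milliseconds = int(round(position_ms))
    hours, remainder = divmod(milliseconds, 3600 * 1000)
    minutes, remainder = divmod(remainder, 60 * 1000)
    seconds, milliseconds = divmod(remainder, 1000)
    return '%02d:%02d:%02d.%03d' % (hours, minutes, seconds, milliseconds)

# e.g. ','.join(ms_to_timecode(time_ms) for time_ms, frame in scene_list)
print(ms_to_timecode(83500))  # prints 00:01:23.500
```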

### Future Plans
### Planned Features

- export scenes in chapter/XML format
- adaptive or user-defined bias for fade in/out interpolation
- content-aware scene detection


130 changes: 121 additions & 9 deletions scenedetect.py
@@ -41,7 +41,7 @@
import numpy


VERSION_STRING = '0.1.0-alpha'
VERSION_STRING = '0.2.0-alpha'
ABOUT_STRING = """
PySceneDetect %s
-----------------------------------------------
@@ -139,6 +139,70 @@ def analyze_video_threshold(cap, threshold, min_percent, block_size, show_output
    return fade_list


def generate_scene_list(fade_list, csv_out = None, include_last = False, show_output = True):
    """ Creates a list of scenes from a sorted list of fades in/out.
    A new scene is created at the beginning of the video ("scene zero"), and
    between each fade-out and fade-in in fade_list.
    Args:
        fade_list: A list of fades generated by analyze_video_threshold().
        csv_out: A file-like object to write the scene information to.
        include_last: If true, and if the last fade in fade_list is a fade-out,
            appends a final scene at the index of the fade-out.
        show_output: True to print updates while detecting, False otherwise.
    Returns:
        A list of scenes as tuples in the form (time, frame number).
    """
    if csv_out:
        csv_out.write("scene,timecode(ms),frame\n")

    if show_output:
        print ''
        print '----------------------------------------'
        print ' SCENE # | TIME | FRAME # '
        print '----------------------------------------'

    scene_list = []
    scene_list.append((0,0)) # Scenes in form (timecode, frame number)

    # Ensure fade list starts on fade in and ends with fade out.
    # (fade type 0 == out, 1 == in)
    if not (fade_list[0][0] == 1):
        fade_list.insert(0, ( 0, cap.get(cv2.cv.CV_CAP_PROP_POS_MSEC),
                              cap.get(cv2.cv.CV_CAP_PROP_POS_FRAMES) ) )
    if not (fade_list[-1][0] == 0):
        fade_list.append( ( 0, cap.get(cv2.cv.CV_CAP_PROP_POS_MSEC),
                            cap.get(cv2.cv.CV_CAP_PROP_POS_FRAMES) ) )

    last_fade = None

    for fade in fade_list:
        # We create a new scene for each fade-in we detect.
        if (fade[0] == 1 and last_fade):
            scene_list.append( ((fade[1] + last_fade[1]) / 2.0,
                                (fade[2] + last_fade[2]) / 2 ) )
        last_fade = fade

    if include_last and last_fade[0] == 0:
        scene_list.append((last_fade[1], last_fade[2]))

    if csv_out or show_output:
        for scene_idx in range(len(scene_list)):
            if csv_out:
                csv_out.write("%d,%f,%d\n" % (
                    scene_idx, scene_list[scene_idx][0], scene_list[scene_idx][1]) )
            if show_output:
                print " %3d | %9d ms | %10d" % (
                    scene_idx, scene_list[scene_idx][0], scene_list[scene_idx][1] )
    if show_output:
        print '-----------------------------------------'
        print ''

    return scene_list
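
# Worked example (illustrative, not part of this commit): fades are tuples of
# (type, time in ms, frame), with type 0 == fade-out and 1 == fade-in. Given a
# fade-out at (0, 10000.0, 300) followed by a fade-in at (1, 12000.0, 360), the
# loop above appends the midpoint (11000.0, 330) as the start of a new scene,
# so the returned list would begin [(0, 0), (11000.0, 330), ...].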


def int_type_check(min_val, max_val = None, metavar = None):
    """ Creates an argparse type for a range-limited integer.
@@ -172,6 +236,39 @@ def _type_check(value):
    return _type_check


def int_type_check(min_val, max_val = None, metavar = None):
    """ Creates an argparse type for a range-limited integer.
    The passed argument is declared valid if it is a valid integer which
    is greater than or equal to min_val, and if max_val is specified,
    less than or equal to max_val.
    Returns:
        A function which can be passed as an argument type, when calling
        add_argument on an ArgumentParser object
    Raises:
        ArgumentTypeError: Passed argument must be integer within proper range.
    """
    if metavar == None: metavar = 'value'
    def _type_checker(value):
        value = int(value)
        valid = True
        msg = ''
        if (max_val == None):
            if (value < min_val): valid = False
            msg = 'invalid choice: %d (%s must be at least %d)' % (
                value, metavar, min_val )
        else:
            if (value < min_val or value > max_val): valid = False
            msg = 'invalid choice: %d (%s must be between %d and %d)' % (
                value, metavar, min_val, max_val )
        if not valid:
            raise argparse.ArgumentTypeError(msg)
        return value
    return _type_checker


class AboutAction(argparse.Action):
    """ Custom argparse action for displaying raw About string.
@@ -206,6 +303,9 @@ def get_cli_parser():
    parser.add_argument('-i', '--input', metavar = 'VIDEO_FILE',
        type = file, required = True,
        help = '[REQUIRED] Path to input video.')
    parser.add_argument('-o', '--output', metavar = 'SCENE_LIST',
        type = argparse.FileType('w'),
        help = 'File to store detected scenes in; comma-separated value format (.csv). Will be overwritten if exists.')
    parser.add_argument('-t', '--threshold', metavar = 'intensity',
        type = int_type_check(0, 255, 'intensity'), default = 8,
        help = '8-bit intensity value, from 0-255, to use as a fade in/out detection threshold.')
@@ -215,12 +315,12 @@ def get_cli_parser():
    parser.add_argument('-b', '--blocksize', metavar = 'rows',
        type = int_type_check(1, None, 'number of rows'), default = 32,
        help = 'Number of rows in frame to check at once, can be tuned for performance.')
    parser.add_argument('-s', '--startindex', metavar = 'offset',
        type = int, default = 0,
        help = 'Starting index for chapter/scene output.')
    parser.add_argument('-p', '--startpos', metavar = 'position',
        choices = [ 'in', 'mid', 'out' ], default = 'out',
        help = 'Where the timecode/frame number for a given scene should start relative to the fades [in, mid, or out].')
    #parser.add_argument('-s', '--startindex', metavar = 'offset',
    #    type = int, default = 0,
    #    help = 'Starting index for chapter/scene output.')
    #parser.add_argument('-p', '--startpos', metavar = 'position',
    #    choices = [ 'in', 'mid', 'out' ], default = 'out',
    #    help = 'Where the timecode/frame number for a given scene should start relative to the fades [in, mid, or out].')

    return parser

@@ -240,7 +340,6 @@ def main():
        print 'cap.isOpened() is not True after calling cap.open(..)'
        return
    else:
        print ''
        print 'Parsing video %s...' % args.input.name

    # Print video parameters (resolution, FPS, etc...)
@@ -265,11 +364,24 @@
    print "Read %d frames in %4.2f seconds (avg. %4.1f FPS)." % (
        frame_count, total_runtime, avg_framerate )

    #
    #
    cap.release()

    # Ensure we actually detected anything from the video file.
    if not len(fade_list) > 0:
        print "Error - no fades detected in video!"
        return

    # Generate list of scenes from fades, writing to CSV output if specified.
    scene_list = generate_scene_list(fade_list, args.output)
    if (args.output): args.output.close() # Close the file if it was passed.

    print "Detected %d scenes in video." % len(scene_list)
    print ""


#

if __name__ == "__main__":
    main()
