# Overcrowding Detection Algorithm for Spark


## Requirements

- Tested with Spark 2.0.1 (the version referenced in the Testing section below)

- Install the required APT packages:

  ```shell
  apt-get install libgeos-dev libspatialindex-dev
  ```

- Install the Python requirements:

  ```shell
  pip install -r requirements.txt
  ```

## Testing

Configure the environment variables to point to your Spark installation directory:

```shell
export SPARK_HOME=/opt/spark-2.0.1-bin-hadoop2.7
export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.10.3-src.zip:$PYTHONPATH
```
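Before running the tests, it can help to verify that these variables are actually visible to Python. Below is a small stdlib-only sanity check; the function name `check_spark_env` is ours for illustration, not part of the repository:

```python
import os

def check_spark_env(env):
    """Return a list of problems with the Spark-related environment
    variables described above. `env` is a mapping like os.environ."""
    problems = []
    spark_home = env.get("SPARK_HOME")
    if not spark_home:
        problems.append("SPARK_HOME is not set")
    elif not os.path.isdir(spark_home):
        problems.append("SPARK_HOME is not a directory: %s" % spark_home)
    pythonpath = env.get("PYTHONPATH", "")
    if "py4j" not in pythonpath:
        problems.append("PYTHONPATH does not include the py4j source zip")
    return problems

if __name__ == "__main__":
    # Print any misconfiguration found in the current shell environment.
    for problem in check_spark_env(os.environ):
        print(problem)
```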

Run the tests with:

```shell
nosetests -s
```

## Launching the sample application

Launch the sample Python application locally with:

```shell
spark-submit --master local overcrowd_simulator.py
```

## Launching all experiments (locally)

Change to the main directory and run the following script:

```shell
experiment/run_all.sh
```

The output of each experiment is written to a separate CSV file in the local directory (`devices.csv`, `cells.csv`, `internal.csv`).
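The column layout of those CSV files is not documented here, so the names below are made up purely for illustration. A quick way to inspect any of the result files with the standard library:

```python
import csv
import io

# Hypothetical sample standing in for one of the result files;
# the real columns of devices.csv/cells.csv/internal.csv may differ.
sample = io.StringIO(
    "timestamp,cell_id,device_count\n"
    "1478000000,42,17\n"
    "1478000060,42,23\n"
)

# csv.DictReader maps each row to a dict keyed by the header line.
rows = list(csv.DictReader(sample))
for row in rows:
    print(row["cell_id"], row["device_count"])
```

For a real run, replace the `io.StringIO` sample with `open("devices.csv")`.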