OTPSetup is a Python library for automating and managing the deployment of OpenTripPlanner instances on AWS. It was developed for the OTP-Deployer application (http://deployer.opentripplanner.org).
A typical setup consists of a collection of EC2 instances/AMIs and S3 storage buckets, with AMQP used to manage the inter-component communication workflow. Below is an overview of the current OTP-Deployer setup:
EC2-based Components:
- Controller instance: Hosts the core components, including the RabbitMQ message server, Django webapp (public and admin), and database. Manages communication between the various other components described below.
- Validator instance: Dedicated instance for validating submitted GTFS feeds. Normally idle, the validator instance is woken up by the controller when GTFS files are submitted, and receives the feed locations on S3 via AMQP message.
- Graph-builder instance: Dedicated instance for building OTP graphs. Normally idle when not building a graph, the graph-builder instance is woken up by the controller when a graph needs to be built, and receives the GTFS locations on S3 via AMQP message.
- Proxy server instance: Runs nginx server and automates setup of DNS redirect from public domains (e.g. ___.deployer.opentripplanner.org) to internal EC2 instances. Communicates with the controller via AMQP.
- Deployment instance AMI: An AMI that is used to create instances that host an OTP graph and webapp. There are two deployment instance types (each with its own AMI): single-deployment, where each OTP instance gets its own dedicated EC2 instance, created as needed; and multi-deployment, where deployment host instances are created in advance by the admin and OTP instances are assigned to them as the graphs are created, with each host capable of hosting as many OTP instances as its memory will allow. Communicates with the controller via AMQP.
(See the WORKFLOW file for more detailed documentation of the messaging workflow.)
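For a sense of what this AMQP traffic looks like, here is a minimal sketch of how the controller might notify a worker instance, assuming the kombu client (which the /kombu vhost configured later in this README suggests). The exchange, queue, and message fields shown are illustrative assumptions, not OTPSetup's actual schema:

    from kombu import Connection, Exchange, Queue

    # Hypothetical exchange/queue names -- the real ones live in the OTPSetup code
    exchange = Exchange("otpsetup", type="direct")
    queue = Queue("validate_request", exchange, routing_key="validate_request")

    with Connection(hostname="localhost", userid="kombu",
                    password="password", virtual_host="/kombu") as conn:
        producer = conn.Producer(serializer="json")
        producer.publish(
            {"request_id": 123, "files": ["s3://otp-gtfs/feeds/agency.zip"]},
            exchange=exchange, routing_key="validate_request", declare=[queue])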
S3 and Other Storage Components:
- "otp-gtfs" bucket: user-uploaded GTFS files will be stored here
- "otp-graphs" bucket: successfully built graphs are stored here, along with graph builder output and instance request data (GTFS feeds, OSM extract, GB config file, etc.)
- "otpsetup-resources" bucket: should contain centralized settings template file (settings-template.py) and current versions of following OTP files: graph-builder.jar, opentripplanner-api-webapp.war, and opentripplanner-webapp.war
- planet.osm volume: a dedicated volume that contains a copy of planet.osm. The graph-builder attaches to this upon startup
- NED tile library: a collection of 1-degree NED tiles, downloaded by the graph builder when building NED-enabled graphs
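As an illustration of how components interact with these buckets, a minimal boto sketch of storing an uploaded feed in "otp-gtfs" might look like the following (the key layout and credential strings are placeholders, not OTPSetup's actual code):

    from boto.s3.connection import S3Connection
    from boto.s3.key import Key

    conn = S3Connection("AWS_ACCESS_KEY", "AWS_SECRET_KEY")   # placeholder credentials
    bucket = conn.get_bucket("otp-gtfs")

    key = Key(bucket, "feeds/request-123/agency.zip")         # hypothetical key layout
    key.set_contents_from_filename("/tmp/agency.zip")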
To get started with OTPSetup (single-deployment-per-host mode):
** SETTING UP THE CONTROLLER INSTANCE **
Install the RabbitMQ server:
$ apt-get install rabbitmq-server (or equivalent)
Clone the OTPSetup repo into a local directory (e.g. /var/otp/) and run the setup script:
$ git clone git://github.com/openplans/OTPSetup.git
$ cd OTPSetup
$ python setup.py install
Install Django-Registration:
$ wget https://bitbucket.org/ubernostrum/django-registration/downloads/django-registration-0.8-alpha-1.tar.gz
$ tar -xzf django-registration-0.8-alpha-1.tar.gz
$ cd django-registration-0.8-alpha-1/
$ python setup.py install
$ easy_install django-registration-defaults
Set up the RabbitMQ server:
$ rabbitmqctl add_vhost /kombu
$ rabbitmqctl add_user kombu [password]
$ rabbitmqctl set_permissions -p /kombu kombu ".*" ".*" ".*"
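To check that the broker accepts these credentials from Python (a quick sanity check, not part of OTPSetup itself):

    from kombu import Connection

    conn = Connection(hostname="localhost", userid="kombu",
                      password="password", virtual_host="/kombu")
    conn.connect()          # raises if the user, vhost, or permissions are wrong
    print(conn.connected)   # True once the broker has accepted the credentials
    conn.release()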
Finish the Django setup:
(Note: if you wish to use a database other than SQLite for Django, set it up here and modify settings.py as appropriate; a sketch follows the commands below)
$ python manage.py overload admin client
$ python manage.py syncdb
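For example, switching Django from SQLite to PostgreSQL is a matter of editing DATABASES in settings.py (names and credentials below are placeholders):

    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql_psycopg2",
            "NAME": "otpsetup",
            "USER": "otpsetup",
            "PASSWORD": "password",
            "HOST": "localhost",
            "PORT": "5432",
        }
    }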
Copy the otpsetup-controller script from OTPSetup/init.d to /etc/init.d and make it executable:
$ cp /var/otp/OTPSetup/init.d/otpsetup-controller /etc/init.d
$ chmod a+x /etc/init.d/otpsetup-controller
(If OTPSetup was installed to a directory other than /var/otp, modify otpsetup-controller to reflect this)
Modify the "runserver" line to point to the outside address to the django front-end.
Register the script as a bootup script using update-rc.d (on Debian-like systems) or equivalent:
$ update-rc.d otpsetup-controller defaults
Create controller-specific keys and specify them in OTPSetup/
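These keys are what let the controller start and stop the other instances via boto. As an illustration only (the real setting names in OTPSetup may differ):

    from boto.ec2.connection import EC2Connection

    AWS_ACCESS_KEY_ID = "..."        # hypothetical setting names
    AWS_SECRET_ACCESS_KEY = "..."

    ec2 = EC2Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
    ec2.start_instances(["i-xxxxxxxx"])   # e.g. wake an idle validator or graph-builder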
Restart the instance to invoke the boot script.
** SETTING UP THE VALIDATOR INSTANCE **
Clone the OTPSetup repo into a local directory (e.g. /var/otp/):
$ git clone git://github.com/openplans/OTPSetup.git
Copy the otpsetup-val script from OTPSetup/init.d to /etc/init.d and make it executable
$ cp /var/otp/OTPSetup/init.d/otpsetup-val /etc/init.d
$ chmod a+x /etc/init.d/otpsetup-val
(If OTPSetup was installed to a directory other than /var/otp, modify otpsetup-val to reflect this)
Register the script as a bootup script using update-rc.d (on Debian-like systems) or equivalent:
$ update-rc.d otpsetup-val defaults
Restart the instance to invoke the boot script.
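The otpsetup-val boot script starts a daemon that waits for validation requests from the controller. A minimal sketch of that kind of consumer, assuming kombu and the illustrative queue names used earlier (not OTPSetup's actual code):

    from kombu import Connection, Exchange, Queue

    exchange = Exchange("otpsetup", type="direct")                   # hypothetical
    queue = Queue("validate_request", exchange, routing_key="validate_request")

    def handle(body, message):
        for s3_path in body.get("files", []):
            print("validating %s" % s3_path)   # real code would fetch and validate the feed
        message.ack()

    with Connection(hostname="controller-host", userid="kombu",
                    password="password", virtual_host="/kombu") as conn:
        with conn.Consumer(queue, callbacks=[handle]):
            while True:
                conn.drain_events()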
** SETTING UP THE GRAPH-BUILDER INSTANCE **
Clone the OTPSetup repo into a local directory (e.g. /var/otp/):
$ git clone git://github.com/openplans/OTPSetup.git
Copy the otpsetup-gb script from OTPSetup/init.d to /etc/init.d and make it executable:
$ cp /var/otp/OTPSetup/init.d/otpsetup-gb /etc/init.d
$ chmod a+x /etc/init.d/otpsetup-gb
(If OTPSetup was installed to a directory other than /var/otp, modify otpsetup-gb to reflect this)
Register the script as a bootup script using update-rc.d (on Debian-like systems) or equivalent:
$ update-rc.d otpsetup-gb defaults
Set up the graph-builder resources directory and note its location in settings.py (see the README in OTPSetup/gb-resources).
Restart the instance to invoke the boot script.
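For context, the graph-builder daemon's job is sketched below with boto and illustrative paths (the key layout, local paths, and jar invocation are assumptions; see the actual code for details): it pulls the requested GTFS from S3, runs graph-builder.jar, and pushes the result to the "otp-graphs" bucket.

    import subprocess
    from boto.s3.connection import S3Connection
    from boto.s3.key import Key

    conn = S3Connection("AWS_ACCESS_KEY", "AWS_SECRET_KEY")   # placeholder credentials

    # Fetch the feed named in the AMQP request (key layout is hypothetical)
    feed = conn.get_bucket("otp-gtfs").get_key("feeds/request-123/agency.zip")
    feed.get_contents_to_filename("/var/otp/gtfs/agency.zip")

    # Build the graph; the config file name here is an assumption
    subprocess.check_call(["java", "-Xmx4g", "-jar", "graph-builder.jar",
                           "graph-builder.xml"])

    # Upload the finished graph to the otp-graphs bucket
    out = Key(conn.get_bucket("otp-graphs"), "graphs/request-123/Graph.obj")
    out.set_contents_from_filename("/var/otp/graph/Graph.obj")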
** SETTING UP THE PROXY SERVER INSTANCE **
Install Nginx:
$ apt-get install nginx
Clone the OTPSetup repo into a local directory (e.g. /var/otp/)
$ git clone git://github.com/openplans/OTPSetup.git
Copy the otpsetup-deploy script to /etc/init.d and make it executable
$ cp /var/otp/OTPSetup/init.d/otpsetup-deploy /etc/init.d
$ chmod a+x /etc/init.d/otpsetup-deploy
Restart the instance to invoke the boot script.
** SETTING UP THE DEPLOYMENT IMAGE (SINGLE-DEPLOYMENT VERSION) **
Create an empty instance from which the image will be produced
Install Tomcat:
$ apt-get install tomcat6
Modify catalina.sh to provide sufficient memory to OTP, e.g. add the line:
JAVA_OPTS="$JAVA_OPTS -Xms4g -Xmx4g"
Clone the OTPSetup repo into a local directory (e.g. /var/otp/)
$ git clone git://github.com/openplans/OTPSetup.git
Copy the otpsetup-deploy script to /etc/init.d and make it executable
$ cp /var/otp/OTPSetup/init.d/otpsetup-deploy /etc/init.d
$ chmod a+x /etc/init.d/otpsetup-deploy
(If OTPSetup was installed to a directory other than /var/otp, modify otpsetup-deploy to reflect this)
Register the script as a bootup script using update-rc.d (on Debian-like systems) or equivalent
$ update-rc.d otpsetup-deploy defaults 95
(note: otpsetup-deploy must run *after* tomcat in the boot sequence)
Create an AMI from the instance in this state, and specify its ID in settings.py.
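Once the AMI ID is recorded in settings.py, the controller can launch single-deployment instances from it on demand. An illustrative boto sketch (the setting name and instance parameters are assumptions, not OTPSetup's actual code):

    from boto.ec2.connection import EC2Connection

    DEPLOYMENT_AMI_ID = "ami-xxxxxxxx"    # hypothetical setting recorded in settings.py

    ec2 = EC2Connection("AWS_ACCESS_KEY", "AWS_SECRET_KEY")   # placeholder credentials
    reservation = ec2.run_instances(DEPLOYMENT_AMI_ID,
                                    instance_type="m1.large",
                                    key_name="otp-deployer")
    print(reservation.instances[0].id)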