Analyzing an EMAIL Archive from gmane and visualizing the data
using the D3 JavaScript library
This is a set of tools that allow you to pull down an archive
of a gmane repository using the instructions at:
http://gmane.org/export.php
In order not to overwhelm the gmane.org server, I have put up
my own copy of the messages at:
http://mbox.dr-chuck.net/
This server will be faster and take a lot of load off the
gmane.org server.
You should install the SQLite browser to view and modify the databases from:
http://sqlitebrowser.org/
The first step is to spider the gmane repository. The base URL
is hard-coded in gmane.py and points to the Sakai
developer list. You can spider another repository by changing that
base URL. Make sure to delete the content.sqlite file if you
switch the base URL. The gmane.py file operates as a spider in
that it runs slowly, retrieving one mail message per second so
as to avoid getting throttled by gmane.org. It stores all of
its data in a database and can be interrupted and restarted
as often as needed. It may take many hours to pull all the data
down, so you may need to restart several times.
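The interrupt-and-restart behavior boils down to resuming from the
highest message id already in the database. Here is a minimal sketch
of that loop, assuming a simple Messages table; the table layout and
URL handling are illustrative, not gmane.py's actual code:

```python
import sqlite3
import time
import urllib.request

def next_message_id(conn):
    # First message number not yet spidered.  The Messages table
    # and its columns are an assumed schema, not necessarily the
    # one gmane.py actually uses.
    conn.execute('CREATE TABLE IF NOT EXISTS Messages '
                 '(id INTEGER PRIMARY KEY, email TEXT)')
    row = conn.execute('SELECT MAX(id) FROM Messages').fetchone()
    return (row[0] or 0) + 1

def spider(conn, baseurl, count):
    # Fetch `count` messages starting at the resume point,
    # sleeping one second between requests to stay polite.
    start = next_message_id(conn)
    for msgid in range(start, start + count):
        url = '%s%d/%d' % (baseurl, msgid, msgid + 1)
        text = urllib.request.urlopen(url).read().decode(errors='replace')
        if not text.startswith('From '):
            print('Does not start with From')
            break
        conn.execute('INSERT INTO Messages (id, email) VALUES (?, ?)',
                     (msgid, text))
        conn.commit()
        time.sleep(1)  # one message per second to avoid throttling
```

Because the resume point is recomputed from the database on every
run, killing the program and starting it again simply picks up where
it left off.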
To give you a head-start, I have put up 600MB of pre-spidered Sakai
email here:
https://www.py4e.com/data_space/content.sqlite.zip
If you download this, you can "catch up with the latest" by
running gmane.py.
Navigate to the folder where you extracted gmane.zip.
Note: Windows has difficulty displaying UTF-8 characters
in the console, so for each console window you open, you may need
to type the following command before running this code:
chcp 65001
http://stackoverflow.com/questions/388490/unicode-characters-in-windows-command-line-how
Here is a run of gmane.py getting the last five messages of the
sakai developer list:
Mac: python3 gmane.py
Win: gmane.py
How many messages:10
http://mbox.dr-chuck.net/sakai.devel/1/2 2662
[email protected] 2005-12-08T23:34:30-06:00 call for participation: developers documentation
http://mbox.dr-chuck.net/sakai.devel/2/3 2434
[email protected] 2005-12-09T00:58:01-05:00 report from the austin conference: sakai developers break into song
http://mbox.dr-chuck.net/sakai.devel/3/4 3055
[email protected] 2005-12-09T09:01:49-07:00 cas and sakai 1.5
http://mbox.dr-chuck.net/sakai.devel/4/5 11721
[email protected] 2005-12-09T09:43:12-05:00 re: lms/vle rants/comments
http://mbox.dr-chuck.net/sakai.devel/5/6 9443
[email protected] 2005-12-09T13:32:29+00:00 re: lms/vle rants/comments
Does not start with From
The program scans content.sqlite from 1 up to the first message number not
already spidered and starts spidering at that message. It continues spidering
until it has spidered the desired number of messages or it reaches a page
that does not appear to be a properly formatted message.
Sometimes gmane.org is missing a message. Perhaps administrators can delete messages
or perhaps they get lost - I don't know. If your spider stops and it seems to have hit
a missing message, go into the SQLite browser, add a row with the missing id, leave
all the other fields blank, and then restart gmane.py. This will unstick the
spidering process and allow it to continue. These empty messages will be ignored in the next
phase of the process.
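If you would rather not use a GUI, the same placeholder row can be
inserted from a few lines of Python. The Messages table and its id
column are assumptions about gmane.py's schema, so verify the actual
table layout in the SQLite browser first:

```python
import sqlite3

def unstick(conn, missing_id):
    # Insert a placeholder row for a message gmane.org no longer
    # serves, so the spider can move past it.  Table and column
    # names are assumed, not taken from gmane.py.
    conn.execute('INSERT OR IGNORE INTO Messages (id) VALUES (?)',
                 (missing_id,))
    conn.commit()

# Usage against the real database:
# conn = sqlite3.connect('content.sqlite')
# unstick(conn, 1234)   # 1234 = the id the spider is stuck on
```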
One nice thing is that once you have spidered all of the messages and have them in
content.sqlite, you can run gmane.py again to get new messages as they get sent to the
list. gmane.py will quickly scan to the end of the already-spidered pages and check
if there are new messages and then quickly retrieve those messages and add them
to content.sqlite.
The content.sqlite data is pretty raw, with an inefficient data model, and not compressed.
This is intentional as it allows you to look at content.sqlite to debug the process.
It would be a bad idea to run any queries against this database as they would be
slow.
The second step is to run the program gmodel.py. gmodel.py reads the raw
data from content.sqlite and produces a cleaned-up and well-modeled version of the
data in the file index.sqlite. The file index.sqlite will be much smaller (often 10X
smaller) than content.sqlite because it also compresses the header and body text.
Each time gmodel.py runs - it completely wipes out and re-builds index.sqlite, allowing
you to adjust its parameters and edit the mapping tables in content.sqlite to tweak the
data cleaning process.
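Much of the size difference comes from compressing the message text
before storing it. With Python's standard zlib module the idea looks
roughly like this (a sketch with an assumed table layout, not
gmodel.py's actual code):

```python
import sqlite3
import zlib

def store_compressed(conn, msgid, text):
    # Compress message text into a BLOB; compressed bodies are a
    # big part of why index.sqlite is so much smaller than
    # content.sqlite.  Table/column names are illustrative.
    conn.execute('CREATE TABLE IF NOT EXISTS Messages '
                 '(id INTEGER PRIMARY KEY, body BLOB)')
    conn.execute('INSERT OR REPLACE INTO Messages (id, body) VALUES (?, ?)',
                 (msgid, zlib.compress(text.encode())))

def load_body(conn, msgid):
    # Reverse step: fetch the BLOB and decompress it back to text.
    row = conn.execute('SELECT body FROM Messages WHERE id = ?',
                       (msgid,)).fetchone()
    return zlib.decompress(row[0]).decode()
```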
Running gmodel.py works as follows:
Mac: python3 gmodel.py
Win: gmodel.py
Loaded allsenders 1588 and mapping 28 dns mapping 1
1 2005-12-08T23:34:30-06:00 [email protected]
251 2005-12-22T10:03:20-08:00 [email protected]
501 2006-01-12T11:17:34-05:00 [email protected]
751 2006-01-24T11:13:28-08:00 [email protected]
...
The gmodel.py program does a number of data cleaning steps.
Domain names are truncated to two levels for .com, .org, .edu, and .net;
other domain names are truncated to three levels. So si.umich.edu becomes
umich.edu and caret.cam.ac.uk becomes cam.ac.uk. Email addresses are also
forced to lower case, and some of the @gmane.org addresses
are converted to the real address whenever there is a matching real email
address elsewhere in the message corpus.
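Those truncation rules are easy to express in a few lines. This
sketch captures the idea; gmodel.py's real implementation differs in
detail:

```python
def truncate_domain(addr):
    # Lower-case the address, then shorten its domain: keep two
    # levels for .com/.org/.edu/.net, three levels otherwise.
    # A sketch of the rule described above, not gmodel.py's code.
    addr = addr.lower()
    name, domain = addr.split('@')
    pieces = domain.split('.')
    if pieces[-1] in ('com', 'org', 'edu', 'net'):
        domain = '.'.join(pieces[-2:])
    else:
        domain = '.'.join(pieces[-3:])
    return name + '@' + domain
```

For example, an address at si.umich.edu maps to umich.edu, while one
at caret.cam.ac.uk maps to cam.ac.uk.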
If you look in the content.sqlite database, there are two tables that allow
you to map both domain names and individual email addresses that change over
the lifetime of the email list. For example, Steve Githens used the following
email addresses over the life of the Sakai developer list:
We can add two entries to the Mapping table
And so all the mail messages will be collected under one sender even if
they used several email addresses over the lifetime of the mailing list.
You can also make similar entries in the DNSMapping table if there are multiple
DNS names you want mapped to a single DNS name. In the Sakai data I add the following
mapping:
iupui.edu -> indiana.edu
So all the folks from the various Indiana University campuses are tracked together.
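You can add such a row with the SQLite browser, or from a few lines
of Python. The (old, new) column names here are an assumption about
the schema, so check the actual DNSMapping table in the SQLite
browser before relying on them:

```python
import sqlite3

def add_dns_mapping(conn, old, new):
    # Map one domain onto another so gmodel.py treats them as the
    # same organization.  Column names (old, new) are assumed.
    conn.execute('CREATE TABLE IF NOT EXISTS DNSMapping '
                 '(old TEXT UNIQUE, new TEXT)')
    conn.execute('INSERT OR REPLACE INTO DNSMapping (old, new) VALUES (?, ?)',
                 (old, new))
    conn.commit()

conn = sqlite3.connect(':memory:')  # use 'content.sqlite' for real data
add_dns_mapping(conn, 'iupui.edu', 'indiana.edu')
```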
You can re-run gmodel.py over and over as you look at the data, adding mappings
to make the data cleaner and cleaner. When you are done, you will have a nicely
indexed version of the email in index.sqlite. This is the file to use for data
analysis. With this file, data analysis will be really quick.
The first and simplest data analysis is to determine "who does the most" and "which
organization does the most". This is done using gbasic.py:
Mac: python3 gbasic.py
Win: gbasic.py
How many to dump? 5
Loaded messages= 51330 subjects= 25033 senders= 1584
Top 5 Email list participants
[email protected] 2657
[email protected] 1742
[email protected] 1591
[email protected] 1304
[email protected] 1184
Top 5 Email list organizations
gmail.com 7339
umich.edu 6243
uct.ac.za 2451
indiana.edu 2258
unicon.net 2055
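Conceptually these two reports are just frequency counts over the
sender addresses, one per person and one per domain. Here is a sketch
of the idea; gbasic.py itself reads its data from index.sqlite:

```python
from collections import Counter

def top_participants(senders, n=5):
    # Count messages per sender and per organization (the part
    # after the '@'), as the two reports above do conceptually.
    people = Counter(senders)
    orgs = Counter(s.split('@')[1] for s in senders)
    return people.most_common(n), orgs.most_common(n)
```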
You can look at the data in index.sqlite and if you find a problem, you
can update the Mapping table and DNSMapping table in content.sqlite and
re-run gmodel.py.
There is a simple visualization of the word frequency in the subject lines
in the file gword.py:
Mac: python3 gword.py
Win: gword.py
Range of counts: 33229 129
Output written to gword.js
This produces the file gword.js which you can visualize using the file
gword.htm.
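The trick is simply writing the counts into a .js file that the HTML
page can load with a script tag. The following sketch shows the idea;
the variable name and record shape that gword.htm actually expects
may differ:

```python
import json
from collections import Counter

def gword_js(subjects, top=100):
    # Count words across subject lines and emit them as a
    # JavaScript assignment a D3 page can <script>-include.
    # The 'gword' variable name and {text, size} shape are
    # illustrative, not necessarily what gword.htm expects.
    counts = Counter(w for s in subjects for w in s.lower().split())
    data = [{'text': w, 'size': c} for w, c in counts.most_common(top)]
    return 'gword = ' + json.dumps(data) + ';\n'

# with open('gword.js', 'w') as fh:
#     fh.write(gword_js(subjects))
```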
A second visualization is in gline.py. It visualizes email participation by
organizations over time.
Mac: python3 gline.py
Win: gline.py
Loaded messages= 51330 subjects= 25033 senders= 1584
Top 10 Oranizations
['gmail.com', 'umich.edu', 'uct.ac.za', 'indiana.edu', 'unicon.net', 'tfd.co.uk', 'berkeley.edu', 'longsight.com', 'stanford.edu', 'ox.ac.uk']
Output written to gline.js
Its output is written to gline.js which is visualized using gline.htm.
Some URLs for visualization ideas:
https://developers.google.com/chart/
https://developers.google.com/chart/interactive/docs/gallery/motionchart
https://code.google.com/apis/ajax/playground/?type=visualization#motion_chart_time_formats
https://developers.google.com/chart/interactive/docs/gallery/annotatedtimeline
http://bost.ocks.org/mike/uberdata/
http://mbostock.github.io/d3/talk/20111018/calendar.html
http://nltk.org/install.html
As always - comments welcome.
-- Dr. Chuck
Sun Sep 29 00:11:01 EDT 2013