<!DOCTYPE html>
<html>
<head>
<!-- Meta -->
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0">
<meta name="description" content="Making Voices Heard | A study by the Centre for Internet and Society, India, supported by Mozilla Corporation" />
<!-- Title + CSS + Favicon -->
<title>Making Voices Heard</title>
<link rel="stylesheet" type="text/css" href="css/semantic.min.css">
<link rel="stylesheet" type="text/css" href="css/style.css">
<link rel="shortcut icon" type="image/x-icon" href="img/favicon.ico" />
<!-- Font Awesome -->
<script src="https://kit.fontawesome.com/4c415b9185.js" crossorigin="anonymous"></script>
</head>
<body>
<!-- Header -->
<div>
<div class="ui fluid container banner">
<div class="banner-image" aria-label="Cats are shown as people using various devices including voice interfaces in shops and houses, with a central banner that shows the title ‘Making Voices Heard’."></div>
</div>
</div>
<!-- Top Navigation Bar -->
<div class="blue nav">
<div class="ui container">
<div class="nav-entries">
<a href="index.html">Home</a>     <a href="design-brief.html">Design Brief</a> <a href="policy-brief.html">Policy Brief</a> <a href="mapping-actors.html">Mapping Actors</a> <a href="index.html#case-studies">Case Studies</a> <a href="index.html#literature-surveys">Literature Surveys</a> <a href="index.html#resources">Resources</a> <span id="report"><a href="docs/MakingVoicesHeard_FullReport.pdf"><i class="fas fa-arrow-circle-down"></i> Get Full Report</a></span>
</div>
</div>
</div>
<!-- Title -->
<div class="grey">
<div class="ui container four column stackable grid">
<div class="one wide column empty">
</div>
<div class="fourteen wide column text">
<h2>Common Voice</h2>
</div>
<div class="one wide column empty">
</div>
<div class="one wide column empty">
</div>
<div class="nine wide column text">
<img src="img/CaseStudy_CommonVoice.jpg"width="100%" style="margin: 15px 0 1px 0;" alt="Various cats speaking into a microphone in different languages."/>
<h3 id="about">About</h3>
<p><strong> ‘... to make voice data freely and publicly available, and make sure the data represents the diversity of real people.’<sup class="superscript"><a href="#fn1">1</a></sup><a name="ref1"></a></strong></p>
<p>Common Voice (CV) is an open-source dataset of voice recordings in multiple languages that can be used to train speech-enabled applications. With over 13,905 validated hours recorded in 76 different languages as of July 2021,<sup class="superscript"><a href="#fn2">2</a></sup><a name="ref2"></a> CV strives to create and maintain the largest publicly available voice dataset of its kind. CV believes that the availability of large public voice datasets will help foster innovation and create a healthy market for machine-learning-based speech technologies. In May 2020, CV began data collection for a single-word target segment: recordings of single words in multiple languages (for example, yes and no) to be deployed for specific use cases or purposes. The exercise began with the digits zero through nine, as well as the words yes, no, hey, and Firefox.<sup class="superscript"><a href="#fn3">3</a></sup><a name="ref3"></a></p>
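<p>As an illustration (added to this case study write-up, and not part of CV’s own tooling), the sketch below shows how a downloaded and extracted Common Voice language archive might be inspected in Python. It assumes the tab-separated <em>validated.tsv</em> metadata file that releases ship with; the file path and column names are assumptions and may differ between dataset versions.</p>
<pre><code># Illustrative sketch only: inspect an extracted Common Voice language archive.
# Assumes a validated.tsv metadata file; paths and columns may differ by release.
import pandas as pd

clips = pd.read_csv("cv-corpus/en/validated.tsv", sep="\t")

# Each row points to an audio clip and the sentence that was read aloud.
print(clips[["path", "sentence"]].head())

# Contributors may optionally share demographic details such as gender.
print(clips["gender"].value_counts(dropna=False))
</code></pre>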
<h3 id="methodology">Methodology and process</h3>
<p>CV follows a community-driven model of creating an open-source, multilingual dataset of voice recordings that is openly accessible and usable. At the same time, it has been navigating various aspects of voice-data privacy and of accessibility for persons with disabilities, which involve complex design challenges and decisions. Some key features of this initiative include:</p>
<h4>Community-driven contribution </h4>
<p>“... Providing more and better data to everyone in the world who seeks to build and use voice technology.”<sup class="superscript"><a href="#fn4">4</a></sup><a name="ref4"></a></p>
<p>Although CV began by creating a voice dataset for English, as most of the team working on it was English-speaking, as of July 2021 there were over 76 languages on the platform. CV depends on a community of volunteers and individual users who contribute voice data in order to add new languages to its website and system. One way CV promotes localisation is by localising its website into the languages it wants to add. Before adding a new language, the community has to localise 85% of the website, so that when volunteers from the local language community visit the website, they can easily navigate it and do not need to rely on English. Then, when the language is active on the site, it is up to the community to submit 5,000 sentences that have been recorded in that language. This indicates two things to CV: a) that there is an active language community that can provide voice recordings, and b) that the barrier to including the language in CV is fairly low.</p>
<p>The recorded material is based on a sentence corpus that CV provides; everybody on the platform is presented with sentences that they can record and submit. These include content such as parliamentary transcripts, Wikipedia articles, and sentences that members of the community have submitted. Two other community members then check whether the audio matches the sentence. Though this is not a foolproof system, CV reports that it has a rather high accuracy rate. If people record something other than what is on the sentence card, the recording gets voted down very quickly. This system of community curation and regulation, therefore, adds a layer of control to the accuracy and quality of content.</p>
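<p>As a simplified illustration of this review step (not CV’s actual rules or code; the vote thresholds below are assumptions), the community’s up and down votes on a clip could be reduced to a status as follows:</p>
<pre><code># Simplified illustration of community validation through up/down votes.
# Not Common Voice's actual logic; the thresholds here are assumptions.
def clip_status(up_votes, down_votes, min_votes=2):
    """Return 'validated', 'rejected', or 'pending' for one recording."""
    if up_votes + down_votes &lt; min_votes:
        return "pending"      # not enough reviewers have listened yet
    if up_votes &gt; down_votes:
        return "validated"    # reviewers agree the audio matches the sentence
    return "rejected"         # the clip was voted down

print(clip_status(2, 0))  # validated
print(clip_status(0, 2))  # rejected
</code></pre>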
<p> “Amazon and Apple, by necessity, choose languages based on what makes sense in the market and makes the most profit.”<sup class="superscript"><a href="#fn5">5</a></sup><a name="ref5"></a></p>
<p>Key players in the voice-as-product market serve more widely spoken languages, such as English, French, and German, because they have a large user base and hence greater demand. Underrepresented languages, uncommon accents, and the voices of people from underrepresented or marginalised groups – such as those belonging to particular ethnic or gender identities – are left out. As a result, large populations remain unrepresented in the datasets used to train commercial voice technologies and products. This is the gap that CV is striving to narrow.</p>
<p>CV’s data collection differs from that of start-ups and companies like Google and Amazon; here, the sentences are self-recorded by people, and CV does not automatically detect the individual’s identity, location, or other data. It does not infer the contributor’s demographic based on their browsing data. Community members are also instructed not to identify people who are in the dataset.</p>
<h4>Design process and development</h4>
<p>Since it was envisioned as a community-driven experience, the CV team applied experience design practices when conceptualising this database.<sup class="superscript"><a href="#fn6">6</a></sup><a name="ref6"></a> As with many design problems, the project began with the identification of a need: large quantities of publicly available voice data that could be used to train speech-to-text engines. In the design process that followed, the team ideated on creating an open-source voice dataset over the course of several design thinking exercises with Mozilla community members.<sup class="superscript"><a href="#fn7">7</a></sup><a name="ref7"></a>
This resulted in paper prototypes of varying design concepts. CV then gathered in-person feedback on these prototypes to identify which design concepts to proceed with. The project team’s initial assumption was that people would need an external incentive to contribute voice data to the project. However, the team’s insight from the research was that most people were open to the idea of voice donation. They also inferred that people wanted to learn more about the need for such voice data collection. Hence, they designed a platform whose prominent feature was collecting voice data.<sup class="superscript"><a href="#fn8">8</a></sup><a name="ref8"></a></p>
<p> They developed an interactive model where people could ‘teach’ a robot to understand human speech by reading sentences to it.<sup class="superscript"><a href="#fn9">9</a></sup><a name="ref9"></a> This robot has become part of the CV website as a mascot of sorts, even though the interactive teaching model is no longer operational. The alpha version of the CV platform was built “to tell the story of voice data and how it relates to the need for diversity and inclusivity in speech technology”.<sup class="superscript"><a href="#fn10">10</a></sup><a name="ref10"></a>
The CV team collected community feedback through tools such as Discourse<sup class="superscript"><a href="#fn11">11</a></sup><a name="ref11"></a>
and GitHub.<sup class="superscript"><a href="#fn12">12</a></sup><a name="ref12"></a> They developed further iterations after collecting and analysing this feedback. The Open Innovation team at Mozilla shared with us that they emphasise prototyping and iterating. They carried out a user experience (UX) audit of the working prototype and considered community feedback from GitHub and Discourse. Based on this assessment, they made refinements to the platform.</p>
<p>Following the release of the working version, the CV team conducted another UX audit. They took into account a combination of UX heuristics, competitor evaluation (of platforms such as Headspace),<sup class="superscript"><a href="#fn13">13</a></sup><a name="ref13"></a> and community feedback. They looked at community feedback on GitHub and Discourse and spoke to the engineers who built CV. Since 2017, the focus has been on improving the platform and primarily enhancing the experience of contributing voice data. Presently, the team is looking at the bigger picture by focusing on fine-tuning the contributors’ experience based on the data and research accumulated.</p>
<h3 id="languages">Enabling multi-language contributions</h3>
<p>Following an iterative design process allowed CV to ask questions, derive insights, and improve its platform. The team observed that the data collected needed to be more diverse in terms of gender, accent, dialect, and language. They held an experience workshop to ideate on how to support multiple languages and enable better-quality voice data contributions.<sup class="superscript"><a href="#fn14">14</a></sup><a name="ref14"></a> They realised that the platform needed to provide people with a way to contribute in their desired language(s). They also added dedicated language pages and community dashboards. The team also made further enhancements, such as a new profile login experience and a new contribution experience, to increase the quality and quantity of voice contributions.<sup class="superscript"><a href="#fn15">15</a></sup><a name="ref15"></a></p>
<p>Over the course of our interviews, we learned that CV had been designed to be a global project from the beginning. During the initial stages of development, the team ran a design sprint with a paper prototype on the streets of Taipei. It soon became clear that the platform could not be limited to English. They collected feedback from people who did not speak English as a first language but wanted to contribute to the platform. It was evident from the feedback that CV did not need to design for specific languages, but rather for people to opt in and contribute in a language of their choice. The CV interface is basic, but it features a simple mechanism to choose and add a language. Through this research, the team also discovered that there is an audience interested in language preservation that wanted to add languages to CV. The team is currently looking at evolving CV not just for major languages but also for lesser-known or less visible languages.</p>
<h3 id="access">Accessibility and access</h3>
<p>The team analysed the CV website with Lighthouse,<sup class="superscript"><a href="#fn16">16</a></sup><a name="ref16"></a> an open-source, automated tool that audits web pages for performance, accessibility, and search engine optimisation (SEO). Their Lighthouse score indicated that they did not perform well in the area of colour contrast, and they are now working on ensuring that the website meets all accessibility standards. The CV team emphasised the importance of having a high-quality and accessible dataset. The files for English voice data are large and difficult to download, so they are working towards improving access. They are also working on creating a web app version of the website for use on devices with limited bandwidth, so that contributors are able to use it both online and offline.</p>
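<p>For context on what the colour-contrast audit measures, the sketch below implements the WCAG 2.x contrast-ratio formula that underlies such checks (added here as an illustration; the colour values are arbitrary examples, not taken from the CV website):</p>
<pre><code># WCAG 2.x contrast ratio, the criterion behind colour-contrast audits.
def _linear(channel):
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.x definition)."""
    c = channel / 255.0
    return c / 12.92 if c &lt;= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(colour_a, colour_b):
    lighter = max(relative_luminance(colour_a), relative_luminance(colour_b))
    darker = min(relative_luminance(colour_a), relative_luminance(colour_b))
    return (lighter + 0.05) / (darker + 0.05)

# WCAG AA expects at least 4.5:1 for normal-sized text.
print(round(contrast_ratio((255, 255, 255), (118, 118, 118)), 2))  # about 4.54
</code></pre>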
<h3 id="privacy">Privacy and data collection</h3>
<p>“We don’t believe in taking information that we have not specifically been given regardless of what products are available to us.”<sup class="superscript"><a href="#fn17">17</a></sup><a name="ref17"></a></p>
<p>With a large number of people providing voice data, there is a need to protect privacy, especially as voices and accents are easily identifiable. Understanding the vulnerability of voice data, the CV team works closely with their trust and legal team to ensure the privacy of their contributors. They also work closely with the technical, legal, and privacy teams to ensure that the website – and any new additions – complies with their privacy policies. Mozilla also has a data steward programme, run by a group of experts in the organisation who have volunteered to be consultants on data collection and best practices in data management and protection.</p>
<p>The CV platform itself operates on two primary principles. The first is de-identification to the highest degree possible. This requires that for any language being recorded, there should be recordings by at least five people, so that it becomes harder to identify them. CV also tries to remove identifiers such as sex and age in smaller datasets. The second principle is consent – CV does not associate voice with any client-facing data unless the contributor consents to it. The dashboard helps contributors control who can see their profile; they can hide their visibility to others on CV. The team has created the website to be as malleable as possible when it comes to contributors’ interactions with it. Contributors do not necessarily need to have a profile to contribute voice data. CV’s terms and conditions state that they collect data for research and that they collect personal (voice) information only when people contribute their voices.</p>
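<p>As a simplified sketch of the de-identification principle described above (an illustration under assumed file and column names, not CV’s actual release pipeline), demographic fields could be withheld for a language with fewer than five distinct contributors:</p>
<pre><code># Illustration of the de-identification principle: withhold demographic
# identifiers when a language has too few distinct contributors.
# Not CV's actual pipeline; the file path and column names are assumptions.
import pandas as pd

MIN_SPEAKERS = 5  # threshold mentioned in the case study

clips = pd.read_csv("cv-corpus/xx/validated.tsv", sep="\t")

if clips["client_id"].nunique() &lt; MIN_SPEAKERS:
    # Too few contributors: drop identifiers such as age and gender.
    clips = clips.drop(columns=["age", "gender"], errors="ignore")

clips.to_csv("validated_deidentified.tsv", sep="\t", index=False)
</code></pre>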
<h3 id="design">Design decisions</h3>
<p> Currently, the CV website is not voice-activated but based on ‘classic’ touch-/point-and-click interactions. The CV team feels that it is important to enable some sort of voice detection in the website, as this will allow for the recordings to be more succinct and accurate. The team has also been thinking about the future of the website: what would it look like when people want to donate their own voices on CV? How can CV use the data they collect to tune voice recognition on the platform itself? If they enable this, they would have to rethink the entire user experience, including navigation, actions, and initiators for contributions.</p>
<p>The overall objective of CV’s interface design is to simplify the process by which people can contribute; it is meant to be intuitive. However, the experience of contributing may not be the same for everyone, and so this objective is difficult to achieve. The team observed that there will soon be a homogenisation of voice interfaces, as has been the case with websites, and note that this is already underway with wake words and voice assistants. An important question to ask here is whether CV data and open-source data can make this homogenisation look different. Can they allow people to tinker and play outside of bigger entities and challenge the idea of what voice interfaces should look like?</p>
<p> The design team notes that it is tough to design for responsiveness. Their challenge has been to fit large quantities of information into a small device/screen, and this is exacerbated by the localisation of CV in various languages. It is difficult to design an interface where one cannot control the way the text appears across browsers. When they cannot read a language, it is difficult to troubleshoot. While this is an ongoing challenge, it is a good problem to have, as it shows that CV is growing. They affirm that taking on community feedback is the most critical and rewarding aspect of this work.</p>
<h3 id="challenges">Challenges</h3>
<p>A key challenge in making CV easier for contributors and the community to access is the need for internet connectivity. In addition, material for recording comes from sources such as parliamentary transcripts and Wikipedia, which might not reflect the actual reading and speaking styles that people use in their day-to-day lives. As these sources use more formal writing styles, models trained on the data are also skewed towards formal language as opposed to the casual way people converse in real life. At times, women and others from underrepresented communities find it less than welcoming to engage with projects in the open-source community – including CV – because it mostly consists of men. This means that the dataset comprises mostly male voices, and people of diverse gender identities and communities are not adequately represented.</p>
<h3 id="future">Future of Common Voice</h3>
<p>“We are only seeing an increased interest in Common Voice.”<sup class="superscript"><a href="#fn18">18</a></sup><a name="ref18"></a></p>
<p>CV saw a 20% growth in recorded hours during October–December 2020. Additionally, there has been a significant increase in interest in CV, from both industry and communities. In recent years there has been an increase in community-driven contributions, especially from people involved in language preservation and civic duty systems. These individual and community-based initiatives help add more languages to the CV system, which might not have been possible with a centralised approach. More recently, CV received two investments, of $1.5 million from Nvidia and $3.4 million from other investors, to continue its work with native African languages.</p>
<p>The project looks at continuing research and data collection with the help of government funding. Given the scale and amount of funding needed for such projects, including the requirement of infrastructure and trained human resources, the government is the primary source of funding. With the new funding from the Ministry of Electronics and Information Technology, the researchers at IITM have started a project to make English lecture videos available in Indian languages. The objective of this project is to make lectures in different domains, like humanities, healthcare, etc., freely accessible to students in their languages. This is a small-scale project, and Indic TTS hopes to expand it to more languages and subjects. </p>
<br />
<i> Disclaimer: This is an independent case study conducted as a part of the Making Voices Heard Project, supported by the Mozilla Corporation. The researchers have not received any external remuneration as a part of this case study, and claim no conflict of interest.</i>
</div>
<div class="one wide column empty">
</div>
<div class="five wide column meta">
<p><span id="grey">Research and Writing by</span> <br />Shweta Mohandas <span id="grey">and</span> Saumyaa Naidu
<br />
<span id="grey">Review and Editing by</span> <br />Puthiya Purayil Sneha <span id="grey">and</span> Torsha Sarkar<br />
<span id="grey">Research Inputs by</span> <br />Sumandro Chattapadhyay<br />
<br />
<a href="docs/MozVoice_CaseStudies_CV_02.pdf"><i class="fas fa-arrow-circle-down" style="color: black;" ></i> Download Common Voice Case study</a></p>
<br />
<hr />
<br />
<p><span style="line-height: 3em;">CONTENTS</span></p>
<p><a href="#about"><strong>About</strong></a></p>
<p><a href="#methodology"><strong>Methodology and Process</strong></a></p>
<p><a href="#languages"><strong>Enabling multi-language contributions</strong></a></p>
<p><a href="#access"><strong>Accessibility and access</strong></a></p>
<p><a href="#privacy"><strong>Privacy and data collection </strong></a></p>
<p><a href="#design"><strong>Design decissions</strong></a></p>
<p><a href="#challenges"><strong>Challenges</strong></a></p>
<p><a href="#future"><strong>Future of Common Voice</strong></a></p>
</div>
<div class="one wide column empty">
</div>
<div class="nine wide column text">
<div class="ten wide column content">
</div>
<div class="ten wide column content">
<br />
<h3>Notes</h3>
<table class="footnote">
<tr>
<td class="number">1</td>
<td class="reference"><a name="fn1"></a> “Common Voice by Mozilla,” Common Voice, accessed January 4, 2022, <a href="https://commonvoice.mozilla.org/en/about"target="_blank"> https://commonvoice.mozilla.org/en/about.</a> <span class="internal-nav"><a href="#ref1">↑</a></span></td>
</tr>
<tr>
<td class="number">2</td>
<td class="reference"><a name="fn2"></a> “Common Voice by Mozilla.” Common Voice, accessed January 4, 2022, <a href="https://commonvoice.mozilla.org/en/datasets"target="_blank"> https://commonvoice.mozilla.org/en/datasets.</a> <span class="internal-nav"><a href="#ref2">↑</a></span></td>
</tr>
<tr>
<td class="number">3</td>
<td class="reference"><a name="fn3"></a>Branson, M., “Help Create Common Voice’s First Target Segment,” Discourse, 12 May 2020a, Accessed 3 November, 2021, <a href="https://discourse.mozilla.org/t/help-create-common-voices-first-target-segment/59587"target="_blank">https://discourse.mozilla.org/t/help-create-common-voices-first-target-segment/59587</a> <span class="internal-nav"><a href="#ref3">↑</a></span></td>
</tr>
<tr>
<td class="number">4</td>
<td class="reference"><a name="fn4"></a>Roter, G., “Sharing Our Common Voices – Mozilla Releases the Largest to-date Public Domain Transcribed Voice Dataset,” <em>The Mozilla Blog, </em>9 February 2021, accessed 3 November 2021, <a href="https://blog.mozilla.org/en/mozilla/news/sharing-our-common-voices-mozilla-releases-the-largest-to-date-public-domain-transcribed-voice-dataset/"target="_blank">https://blog.mozilla.org/en/mozilla/news/sharing-our-common-voices-mozilla-releases-the-largest-to-date-public-domain-transcribed-voice-dataset/</a> <span class="internal-nav"><a href="#ref4">↑</a></span></td>
</tr>
<tr>
<td class="number">5</td>
<td class="reference"><a name="fn5"></a> Interview, Common Voice, online, Bangalore, 22 October 2020 <span class="internal-nav"><a href="#ref5">↑</a></span></td>
</tr>
<tr>
<td class="number">6</td>
<td class="reference"><a name="fn6"></a>Branson, M., “We’re Intentionally Designing Open Experiences, Here’s Why,” <em>Medium</em>, 10 September 2018, accessed 3 November 2021, <a href="https://medium.com/mozilla-open-innovation/were-intentionally-designing-open-experiences-here-s-why-c6ae9730de54"target="_blank">https://medium.com/mozilla-open-innovation/were-intentionally-designing-open-experiences-here-s-why-c6ae9730de54</a> <span class="internal-nav"><a href="#ref6">↑</a></span></td>
</tr>
<tr>
<td class="number">7</td>
<td class="reference"><a name="fn7"></a> Branson, “We’re Intentionally Designing Open Experiences.” <span class="internal-nav"><a href="#ref7">↑</a></span></td>
</tr>
<tr>
<td class="number">8</td>
<td class="reference"><a name="fn8"></a> Branson, “We’re Intentionally Designing Open Experiences.” <span class="internal-nav"><a href="#ref8">↑</a></span></td>
</tr>
<tr>
<td class="number">9</td>
<td class="reference"><a name="fn9"></a> Branson, “We’re Intentionally Designing Open Experiences.” <span class="internal-nav"><a href="#ref9">↑</a></span></td>
</tr>
<tr>
<td class="number">10</td>
<td class="reference"><a name="fn10"></a> Branson, “We’re Intentionally Designing Open Experiences.” <span class="internal-nav"><a href="#ref10">↑</a></span></td>
</tr>
<tr>
<td class="number">11</td>
<td class="reference"><a name="fn11"></a>Branson, M., “Civilized Discussion,” Discourse, accessed November 1, 2021, <a href=" https://www.discourse.org/."target="_blank"> https://www.discourse.org/.</a> <span class="internal-nav"><a href="#ref11">↑</a></span></td>
</tr>
<tr>
<td class="number">12</td>
<td class="reference"><a name="fn12"></a>“Where the World Builds Software,” <em>GitHub</em>, accessed November 1, 2021, <a href="https://github.com/."target="_blank"> https://github.com/.</a> <span class="internal-nav"><a href="#ref12">↑</a></span></td>
</tr>
<tr>
<td class="number">13</td>
<td class="reference"><a name="fn13"></a> “Meditation and Sleep Made Simple,” <em>Headspace</em>, (n.d.), accessed 3 November 2021, <a href="https://www.headspace.com/."target="_blank">https://www.headspace.com/.</a> <span class="internal-nav"><a href="#ref13">↑</a></span></td>
</tr>
<tr>
<td class="number">14</td>
<td class="reference"><a name="fn14"></a>Branson, M., “Prototyping with Intention – Mozilla Open Innovation,” <em>Medium</em>, 8 May 2020, accessed 3 November 2021, <a href="https://medium.com/mozilla-open-innovation/prototyping-with-intention-33d15fb147c2" target="_blank">https://medium.com/mozilla-open-innovation/prototyping-with-intention-33d15fb147c2 </a> <span class="internal-nav"><a href="#ref14">↑</a></span></td>
</tr>
<tr>
<td class="number">15</td>
<td class="reference"><a name="fn15"></a> Branson, “Prototyping with Intention.” <span class="internal-nav"><a href="#ref15">↑</a></span></td>
</tr>
<tr>
<td class="number">16</td>
<td class="reference"><a name="fn16"></a>“Lighthouse | Tools for Web Developers,” <em>Google Developers</em>, 2020, accessed 3 November 2021, <a href="https://developers.google.com/web/tools/lighthouse "target="_blank">https://developers.google.com/web/tools/lighthouse</a> <span class="internal-nav"><a href="#ref16">↑</a></span></td>
</tr>
<tr>
<td class="number">17</td>
<td class="reference"><a name="fn17"></a>Interview, Common Voice, online, Bangalore, 25 March 2020. <span class="internal-nav"><a href="#ref17">↑</a></span></td>
</tr>
<tr>
<td class="number">18</td>
<td class="reference"><a name="fn18"></a> Interview, Common Voice, online, Bangalore, 22 October 2020. <span class="internal-nav"><a href="#ref18">↑</a></span></td>
</tr>
</table>
</div>
</div>
<div class="six wide column empty">
</div>
</div>
</div>
</div>
<!-- Footer -->
<div class="footer">
<div class="ui container four column stackable grid">
<div class="one wide column empty">
</div>
<div class="five wide column">
<h3>About the Study</h3>
<p>We believe that voice interfaces have the potential to democratise the use of the internet by addressing limitations related to reading and writing on digital text-only platforms and devices. This report examines the current landscape of voice interfaces in India, with a focus on concerns related to privacy and data protection, linguistic barriers, and accessibility for persons with disabilities (PwDs). This project was undertaken with support from the Mozilla Corporation.</p>
</div>
<div class="five wide column">
<h3>Research Team</h3>
<p><em>Research</em> Shweta Mohandas, Saumyaa Naidu, Deepika Nandagudi Srinivasa, Divya Pinheiro, Sweta Bisht</p>
<p><em>Conceptualisation, Planning, and Research Inputs</em> Sumandro Chattapadhyay, Puthiya Purayil Sneha</p>
<p><em>Illustration</em> Kruthika NS (Instagram @theworkplacedoodler)</p>
<p><em>Website Design</em> Saumyaa Naidu</p>
<p><em>Website Development</em> Sumandro Chattapadhyay, Pranav M Bidare</p>
<p><em>Review and Editing</em> Puthiya Purayil Sneha, Divyank Katira, Pranav M Bidare, Torsha Sarkar, Pallavi Bedi, Divya Pinheiro</p>
<p><em>Copy Editing</em> The Clean Copy</p>
</div>
<div class="four wide column">
<h3>Copyright and Credits</h3>
<p>Copyright: <a href="http://cis-india.org/" target="_blank">CIS, India</a>, 2021<br />License: <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank">CC BY 4.0 International</a></p>
<p>Built using <a href="https://semantic-ui.com/" target="_blank">Semantic UI</a><br/><a href="https://fonts.google.com/specimen/Barlow" target="_blank">Barlow</a> and <a href="https://fonts.google.com/specimen/Open+Sans" target="_blank">Open Sans</a> by <a href="https://fonts.google.com/" target="_blank">Google Fonts</a><br/>Social media icons by <a href="https://fontawesome.com/" target="_blank">Font Awesome</a><br/>Hosted on <a href="https://github.com/cis-india/mozvoice" target="_blank">GitHub</a></p>
</div>
<div class="one wide column empty">
</div>
<div class="sixteen wide column">
<div style="float: center; clear: both;">
<a href="https://cis-india.org/" target="_blank" style="border-bottom: 0px solid"><img src="img/logo.png" alt="The Centre for Internet and Society, India" class="logo" /></a>
</div>
<div class="icons" style="float: center; clear: both;">
<a href="https://www.instagram.com/cis.india/" target="_blank"><i class="fab fa-instagram fa-lg"></i></a> <a href="https://twitter.com/cis_india" target="_blank"><i class="fab fa-twitter fa-lg"></i></a> <a href="https://www.youtube.com/channel/UC0SLNXQo9XQGUE7Enujr9Ng" target="_blank"><i class="fab fa-youtube fa-lg"></i></a></p>
</div>
</div>
</div>
</div>
</body>
</html>