[WIP] Dataset Overview Dashboard #1

Open · wants to merge 8 commits into base: main
3 changes: 3 additions & 0 deletions .gitignore
@@ -127,3 +127,6 @@ dmypy.json

# Pyre type checker
.pyre/

# data
data/*.csv
15 changes: 15 additions & 0 deletions Dockerfile
@@ -0,0 +1,15 @@
FROM python:3.8.5

EXPOSE 8501

WORKDIR /app
COPY requirements.txt .

RUN pip install --upgrade pip
RUN pip install -r requirements.txt

COPY ./data /data
COPY ./views /views

ENTRYPOINT ["streamlit", "run"]
CMD ["/views/dataset_overview.py"]
27 changes: 26 additions & 1 deletion README.md
@@ -1 +1,26 @@
# abusive-clauses-dashboard
PAC - Polish Abusive Clauses Dataset
============
''I have read and agree to the terms and conditions'' is one of the biggest lies on the Internet. Consumers rarely read the contracts they are required to accept. We conclude agreements over the Internet daily, but do we know the content of these agreements? Do we check for potentially unfair statements? On the Internet, we probably skip most Terms and Conditions. However, it is essential to remember that we conclude many more contracts: imagine that we want to buy a house or a car, send our kids to a nursery, or open a bank account. In all these situations we need to conclude a contract, and there is a high probability that we will not read the entire agreement with proper understanding. European consumer law aims to prevent businesses from using so-called ''unfair contractual terms'' in contracts that they draft unilaterally and require consumers to accept. Here, an ''unfair contractual term'' is the equivalent of an abusive clause, and can be defined as a clause that is unilaterally imposed by one of the contract's parties, unequally affecting the other or creating an imbalance between the duties and rights of the parties.

At the EU level and at national levels such as Poland's, agencies cannot check all possible agreements by hand. Hence, we took the first step towards evaluating the possibility of accelerating this process. We created a dataset and machine learning models for partially automating the detection of potentially abusive clauses. Consumer protection organizations and agencies can use these resources to make their work more effective and efficient. Moreover, consumers can automatically analyze contracts and understand what they agree upon.
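The statistic the dashboard is built around, the per-class balance of each split, can be inspected directly with pandas. A minimal sketch, assuming the split CSVs follow the `text`/`class` schema the dashboard reads (the sample rows below are hypothetical, not taken from the dataset):

```python
import pandas as pd

# Hypothetical sample rows mimicking the dataset schema: each split CSV
# has a 'text' column and a 'class' column with one of the two labels.
train = pd.DataFrame({
    "text": [
        "The provider may change fees at any time without notice.",
        "The consumer may withdraw from the contract within 14 days.",
        "Disputes shall be settled exclusively by the seller's local court.",
    ],
    "class": [
        "KLAUZULA_ABUZYWNA",
        "BEZPIECZNE_POSTANOWIENIE_UMOWNE",
        "KLAUZULA_ABUZYWNA",
    ],
})

# Per-class counts: the same statistic the dashboard plots for each split.
class_counts = train["class"].value_counts()
print(class_counts.to_dict())
# → {'KLAUZULA_ABUZYWNA': 2, 'BEZPIECZNE_POSTANOWIENIE_UMOWNE': 1}
```

With the real data, `train` would instead come from `pd.read_csv` on one of the split files.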

How to run
------------
Using Docker:
```bash
docker build . -t pac-dashboard
docker run -p 8501:8501 pac-dashboard
```
Using Python:
```bash
pip install -r requirements.txt
streamlit run views/dataset_overview.py
```

References
------------

License
------------
MIT licensed
Copyright (C) 2022: [CLARIN-PL](https://github.com/CLARIN-PL)
Empty file added data/.keep
Empty file.
6 changes: 6 additions & 0 deletions requirements.txt
@@ -0,0 +1,6 @@
datasets==2.1.0
pandas==1.4.2
plotly==5.7.0
unidecode==1.3.4
streamlit==1.8.1
scipy==1.8.0
89 changes: 89 additions & 0 deletions views/dataset_overview.py
@@ -0,0 +1,89 @@
import re
from typing import Dict  # Dict needed on Python 3.8 (no built-in generic annotations)

import pandas as pd
import plotly.figure_factory as ff
import plotly.graph_objects as go
import streamlit as st
from unidecode import unidecode


SPLIT_NAMES = ['train', 'dev', 'test']

# --- Functions ---

def flatten_list(main_list):
    return [item for sublist in main_list for item in sublist]


def count_num_of_characters(text):
    return len(re.sub(r'[^a-zA-Z]', '', unidecode(text)))


def count_num_of_words(text):
    # split() without arguments drops empty strings, unlike split(' ')
    return len(re.sub(r'[^a-zA-Z ]', '', unidecode(text)).split())

@st.cache
def load_data() -> Dict[str, pd.DataFrame]:
    return {name: pd.read_csv(f'/data/{name}.csv') for name in SPLIT_NAMES}

# --- / Functions ----
# --- PAGE CONTENT ---
DATA_DICT = load_data()


st.title("PAC - Polish Abusive Clauses Dataset")

st.header("Dataset Description")
st.write(
    """
    At the EU level and at national levels such as Poland's, agencies cannot check all possible agreements by hand.
    Hence, we took the first step towards evaluating the possibility of accelerating this process.
    We created a dataset and machine learning models for partially automating the detection of potentially abusive clauses.
    Consumer protection organizations and agencies can use these resources to make their work more effective and efficient.
    Moreover, consumers can automatically analyze contracts and understand what they agree upon.
    """
)
st.write("Paper: [arxiv](https://arxiv.org/)")
st.write("Github: [github](https://github.com/CLARIN-PL/abusive-clauses-dashboard)")

st.header("Dataset Statistics")

st.write(f'Train Samples: {len(DATA_DICT["train"])}')
st.write(f'Val Samples: {len(DATA_DICT["dev"])}')
st.write(f'Test Samples: {len(DATA_DICT["test"])}')
st.write(f"Total: {sum(len(df) for df in DATA_DICT.values())}")

st.subheader("Class distribution per data split")
df_class_dist = (
    pd.DataFrame([df['class'].value_counts().rename(k) for k, df in DATA_DICT.items()])
    .reset_index()
    .rename({'index': 'split_name'}, axis=1)
)
barchart_class_dist = go.Figure(data=[
    go.Bar(name='BEZPIECZNE_POSTANOWIENIE_UMOWNE', x=SPLIT_NAMES, y=df_class_dist['BEZPIECZNE_POSTANOWIENIE_UMOWNE'].values),
    go.Bar(name='KLAUZULA_ABUZYWNA', x=SPLIT_NAMES, y=df_class_dist['KLAUZULA_ABUZYWNA'].values),
])
barchart_class_dist.update_layout(
    barmode='group',
    title_text='Barchart - class distribution',
    xaxis_title='Split name',
    yaxis_title='Number of data points',
)
st.plotly_chart(barchart_class_dist, use_container_width=True)

st.subheader("Number of words per observation")
hist_data_num_words = [df['text'].apply(count_num_of_words).values for df in DATA_DICT.values()]
fig_num_words = ff.create_distplot(hist_data_num_words, SPLIT_NAMES, show_rug=False, bin_size=1)
fig_num_words.update_traces(nbinsx=100, autobinx=True, selector={'type':'histogram'})
fig_num_words.update_layout(title_text='Histogram - number of words per observation', xaxis_title='Number of words')
st.plotly_chart(fig_num_words, use_container_width=True)

st.subheader("Character count per observation")
hist_data_num_chars = [df['text'].apply(count_num_of_characters).values for df in DATA_DICT.values()]
fig_num_chars = ff.create_distplot(hist_data_num_chars, SPLIT_NAMES, show_rug=False, bin_size=1)
fig_num_chars.update_traces(nbinsx=100, autobinx=True, selector={'type':'histogram'})
fig_num_chars.update_layout(title_text='Histogram - number of characters per observation', xaxis_title='Number of characters')
st.plotly_chart(fig_num_chars, use_container_width=True)

st.subheader("Top 20 common words per data split")
for i, col in enumerate(st.columns(3)):
    with col:
        st.caption(f"Split Name: {SPLIT_NAMES[i].upper()}")
        flat_word_list = flatten_list(DATA_DICT[SPLIT_NAMES[i]].text.apply(lambda x: x.lower().split(' ')).to_list())
        top20_words = 100 * pd.Series(flat_word_list, name='Occurrence %').value_counts(normalize=True)[:20]
        st.dataframe(top20_words)