Our Heritage, Our Stories (OHOS) is a Towards a National Collection (TaNC) project linking together community-generated digital content (CGDC) and displaying this linked content to the end user. The work here is “The Observatory”, a web app designed to visualise this linked CGDC; it forms part of The National Archives' (TNA) contribution to the OHOS project.
This project is currently at Minimum Viable Product stage.
NOTE: The Vue app can be run with or without the database/Kong running in the background. Without the database it will simply not show some elements, as there is no data available; the app may also be less stable and can crash when some elements are used without the database running.
- VSCode
- Docker
- Web Browser
- Insomnia
- HTTPie
- Kong config file
- Vue3 app
- Docker Compose file, to generate:
- Graph database served by Blazegraph
- Graph database served by GraphDB
- Miiify
- Kong API
- Open a terminal in the ohos-observatory directory, run 'npm run dev', then go to 'localhost:3000' in your browser.
- Alternatively, open a terminal in the ohos-observatory directory and run 'docker build -t ohos_observatory_frontend .'
- Run this image with the command 'docker run -d -p 3000:3000 ohos_observatory_frontend'.
- Go to 'localhost:3000' in your browser.
1. Clone the repo.
2. Open a terminal in the ohos-observatory directory, run 'docker build -t ohos_observatory_frontend .'
3. Open a terminal in the flask_responses_to_iiif_utility directory, run 'docker build -t ohos-iiif-manifest .'
4. Open a terminal in the GO_Api directory, run 'docker build -t ohos_go_api .'
5. Run dockercompose.yaml with the command 'docker-compose up --remove-orphans' to generate a local Blazegraph, Miiify, Flask-based IIIF generator and GraphDB server, the Vue front-end, and a Kong API.
6. Transfer kong_config.yml to the Kong API.
   a. This can be done either as per the Kong documentation or via Insomnia. It requires sending a POST request to 'http://localhost:8001/config' with the contents of kong_config.yml and the header 'Content-Type: text/yaml'. One way to do this is:

      curl -H 'Content-Type: text/yaml' --data-binary @kong_config.yml http://localhost:8001/config

      You should get a JSON response containing details about the various routes that have been created.
   b. To test that this has worked, you should be able to contact Miiify via http://localhost:8000/annotation/hello. Either run 'http http://localhost:8000/annotation/hello' (requires HTTPie), run 'curl http://localhost:8000/annotation/hello', or go directly through Insomnia. You should receive the response 'Welcome to miiify!'.
7. Data can then be sent to Blazegraph by sending a POST to http://localhost:8000/graph-full-access?
8. Go to 'localhost:3000' in your browser.
The GO API is currently at a fairly early stage of development and is not yet used in the main Observatory. To check whether it works, once you have completed step 8 above, run the following command (requires HTTPie):

http http://localhost:8000/graph?query=SELECT%20%20%3Fp%20%3Fo%20%3Fq%20%3Fr%20where%20%20%7B%3Chttp%3A%2F%2Fdbpedia.org%2Fresource%2FBrixworth%3E%20%3Fp%20%3Fo%20.%20%3Fq%20%3Fr%20%3Chttp%3A%2F%2Fdbpedia.org%2Fresource%2FBrixworth%3E%20.%7D%20

The format for requests to the GO API is currently http://localhost:8000/graph?query= followed by a SPARQL query that has been URL-encoded. This is almost guaranteed to change during development.
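Because the query must be URL-encoded, it can be easier to script the encoding than to do it by hand. Below is a minimal sketch; the query itself is illustrative, and the endpoint is assumed to be the one described above with the stack from the "database active" instructions running.

```shell
# Percent-encode a SPARQL query for the GO API's ?query= parameter.
QUERY='SELECT * {?s ?p ?o} LIMIT 10'
ENCODED=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1]))' "$QUERY")
echo "$ENCODED"

# Alternatively, curl can do the encoding itself (requires the stack to be up):
# curl -G --data-urlencode "query=$QUERY" -H 'Accept: application/json' http://localhost:8000/graph
```

The `curl -G --data-urlencode` form appends the encoded query to the URL as a GET parameter, which matches the request format above.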
The database can be accessed directly and queried using SPARQL. To do so, launch the app as per the “database active” instructions. Once it is active, SPARQL queries can be passed to it directly by querying the URL below, adding the SPARQL query in plain text after the '?':

http://localhost:8000/graph?

Headers are required to get a response. The suggested default is 'Accept: application/json'.
See here for several other options.
Below is an example query to the SPARQL endpoint using HTTPie, and the start of the response.
```
> http GET 'http://localhost:8000/graph?query=SELECT * {?s ?p ?o} LIMIT 100' Accept:application/json
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/sparql-results+json;charset=utf-8
Server: Jetty(9.4.18.v20190429)
Transfer-Encoding: chunked
Via: kong/2.8.0
X-Kong-Proxy-Latency: 32
X-Kong-Upstream-Latency: 56

{
    "head": {
        "vars": [
            "s",
            "p",
            "o"
        ]
    },
    "results": {
        "bindings": [
            {
                "o": {
...
```
This project is working with and investigating the possible uses of linked data. For this, we are currently using two linked data formats to store our data: .nt and .ttl/.ttlx.
- .nt (aka N-Triples) simply stores the triples in full as plain text, one per line, each terminated by a full stop. This makes them very easy for software to generate and parse, but they can become verbose. For example:
- <:bob> <:knows> <:alice> .
- <:bob> <:knows> <:dave> .
- .ttl/.ttlx (aka Turtle, or Terse RDF Triple Language) is designed to be more human-readable and to look similar to SPARQL queries. It is less verbose, leaving out repeated subjects or predicates where possible. Note that if the next triple repeats both the subject and the predicate, the triple is followed by a comma; if it repeats only the subject, it is followed by a semi-colon. For example:
- :bob :knows :alice, :dave .
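A slightly fuller Turtle sketch shows both separators together; the prefix declaration and the extra `:age` triple are illustrative, not taken from the project's data:

```turtle
@prefix : <http://example.org/> .

# comma: the next object repeats both subject and predicate;
# semi-colon: the next predicate-object pair repeats only the subject
:bob :knows :alice, :dave ;
     :age 34 .
```

This single block encodes three triples about :bob without ever repeating the subject.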
Several separate prototypes will be produced at the same time, in order to investigate different approaches to visualising, interacting with, and working with the available data.
- Existing content management system (P1) - Working with Omeka-S and other existing tools to investigate whether they adequately answer the research questions.
- Bespoke (2D) (P2) - A bespoke web app based on Vue3, designed from the ground up to investigate what can be done with linked data for the benefit of the end user.
- Experimental (3D) (P3) - An investigative look into technologies that may be the future of exhibitions. Particular attention will be paid to 3D environments such as Mozilla Hubs.
There is a test suite available. To run the tests, follow the instructions above to run with the database activated. Once this is set up, open a terminal in the main /ohos-observatory/ directory and run the command below:
npm run test
There are some known issues with WSL. If you run this project using WSL and are actively using the Linux side for development, you will likely run into issues related to the timing of POST requests that cross the Windows/Linux barrier. This is inconsistent, and shows itself mostly when sending the config to Kong and when running the automated tests. This bug has been noted and is on the list of tasks to work on. Temporary workaround: if something fails due to a timeout, retry. It may take a few attempts, but sometimes the large delay simply doesn't happen and the timeout issue goes away.