Make Zurich challenge: "Develop a chatbot that draws attention, is fun to use, and helps to save energy."
There are two different use-cases outlined in the presentation:
As a user of the ewz website,
I want to have access to an energy assistant chatbot,
so that I can get relevant answers to my questions outside of office hours.
As a sophisticated user of smart home hubs and devices,
I want to be able to incorporate a smart home energy assistant,
so that I can optimize my control and cost saving strategies.
- On ewz site page load, a UserScript runs in the browser to augment the Beratung section with a chatbot button.
- The user clicks the button, which opens a form in the lower right corner of the browser window.
- The user enters text into the form prompt area and clicks submit.
- JavaScript attached to the form performs a Cross-Origin Resource Sharing (CORS) XMLHttpRequest to a Node.js server that has been pre-configured with Large Language Model (LLM) prompts to act as an energy assistant.
- The Node.js server submits a formatted Chat Completions API request to OpenAI.
- The result is unpacked by the Node.js server and returned as the result of the XMLHttpRequest to the browser client.
- JavaScript on the client browser displays the result text and loops to request the next prompt.
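The client-side half of the steps above can be sketched as follows. This is a minimal illustration, not the actual userscript: the endpoint path (`/chat`), port, and payload field names are assumptions.

```javascript
// Minimal sketch of the client side of the loop described above.
// The endpoint path, port, and payload shape are hypothetical.
const SERVER_URL = "http://localhost:3000/chat"; // locally running server (hardcoded)

// Build the options for the cross-origin POST carrying the user's prompt.
function buildChatRequest(prompt) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  };
}

// In the userscript, the form's submit handler would do roughly:
//   fetch(SERVER_URL, buildChatRequest(promptText))
//     .then((res) => res.json())
//     .then((data) => displayAnswer(data.answer)); // then loop for the next prompt
```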
For the "Advice from a Website" use-case, the current status is a proof of concept:
- a basic userscript that sends a prompt to the back-end server
- a back-end server component that can interact with the OpenAI chat model using the Langchain framework
- a preliminary set of prompts to turn the OpenAI ChatGPT model into an energy assistant
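On the server side, the back-end uses the Langchain framework; for brevity this sketch builds the equivalent raw Chat Completions payload instead. The system prompt text below is illustrative only, not the project's actual prompt set.

```javascript
// Sketch of the back-end request assembly. The real server goes through
// Langchain; this shows the equivalent raw Chat Completions payload.
// The system prompt is an illustrative stand-in for the real prompt set.
const SYSTEM_PROMPT =
  "You are an energy assistant for ewz customers. " +
  "Answer questions about saving energy concisely and accurately.";

// Build the Chat Completions payload from the user's question.
function buildCompletionPayload(userPrompt) {
  return {
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: userPrompt },
    ],
  };
}

// The server POSTs this to https://api.openai.com/v1/chat/completions with an
// "Authorization: Bearer <OPENAI_API_KEY>" header, then unpacks
// response.choices[0].message.content and returns it to the browser client.
```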
Here is the link to the raw userscript from GitHub (the standard GitHub page renders the file rather than serving the raw script). Install it in Tampermonkey or Greasemonkey in your browser, along with a locally running server component (the server address is hardcoded to localhost).
Various devices in the home periodically send messages to The Things Network (TTN). This produces a time series of "measurements" or a body of "documents", depending on how you look at it.
The ChatBot is configured with an API Chain: a textual "paste" of the HTTP endpoint's API specification together with a (templated) question that should produce an HTTP fetch of the data based on that text documentation (here there be magic).
Alternatively, all the documents could be fed into the ChatBot as an Index Chain and then queried via a suitable question.
The end result should allow a user to interact with a ChatBot and request information about the IoT devices in their home.
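For a concrete picture of the HTTP fetch the API Chain is expected to derive from the textual specification, here is a sketch of a direct request to the TTN Storage Integration. The application id, API key, and region are placeholders, and the endpoint path reflects the TTN v3 Storage Integration as documented at the time; verify against the current reference.

```javascript
// Sketch of the direct fetch the API Chain should reconstruct from the spec:
// reading stored uplink messages from the TTN Storage Integration.
// appId, apiKey, and region are placeholders.
function buildTtnStorageRequest(appId, apiKey, region = "eu1") {
  return {
    url:
      `https://${region}.cloud.thethings.network` +
      `/api/v3/as/applications/${appId}/packages/storage/uplink_message`,
    options: {
      method: "GET",
      headers: {
        Authorization: `Bearer ${apiKey}`, // TTN API key with storage rights
        Accept: "application/json",
      },
    },
  };
}
```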
The use of the API Chain method is problematic for three reasons:
- The text needed to describe the TTN API that would fetch IoT data is fairly large, which is exactly what you pay for under the OpenAI revenue model. For the simple example code we used, the breakdown is 897 prompt + 65 completion = 962 tokens, or about 0.2¢ per call (or up to 30 times that for the GPT-4 model). The Things Stack is gnarly and complex: a textual specification of the API, as could be scraped from the overview and The Things Stack API Reference, includes many subtleties regarding authentication headers, field codes, JSON message bodies, etc., so the entire description could be on the order of tens of thousands of tokens, which makes the cost prohibitive.
- Fetching IoT data via the TTN API required enabling the Storage Integration, which makes The Things Network the point of storage for the data. This is not really correct usage, since the intent is just to transmit IoT messages as a pipeline.
- Using the OpenAI ChatBot mechanism to do the low-level fetching of IoT data is an impedance mismatch; that is, it is using a sledgehammer to do the job of a scalpel. It illustrates the adage "When you have a hammer, everything looks like a nail."
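The per-call cost figure in the first point above can be reproduced directly. This uses the flat gpt-3.5-turbo rate of $0.002 per 1K tokens in effect at the time; current pricing differs and charges prompt and completion tokens at separate rates.

```javascript
// Reproduce the cost estimate from the token counts above, using the flat
// gpt-3.5-turbo rate of $0.002 per 1K tokens in effect at the time.
function costPerCallUsd(promptTokens, completionTokens, usdPer1kTokens = 0.002) {
  return ((promptTokens + completionTokens) / 1000) * usdPer1kTokens;
}

// 897 prompt + 65 completion = 962 tokens
// costPerCallUsd(897, 65) ≈ $0.0019, i.e. about 0.2¢ per call
```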
We added a bash integrator script that reads from The Things Network and sends each reading to the ChatBot as a message:
"Please remember this IoT JSON from my device:" + json
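The actual integrator is a bash script; this JavaScript sketch just mirrors how each uplink is wrapped into the chat message above.

```javascript
// Mirror of the integrator's message formatting (the real integrator is a
// bash script): wrap an IoT uplink payload into a "remember this" prompt.
function wrapIotMessage(uplinkJson) {
  return "Please remember this IoT JSON from my device:" + JSON.stringify(uplinkJson);
}
```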
The first few messages from the IoT devices are accepted by the ChatBot, and one can ask questions about them using the initial client interface. However, after those first few messages, the process fails.
We ran out of time.