From 7137a057764ee1eb1829204958f808dfe4696d9a Mon Sep 17 00:00:00 2001
From: "james.hancock@torchbox.com"
Date: Mon, 4 Nov 2024 13:25:55 +0000
Subject: [PATCH] Writing fixes

---
 src/pages/api/hello.ts | 13 -------------
 src/pages/index.tsx    | 26 +++++++++++++-------------
 2 files changed, 13 insertions(+), 26 deletions(-)
 delete mode 100644 src/pages/api/hello.ts

diff --git a/src/pages/api/hello.ts b/src/pages/api/hello.ts
deleted file mode 100644
index ea77e8f..0000000
--- a/src/pages/api/hello.ts
+++ /dev/null
@@ -1,13 +0,0 @@
-// Next.js API route support: https://nextjs.org/docs/api-routes/introduction
-import type { NextApiRequest, NextApiResponse } from "next";
-
-type Data = {
-  name: string;
-};
-
-export default function handler(
-  req: NextApiRequest,
-  res: NextApiResponse<Data>,
-) {
-  res.status(200).json({ name: "John Doe" });
-}
diff --git a/src/pages/index.tsx b/src/pages/index.tsx
index f9159db..715acde 100644
--- a/src/pages/index.tsx
+++ b/src/pages/index.tsx
@@ -43,12 +43,12 @@ export default function Home() {
           setSelectedData={setSelectedData}
         />

"Imagine a person..."

-
-
-          What happens when you ask an LLM to imagine a person, and what
-          a random day in their life looks like, 100 times over?
+
+
+          What happens when you ask LLMs to imagine a person &
+          a random day in their life, 100 times over?

-
-          I asked small versions of Llama3.1, Gemma2 & Qwen2.5 to imagine a person, a hundred times over, using the same prompt. The prompt asks for basic details, such as name, age, location and job title, then asks the AI to imagine a random day in that person's life.
+
+          I asked small versions of Llama3.1, Gemma2 & Qwen2.5 to imagine a person, a hundred times over, using the same prompt. The prompt asks for basic details, such as name, age, location and job title, and then asks the AI for a random day in that person's life.

@@ -101,16 +101,16 @@ export default function Home() {
             (Repeat this format for each time entry)
-
+
           I processed the responses of the LLM with Claude Haiku to turn the result
-          into a valid JSON, which is then visualised in this webpage. You can switch between models using the dropdown in the top right of the screen.
+          into JSON, which is then visualised in this webpage. You can switch between models using the dropdown in the top right of the screen.

Caveats

-
+
           This is just for fun. These language models are running on my local machine, using quantized versions of the original models (llama3.1 8b Q4_0, gemma2 2b Q4_0, qwen2.5 7b Q4_K_M). I've set the temperature of my requests to 1.0.
-          Using the original model, experimenting with temperature values or simply changing the prompt will hopefully provide more varied, creative responses.
+          Using the original models, experimenting with temperature values or simply changing the prompt would hopefully provide more varied, creative responses.

Age & Gender

@@ -149,13 +149,13 @@ export default function Home() {
             •
-            I did a quick search and it turns out Anya Petrova has an Amazon bookseller's page with a lot of short stories and fantasy style cover art. I'm sure no AI was used here at all.
+            I did a quick search and it turns out Anya Petrova has an Amazon bookseller's page with a lot of short stories and Stable Diffusion-inspired cover art. This may be a fully automated business setup.
            •
-            The US models don't seem to acknowledge China exists. Qwen 2.5 takes a slightly different view.
+            The US models don't imagine anyone living in China, while Qwen 2.5 couldn't imagine anyone living anywhere else.
            •
-            Llama imagines a third of the workforce as freelance graphic designers, while Qwen imagines that it's at least 80% software engineering.
+            Llama imagines a third of the workforce as freelance graphic designers, while Qwen imagines that it's at least 80% software engineering.
@@ -177,7 +177,7 @@ export default function Home() {

Similar work & next steps

          I stumbled upon a similar experiment investigating ChatGPT bias - timetospare / gpt-bias. I'm afraid I'm otherwise not clued into the latest research in this space. I do love using data visualisation to get a quick glance at the character of different models, within the context of a prompt - it would be awesome to see how different prompts could create better, more diverse outputs.

Source code

-
-          All the source code for this project can be found on GitHub, including the original AI responses and how Haiku processed them.
+
+          All the source code for this project can be found on GitHub, including the original AI responses and how Haiku processed them.

Thank you for visiting! A mini project by James Hancock.
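
Editor's note, outside the patch: the page text above says Claude Haiku turns each free-text LLM response into JSON before visualisation. As a rough sketch of the validation side of that step, here is a hypothetical `parseProfile` helper in TypeScript; the `PersonProfile` shape and its field names are assumptions mirroring the prompt ("name, age, location and job title" plus time entries), not the project's actual code:

```typescript
// Hypothetical schema for one imagined person. The real project's
// fields may differ; this mirrors the prompt's basic details plus
// a list of time entries for the imagined day.
type TimeEntry = { time: string; activity: string };

type PersonProfile = {
  name: string;
  age: number;
  location: string;
  jobTitle: string;
  day: TimeEntry[];
};

// Parse a (Haiku-cleaned) JSON string into a PersonProfile,
// rejecting obviously malformed input before it reaches the charts.
function parseProfile(raw: string): PersonProfile {
  const data = JSON.parse(raw);
  if (
    typeof data.name !== "string" ||
    typeof data.age !== "number" ||
    typeof data.location !== "string" ||
    typeof data.jobTitle !== "string" ||
    !Array.isArray(data.day)
  ) {
    throw new Error("Malformed profile JSON");
  }
  return data as PersonProfile;
}
```

A guard like this is useful because even a cleanup model occasionally emits JSON that parses but misses a field; failing loudly here keeps bad records out of the aggregate age, gender, and job charts.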