As a new studio with no backing from a publisher or investors (yet!), we are having to self-fund our game. As such, we are trying to save money by doing everything ourselves where possible. One thing I was not expecting to have to do was write two thousand quiz questions!
The questions for Quiz Quest are separated into six categories: History & Politics, Sport & Leisure, Geography & World Culture, Entertainment & Pop Culture, Arts & Literature, and Science & Technology. There are also six levels of difficulty within each category: level one being super easy and level six being super hard. Each question also has a hint, which is revealed when the Ring of Truth weapon is used. The task of writing two thousand of these questions, plus two thousand hints, was a daunting one, but Michael had the bright idea of using AI to do most of the legwork. He had been playing around with ChatGPT, asking it to write questions on various topics, and what it came out with seemed pretty decent, so I had a go myself with Microsoft Bing.
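For anyone curious how that breaks down in practice, each question boils down to a handful of fields: a category, a difficulty from one to six, the question text, four possible answers, the correct one, and a hint. The sketch below is purely illustrative (the field names and the Python representation are my own for this post, not our actual game data format), populated with the Stravinsky question discussed later on:

from dataclasses import dataclass

# Purely illustrative sketch: these field names are assumptions for this post,
# not the actual data format used in Quiz Quest.
@dataclass
class QuizQuestion:
    category: str       # e.g. "Arts & Literature"
    difficulty: int     # 1 = super easy ... 6 = super hard
    question: str
    options: list[str]  # the four possible answers
    correct_index: int  # index of the right answer in options
    hint: str           # revealed when the Ring of Truth is used

example = QuizQuestion(
    category="Arts & Literature",
    difficulty=4,  # difficulty chosen arbitrarily for the example
    question="Who composed 'The Rite of Spring'?",
    options=["Igor Stravinsky", "Sergei Prokofiev",
             "Dmitri Shostakovich", "Maurice Ravel"],
    correct_index=0,
    hint="Think riotous Paris premiere, 1913.",
)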
I quickly learned that being too general with the topics was not helpful. I needed to break each category into sub-categories and sometimes focus on something very specific. It could only generate six questions at a time, which is understandable, but, annoyingly, it would repeat questions unless specifically told not to. One thing Bing struggled with was levels of difficulty. I first asked it to write a set of easy questions on a subject, then asked for some difficult ones, and the number of questions repeated between the two sets was ridiculous. In the end I had to judge the difficulty of each question myself, so you can blame me if the questions are too difficult or too easy!
I also soon discovered that a lot of the answers Bing came up with were simply wrong, so most of the time I had to fact-check everything, particularly for History questions. Science questions it could mostly handle (probably because you cannae defy the laws of physics, Captain!), but for subjects like History, and even Geography, it came up with stuff that was flatly incorrect. It would also oversimplify complex subjects in an attempt to create a concise question, even when there was no single correct answer. For example, it wrote a question asking what caused the American Civil War and provided four options, all of which were factors that led to the war, so there was no clear-cut answer. (Turns out the answer is simply ‘slavery’. Good job, Bing!)
The hints it came out with were mostly awful too. Here’s an example:
Q: Who composed ‘The Rite of Spring’?
A. Igor Stravinsky
B. Sergei Prokofiev
C. Dmitri Shostakovich
D. Maurice Ravel
Correct Answer: A. Igor Stravinsky
Hint: His name might make you think of a bird, but his music will make you think of spring!
The question is fine, but that hint is dreadful. I had to Google the connection between ‘Igor’ and ‘bird’, and it turns out there’s a Japanese children’s book called ‘Igor, the Bird Who Couldn’t Sing’ - not the sort of thing most people, or even veteran quizzers, would know. Naturally, we opted to write the hints ourselves!
But quirks and annoyances aside, AI has been useful, mostly as a way to come up with ideas for questions. Even if the questions it writes itself aren’t great, they might suggest a different question to me that I could then quickly fact-check and write. Basically, using AI meant I didn’t have to start from scratch, so, in that respect, it has been a handy tool. This has proved to me, however, that AI still has a long way to go before it can think like a proper human. It also really needs to work on its sense of humour!