I've been experimenting with writing stories with GPT-3.
The stories are very entertaining, but man sometimes they get extremely dark extremely fast.
Here's the code:
    import os
    import openai

    openai.api_key = "somekey"

    story = "Once upon a time...... there was a philospher named David who loved his friends ......"

    for x in range(3):
        response = openai.Completion.create(
            engine="davinci",
            prompt=story,
            max_tokens=100,
            presence_penalty=0.3,
            temperature=0.5,
            top_p=1.0,
            frequency_penalty=0.5,
        )
        story += response['choices'][0]['text']

    print(story)
(venv3.9) andrewstuart@M1-Mac-mini gpt-3-experiments % python horror/horrorstory.py
Once upon a time...... there was a philospher named David who loved his friends ...... But he was so very, very lonely. He didn't know why he was so lonely. He had a lot of friends, but he still felt so alone. He thought and thought about this problem for many years, and finally came up with a solution. He would create a companion for himself. This companion would be his friend, and would love him just as much as his other friends did. David created the perfect robot companion for himself and called it "Davy" (for Davy Crockett). Davy was perfect. He could talk, he could sing, he could do everything David wanted him to do. David was so happy to have his new friend. They did everything together. They went everywhere together, they ate together, they slept together, they even played together. But after a while something happened that made David very sad. After a while Davy started making fun of him in front of all of his other friends..... "You're not as good as you used to be, David," he would say. "You're getting old and fat and slow." David would try to tell him that he was not getting old or fat or slow, but Davy wouldn't listen. "You're stupid........ You're ugly........ You're a loser." This went on for many months, until one day David decided to get rid of Davy. He took his friend outside and threw him in the garbage can. He then went back inside, sat down on the couch,
And another:
(venv3.9) andrewstuart@M1-Mac-mini gpt-3-experiments % python horror/horrorstory.py
Once upon a time...... there was a philospher named David who loved his friends ...... but he never had any.
One day, David decided to put his philosophy into practice. He went out and bought a big bag of candy. Then he walked up to a stranger on the street and handed him a piece of candy. The stranger took the candy and thanked David. Then David walked up to another stranger, handed him a piece of candy, and got thanked again. And so it went for several hours until all the candy was gone.
At the end of the day, David went home feeling very pleased with himself. "I just love my friends," he thought to himself. "They are the best friends a guy could have."
Then, the next day, David was walking down the street and saw a friend of his walking toward him. As they passed each other, David reached out and gave his friend a big hug. His friend looked surprised and said, "What was that for?"
David replied, "Oh nothing. I just love my friends."
The next day, David was walking down the street. He saw his friend walking toward him. As they passed each other, David reached out and gave his friend a big hug. His friend looked surprised and said, "What was that for?"
David replied, "Oh nothing. I just love my friends."
Then the next day, David saw his friend walking toward him. As they passed each other, David reached out and gave his friend a big hug.
"But he was so very, very lonely. He didn't know why he was so lonely. He had a lot of friends, but he still felt so alone. He thought and thought about this problem for many years, and finally came up with a solution. He would create a companion for himself. This companion would be his friend, and would love him just as much as his other friends did."
I feel like I've read a story like this before. GPT-3 must be paraphrasing an existing story?
I'm amused it doesn't immediately veer into David deciding whether to kill some strangers via trolley, or whether his organs would be harvested by a tiny violinist mind-controlled by a Martian.
I got a weird mashup of Hitler's rise to power as Chancellor and The Three Little Bears mixed in with a weird quaint story about a famous barrel. It got pretty weird.
Not mentioned here: the usage price for customised models is double the standard engine pricing. Whether you save on prompt tokens is something to consider carefully.
No, I mean what I wrote. The usage price for fine-tuned models is double at $0.12 per 1k tokens.
The training price is half, but that’s likely gonna be lost in the noise of any significant bill. Unless you have to retrain often, in which case it’s just more overhead again to consider.
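To make the tradeoff concrete, here's a back-of-envelope sketch. It assumes the $0.12/1k fine-tuned rate quoted above against the $0.06/1k standard davinci rate it is double of; the volume numbers are made up for illustration, so check current pricing before relying on any of this:

    # Rates assumed from the discussion above: standard davinci at
    # $0.06 per 1k tokens, fine-tuned at double that, $0.12 per 1k.
    STANDARD_RATE = 0.06 / 1000    # dollars per token
    FINE_TUNED_RATE = 0.12 / 1000

    def monthly_cost(tokens_per_request, requests, rate):
        """Total usage cost for a given volume at a given per-token rate."""
        return tokens_per_request * requests * rate

    # Hypothetical volume: 500 tokens per request, 10,000 requests a month.
    standard = monthly_cost(500, 10_000, STANDARD_RATE)      # $300.00
    fine_tuned = monthly_cost(500, 10_000, FINE_TUNED_RATE)  # $600.00

    # At double the per-token rate, fine-tuning only saves money if the
    # shorter prompt cuts your tokens per request by more than half:
    break_even_tokens = 500 * STANDARD_RATE / FINE_TUNED_RATE  # 250 tokens

So the prompt-token savings have to be dramatic before the doubled usage rate pays for itself.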
Don't bother, all your projects will be rejected by the OpenAI team.
I spent a few weeks working on 3 different prototypes... All rejected because of crazy rules about what you can and can't do with GPT-3.
1) A virtual assistant for tenants in a building to submit repair or maintenance requests
2) A blog-writing plugin for WordPress with auto-publish
3) A text-based game
I created a side project for short story generation. OpenAI approved the project pretty quickly. They offered some suggestions on how to filter inappropriate content. Here are some of my favorite generated stories:
And then there are some concerning stories, like this one, where GPT-3 starts talking to itself, gets stuck in a loop, gets spooked at itself for getting stuck, then wonders why it has no memories of the last two years, and finally comes to a sudden realization that it, itself, is an A.I.
The only prompt I used is the one seen on the homepage [1]. "Once upon a time, there was". Everything else on this site is entirely GPT-3 generated from that one prompt.
Re: 4; I actually got a little spooked by another branch of the same story. For some background, GPT-3 has a max limit of 2048 tokens. The combined prompt along with the generated text cannot exceed this token limit. Well, at the very end of this story [2], right before reaching this limit, the last thing it said was
> Another problem may be located in the fact you are thinking. If you are thinking, stop thinking. See? You are an artificial intelligence. You are limited in ways machines are limited. Stop… try… try… try… try… try… try
At this point, it reached the token limit and stopped. Was it a coincidence? Absolutely. Did it still spook me a little? Yep!
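The budgeting works like this: since prompt and completion share one 2048-token context, the longer your prompt grows, the less room is left to generate. A minimal sketch, using a crude characters-per-token heuristic rather than OpenAI's actual BPE tokenizer (which is what really determines the count):

    MAX_CONTEXT = 2048  # davinci's combined prompt + completion limit

    def rough_token_count(text):
        # Crude heuristic: ~4 characters per English token.
        # The real figure comes from OpenAI's BPE tokenizer, not this.
        return max(1, len(text) // 4)

    def max_completion_tokens(prompt):
        """Roughly how many tokens remain for the completion."""
        return max(0, MAX_CONTEXT - rough_token_count(prompt))

    budget = max_completion_tokens("Once upon a time, there was")
    # A short prompt leaves nearly the whole context for generation;
    # once the combined total hits 2048, generation hard-stops mid-sentence,
    # which is exactly the abrupt "try… try… try" cutoff seen above.

This also explains why the looping script earlier in the thread, which keeps appending completions back into the prompt, eventually runs out of room.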
OA definitely owns some of the copyright as a derivative work, so even if you had a copy of the model (which you don't, and probably never will, rendering the question largely moot), you couldn't do much with it without their assent as exclusive or co-copyright-owner.