Yes I know, this is yet another language model. I used PEFT to finetune Cerebras-GPT-2.7B on the Alpaca dataset, which makes for a very fast, coherent, albeit hallucination-prone model.
It took 5 hours on a vast.ai machine at 29 cents an hour, so less than a dollar and a half to finetune.
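For context, here's a minimal sketch of what a PEFT LoRA setup for this base model might look like. The hyperparameters and `target_modules` below are illustrative assumptions, not the exact values used for this finetune:

```python
# Sketch: LoRA finetuning setup for Cerebras-GPT-2.7B with PEFT.
# r, lora_alpha, dropout and target_modules are assumptions, not the actual values used.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "cerebras/Cerebras-GPT-2.7B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Cerebras-GPT uses the GPT-2 architecture, whose fused attention projection
# is called "c_attn"; adapting only that layer keeps the trainable footprint tiny.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of the 2.7B weights train

# ...then run a standard causal-LM Trainer loop over the Alpaca
# instruction/response pairs.
```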
The repository contains steps to reproduce the model, the dataset, links to the merged model on Hugging Face, a Colab notebook, and the LoRA weights.
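If you'd rather apply the LoRA weights yourself instead of pulling the merged model, a sketch like the following should work. The adapter repo id below is a placeholder, not the real one:

```python
# Sketch: merging the LoRA adapter back into the base model.
# "your-username/cerebras-gpt-2.7b-alpaca-lora" is a placeholder repo id.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-2.7B")
model = PeftModel.from_pretrained(base, "your-username/cerebras-gpt-2.7b-alpaca-lora")

# Fold the low-rank updates into the base weights so the result is a plain
# transformers model (handy e.g. for converting to ggml afterwards).
merged = model.merge_and_unload()
merged.save_pretrained("cerebras-gpt-2.7b-alpaca-merged")
```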
There's also a chatbot app and a link to a ggml version of the model that you can run on CPU alone via https://github.com/ggerganov/ggml!
Enjoy!