All these new LLMs generate coherent text, but using one would take all the fun out of this. So I'm considering building a Markov chain from some (probably quite large, public-domain) corpora, and then running a grammar-fixer of my own design over the top of it. Ideally it would fix up verb tenses and so on.
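Something like this minimal sketch is what I have in mind for the chain itself (word-level, order-2; the names are just placeholders):

```python
import random
from collections import defaultdict

def build_chain(words, order=2):
    """Map each `order`-word prefix to every word that follows it in the corpus."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain

def generate(chain, order=2, length=50):
    """Random-walk the chain from a randomly chosen starting prefix."""
    prefix = random.choice(list(chain.keys()))
    out = list(prefix)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break  # dead end: this prefix only appeared at the very end of the corpus
        out.append(random.choice(followers))
    return " ".join(out)

# e.g. words = open("moby_dick.txt").read().split()
#      print(generate(build_chain(words)))
```

The grammar-fixer would then be a separate pass over the generated text.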
Maybe I'll impose some additional structure if I get time, like switching corpora between different sections to change the tone. I might also experiment with some more advanced (but still very simple) machine learning, like word2vec.
The goal is for the overarching structures and rules that the sentences follow to be mine, even if the individual words and sentences are chosen randomly.
I might also consider tagging words during Markov chain construction with how far through their source text they appeared. That way I can generate words weighted by how close their source position is to the corresponding part of the generated book, and hopefully get more introductions at the start and more climactic finale sentences at the end.
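Roughly what I mean by the position tagging, as a sketch (the `target` argument would be how far through the generated book we currently are, and `sharpness` is a made-up knob that would need tuning):

```python
import math
import random
from collections import defaultdict

def build_positional_chain(words, order=2):
    """Same chain as above, but each follower also remembers where it
    appeared in its source text, as a fraction from 0.0 to 1.0."""
    chain = defaultdict(list)
    n = len(words)
    for i in range(n - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append((words[i + order], i / n))
    return chain

def pick_follower(followers, target, sharpness=10.0):
    """Pick a follower, favouring candidates whose source position is close
    to `target`, so early-book words show up early and late-book words late."""
    weights = [math.exp(-sharpness * abs(pos - target)) for _, pos in followers]
    return random.choices(followers, weights=weights, k=1)[0][0]
```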