Hi! Thanks for the repository. I wrote a blog post going over some of these papers, and my general conclusions were:
- LLMs lack understanding of complicated relationships between characters, so they can't write, say, a mystery
- LLMs have a forgetting-in-the-middle problem
- For pacing, suspense, etc., the recursive prompting strategy kind of works, but it's more expensive, and you have to develop a separate prompting strategy for each gap in the LLM's capabilities. This can be optimized by fine-tuning, e.g. with SFT+DPO
- There is some work on foundation-model training of LLMs specifically for creative writing

Do you think those are correct conclusions at the moment?