Question about state of research #6

@isamu-isozaki

Hi! Thanks for the repository. I wrote a blog post going over some of these papers here, and my general conclusions were:

  1. LLMs lack understanding of complicated relationships between characters, so they can't write, say, a mystery.
  2. LLMs have a forgetting-in-the-middle problem over long contexts.
  3. For pacing, suspense, etc., the recursive prompting strategy kind of works, but it's more expensive, and you have to develop a separate prompting strategy for each gap in the LLM's capability. This could be optimized away by training, e.g. with SFT+DPO.
  4. There is some work on foundation-model training of LLMs for creative writing.

Do you think those are correct conclusions atm?
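For concreteness, here is a minimal sketch of what I mean by the recursive prompting strategy in point 3: draft a passage, critique it against one target quality (e.g. suspense), and revise until the critique passes. All names are hypothetical, and `call_llm` is a stub standing in for whatever chat-completion API you use.

```python
def call_llm(prompt: str) -> str:
    # Stub: in practice this would call an actual LLM API.
    return f"[model output for: {prompt[:40]}...]"


def recursive_prompt(premise: str, target: str, max_rounds: int = 3) -> str:
    """Draft a passage, then critique/revise it for one target quality
    (e.g. 'suspense') until the critique passes or the budget runs out."""
    draft = call_llm(f"Write a passage for this premise: {premise}")
    for _ in range(max_rounds):
        critique = call_llm(
            f"Critique this passage for {target}:\n{draft}\n"
            "Reply PASS if no changes are needed."
        )
        if "PASS" in critique:
            break
        draft = call_llm(
            "Revise the passage to address this critique:\n"
            f"Critique: {critique}\nPassage: {draft}"
        )
    return draft
```

The cost concern falls out of the structure: each quality you care about (pacing, suspense, character consistency) needs its own critique prompt and its own revision rounds, so calls multiply per dimension, which is why distilling the loop into the model via SFT+DPO looks attractive.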
