unbearably slow output via mods #635

@magikRUKKOLA

Description

I was testing long-context output (several thousand tokens). Once the output speed exceeds roughly 10 tps, mods starts to consume up to 200% CPU (that is, two full cores at 4 GHz each!!), which interferes with the LLM inference when it runs on the same machine (inference gets 10-15% slower). Interestingly, the displayed output is also slower than the actual LLM decode: the system monitor shows the inference has finished, but mods lags behind and is still struggling to catch up. Why is that?
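My guess (I haven't checked the mods source, so this is an assumption) is that the TUI re-renders the entire accumulated response on every streamed chunk, so total rendering work grows quadratically with output length. Here is a minimal Go sketch contrasting that pattern with incremental rendering; `rerenderAll` and `renderIncremental` are hypothetical stand-ins, not actual mods functions:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// rerenderAll simulates a streaming TUI that reprocesses the whole
// accumulated buffer on every incoming chunk (a hypothetical stand-in
// for a full markdown re-render pass). Total work grows quadratically
// with output length.
func rerenderAll(chunks []string) time.Duration {
	start := time.Now()
	var buf strings.Builder
	for _, c := range chunks {
		buf.WriteString(c)
		_ = strings.Count(buf.String(), "\n") // scans everything so far
	}
	return time.Since(start)
}

// renderIncremental touches only the new chunk, so total work is
// linear in output length.
func renderIncremental(chunks []string) time.Duration {
	start := time.Now()
	total := 0
	for _, c := range chunks {
		total += strings.Count(c, "\n") // scans only the new bytes
	}
	_ = total
	return time.Since(start)
}

func main() {
	// Roughly "several thousand tokens" worth of streamed chunks.
	chunks := make([]string, 20000)
	for i := range chunks {
		chunks[i] = "word word word\n"
	}
	fmt.Println("re-render everything:", rerenderAll(chunks))
	fmt.Println("incremental render:  ", renderIncremental(chunks))
}
```

If this is the cause, the re-render variant's runtime explodes for long outputs while the incremental one stays flat, which would match the symptom of mods burning CPU and falling further behind the longer the reply gets.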

Version

master from a few months ago

Environment

Debian Linux (testing)
