Wow, this was super helpful - probably the most intuitive write-up on LLM hardware economics that I’ve read anywhere.
One question for you - how does the length of the context window fit into this equation? AFAIK, longer context windows are more computationally expensive, even if you don’t fill them with tokens. How do you account for that in your calculations?