12 Comments
Mar 17, 2023 · Liked by Finbarr Timbers

Wow, this was super helpful - probably the most intuitive write-up on LLM hardware economics that I’ve read anywhere.

One question for you - how does the length of the context window fit into this equation? AFAIK, longer context windows are more computationally expensive, even if you don’t fill them with tokens. How do you account for that in your calculations?
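To put rough numbers on what I mean, here's a quick sketch of one piece of it - how the KV cache grows with the tokens actually held in context. The model dimensions below are my own assumptions for a 7B-class model, not figures from the post:

```python
# Rough sketch: KV-cache memory vs. context length.
# All model dimensions are illustrative assumptions for a
# 7B-class decoder (e.g. 32 layers, 32 heads, head dim 128), fp16.
n_layers = 32
n_heads = 32
d_head = 128
n_bytes = 2  # bytes per value in fp16

# Each cached token stores one key and one value vector per layer.
kv_bytes_per_token = 2 * n_layers * n_heads * d_head * n_bytes

for context_len in (2_048, 8_192, 32_768):
    gib = context_len * kv_bytes_per_token / 2**30
    print(f"{context_len:>6} tokens -> {gib:.1f} GiB of KV cache")
```

This only counts tokens actually cached, though - whether an empty-but-long window costs anything extra is exactly what I'm asking about.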

Aug 16, 2023 · Liked by Finbarr Timbers

I tried to work through the math where you describe the optimal batch size for memory-bound vs. compute-bound inference, and I think there may be an error: the multiplicative factor of B (batch size) should be in the compute latency calculation.

Kipply's blog has the same formulation: https://kipp.ly/transformer-inference-arithmetic/#batch-sizes
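To make the crossover concrete, here's a rough sketch with B on the compute term, following Kipply's formulation. The hardware numbers are A100 specs; the parameter count is just an illustrative assumption:

```python
# Sketch of the batch-size crossover, with B multiplying the compute
# term rather than the memory term, per Kipply's formulation.
# Hardware numbers are A100 specs; the model size is assumed.
P = 13e9                    # parameters (illustrative assumption)
flops = 312e12              # A100 peak fp16 throughput, FLOPs/s
memory_bandwidth = 1.5e12   # A100 HBM bandwidth, bytes/s

def latency_memory():
    # Weights (2 bytes/param in fp16) are streamed from HBM once per
    # forward pass, regardless of batch size.
    return 2 * P / memory_bandwidth

def latency_compute(B):
    # ~2 FLOPs per parameter per token, for each sequence in the batch.
    return 2 * P * B / flops

for B in (1, 64, 208, 512):
    mem, comp = latency_memory(), latency_compute(B)
    bound = "compute" if comp > mem else "memory"
    print(f"B={B:>3}: memory {mem*1e3:5.1f} ms, compute {comp*1e3:5.1f} ms -> {bound}-bound")
```

With these numbers the two terms cross at B = flops / memory_bandwidth ≈ 208, which is the same crossover Kipply arrives at for an A100.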

Can you please explain the equation

latency_memory = 2 * P * n_bytes / memory_bandwidth ?

I am struggling with the factor of 2 and can't figure out where it came from. Thank you
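For scale, here is the equation evaluated exactly as written, taking the factor of 2 at face value rather than explaining it; the model and hardware figures are assumptions on my part, not numbers from the post:

```python
# Evaluating the post's equation as written, at face value.
# Model/hardware figures are illustrative assumptions: a 13B-parameter
# model in fp16 on ~1.5 TB/s (A100-class) memory bandwidth.
P = 13e9                    # parameters (assumed)
n_bytes = 2                 # bytes per parameter in fp16
memory_bandwidth = 1.5e12   # bytes per second (assumed)

latency_memory = 2 * P * n_bytes / memory_bandwidth
print(f"latency_memory = {latency_memory * 1e3:.1f} ms per forward pass")
```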
