We are constantly being bitten by Golang's GC. Its performance (i.e. throughput) seems to be the culprit. We think that having a non-generational, non-compacting GC is seriously affecting the performance of our application (our app generates a lot of short-lived objects and uses some immutable data structures).
It would be good to have a confirmation through an independent throughput test.
I could generate graphs for several different throughputs, but that would also start measuring general language and runtime-library efficiencies (parsing the headers, etc.).
But what if in addition to the normal GC work, the program also created N short-lived objects (only for the duration of the request) for some percentage P of requests? That would more accurately mirror what happens in real applications (there are many more temporary objects that get created, not just the long-lived ones). If the GC can't deal with that well, it will also affect the latency...
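A minimal sketch of what that extra allocation load could look like (the function names, object size, and parameters here are illustrative assumptions, not part of any existing benchmark): for some fraction `p` of simulated requests, allocate `n` temporary objects that die as soon as the request completes, so the GC has to reclaim them.

```go
package main

import (
	"fmt"
	"math/rand"
)

// churn simulates the extra per-request work: it creates n short-lived
// objects that become garbage as soon as the function returns, and
// returns a checksum so the allocations cannot be optimized away.
// The 64-byte object size is an arbitrary illustrative choice.
func churn(n int) int {
	sum := 0
	for i := 0; i < n; i++ {
		b := make([]byte, 64) // short-lived; dead after this iteration
		b[0] = byte(i)
		sum += int(b[0])
	}
	return sum
}

// serve simulates handling `requests` requests, triggering the extra
// allocations for roughly the fraction p of them.
func serve(requests, n int, p float64) int {
	total := 0
	for i := 0; i < requests; i++ {
		if rand.Float64() < p {
			total += churn(n)
		}
	}
	return total
}

func main() {
	fmt.Println(churn(3)) // deterministic checksum: 0+1+2 = 3
	_ = serve(1000, 100, 0.1)
}
```

With this in place, one could sweep `n` and `p` and watch how GC pause times and overall latency respond to the growing volume of short-lived garbage.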