-
Yes. To sum it up: the TechEmpower benchmarks are useless, because they don't impose any constraints on how the applications and their environments are set up. What you're seeing there is mostly a difference in the performance of servers, not frameworks.

You actually provided a very good example of this. In the screenshot you shared, FastAPI is supposedly faster than Starlette, which, if you think about it, makes no sense whatsoever: FastAPI is implemented on top of Starlette and does not alter its routing or core functionality in any way; all it does is add some logic on top of it. The only possible outcome should be that it's slower. Yet the results show it to be almost twice as fast, so how is that possible? Well, in this benchmark Starlette (and Starlite) were running on gunicorn with uvicorn workers, using stock uvicorn, while FastAPI was running uvicorn directly, with Cython-compiled dependencies (which handle the protocol layer) and uvloop. That makes the comparison pretty much useless, since what's effectively being benchmarked are different servers and server setups.

In our benchmarking suite, we use the exact same runner image for all frameworks, which makes for a much better comparison. Of course there are issues with this as well, since not all configurations benefit all frameworks equally. Sanic, for example, seems to be slower when running uvicorn with the Cython dependencies for some reason, while blacksheep's performance was almost unaffected. Therein lies the crux of framework benchmarks: different configurations and environments impact each framework differently, and you can pretty much tweak the setups to give you any result you want (there is an interesting article by Miguel Grinberg on this topic). It is very important to keep in mind that benchmarks like these only tell you how a framework will perform under these exact conditions.

Another difference between the TechEmpower benchmarks and ours is what's actually being tested. The TechEmpower ones are very synthetic, while we try to simulate more realistic loads. There can be a stark difference between how a framework handles 10 bytes of plaintext and how it handles 100 kB, which is something TechEmpower does not test for. Our suite is quite extensive and tries to cover a lot of basic scenarios with varying load sizes.

I think the takeaway should be that synthetic benchmarks like these are almost never a good indicator of how well an application will perform. Personally, I think people really ought to stop referring to the TechEmpower benchmarks, and frameworks should stop participating. Chasing higher numbers in that manner is quite pointless anyway and has almost no influence on real-world applications.
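To illustrate the first point above: a minimal sketch (assuming both packages are installed) showing that a FastAPI application literally *is* a Starlette application underneath, so FastAPI's extra layer can only add overhead on top of Starlette's routing, never remove it:

```python
from fastapi import FastAPI
from starlette.applications import Starlette

app = FastAPI()

# FastAPI subclasses Starlette, so every FastAPI app is a Starlette app
# with extra validation/serialization logic layered on top of it.
assert isinstance(app, Starlette)
```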
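And a rough sketch of the server-setup difference described above. The module path and worker count are made up for illustration, but the knobs are ones uvicorn actually exposes:

```python
# Setup used for Starlette/Starlite in that round (roughly): gunicorn managing
# uvicorn workers, e.g.
#   gunicorn app:app -k uvicorn.workers.UvicornWorker
# With stock uvicorn (no Cython extras installed), the "auto" selection falls
# back to the pure-Python asyncio event loop and the h11 HTTP parser.

# Setup used for FastAPI in that round (roughly): uvicorn run directly with the
# Cython-backed extras from "uvicorn[standard]":
import uvicorn

if __name__ == "__main__":
    uvicorn.run(
        "app:app",         # hypothetical module:attribute import string
        loop="uvloop",     # uvloop event loop instead of plain asyncio
        http="httptools",  # httptools HTTP parser instead of h11
        workers=4,         # arbitrary worker count for this sketch
    )
```

Same application code, very different throughput; that difference gets attributed to the framework in the published charts.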
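As for the payload-size point: a tiny sketch (in Starlette, with made-up route names) of why "10 bytes of plaintext" and "100 kB" are really two different benchmarks:

```python
from starlette.applications import Starlette
from starlette.responses import PlainTextResponse
from starlette.routing import Route


async def tiny(request):
    # ~10 bytes: dominated by routing and protocol overhead
    return PlainTextResponse("x" * 10)


async def large(request):
    # ~100 kB: dominated by response serialization and socket writes
    return PlainTextResponse("x" * 100_000)


app = Starlette(routes=[Route("/tiny", tiny), Route("/large", large)])
```

A suite that only ever measures the first endpoint says very little about how a framework behaves on the second.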
-
I did some quick runs of the `api-performance-tests` on an AWS instance, and Starlite/Litestar generally comes out on top among the five contenders, often by a large margin 🚀
The TechEmpower web framework benchmarks, however, paint a different picture. Just looking at the latest JSON serialization results, Starlite ranks roughly in the middle of ~50 Python frameworks. Limiting the list to the ones covered by `api-performance-tests`, it's at the bottom: [screenshot of the results]. Granted, these results are almost a year old now, but new versions aside, any insights on other reasons for such a glaring discrepancy?