Replies: 7 comments
-
drogon: The server ate a lot of memory.
-
I would consider reaching out to each framework's maintainers :)
-
This is more of a general head-of-line blocking issue than something pipeline-specific. It would be interesting to see whether these frameworks fail the same way with a single large request head (e.g. a garbage-filled header value that consumes a lot of memory). As @franz1981 pointed out, in a real-world web server it's important to use bounded memory for reads/writes and to actively back off when the memory buffer is full.
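To make the "bounded memory + back off" idea concrete, here is a minimal sketch of a per-connection read buffer with an explicit cap. The class name `BoundedReader` and the cap value are illustrative, not from any of the frameworks discussed; a real event-loop server would react to `paused` by removing the socket from its read set until the application drains the buffer.

```python
MAX_BUFFER = 64 * 1024  # illustrative cap on unparsed bytes per connection

class BoundedReader:
    """Sketch of a read buffer that refuses to grow past a fixed limit."""

    def __init__(self, limit=MAX_BUFFER):
        self.limit = limit
        self.buf = bytearray()
        self.paused = False  # True => caller should stop reading the socket

    def feed(self, chunk: bytes) -> bool:
        """Append received bytes; signal backpressure instead of exceeding the cap."""
        if len(self.buf) + len(chunk) > self.limit:
            self.paused = True  # back off: stop polling this socket for reads
            return False
        self.buf += chunk
        return True

    def consume(self, n: int) -> bytes:
        """Hand n parsed bytes to the application; resume reading once below the cap."""
        out = bytes(self.buf[:n])
        del self.buf[:n]
        if len(self.buf) < self.limit:
            self.paused = False
        return out
```

With this shape, a pipelining flood can never force the server to allocate more than `limit` bytes per connection; the client's send() simply stalls once the kernel socket buffers also fill up.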
-
In socketify's case it was a header-parsing bug that I introduced (my bad), but I will ship the new version today. wrk + pipelining with a Lua script works flawlessly, and autocannon also works; something in this script triggered just the right condition.
-
When I implemented HTTP pipelining in the FastWSGI server, I didn't even consider keeping a separate large buffer for incoming packets. With this implementation no additional memory needs to be allocated, but there is also no way to get more than 2 million requests per second in the test.
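The trade-off described above can be sketched as follows: instead of accumulating the whole pipelined stream, the server keeps only the unparsed tail of the last `recv()` and processes one request head at a time. This is an illustrative generator, not FastWSGI's actual C implementation; memory stays bounded by the size of one incomplete request, at the cost of doing parse work synchronously inside the read loop.

```python
def iter_request_heads(buf: bytearray, chunk: bytes):
    """Feed one recv() chunk; yield each complete request head, keep only the tail.

    `buf` holds at most one partial request between calls, so memory use
    does not grow with the number of pipelined requests in flight.
    """
    buf += chunk
    while True:
        end = buf.find(b"\r\n\r\n")  # end of a request head (no body handling here)
        if end < 0:
            break  # partial request: wait for the next chunk
        head = bytes(buf[:end])
        del buf[:end + 4]  # discard the parsed head, keep only the remainder
        yield head
```

A caller would invoke this once per `recv()` and dispatch each yielded head immediately, writing the response before (or while) parsing the next request.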
-
@remittor Have you observed an actual memory leak, or just high memory usage? In the latter case I don't really see an issue: requesting memory from the operating system and giving it back has system-call overheads, so it makes sense for a server to hold onto its allocations and reuse them for future requests. Also, if limiting memory usage is a real concern, cgroups provide a far more reliable way to achieve this, and there is no need to complicate the server code by implementing a similar mechanism there.
-
From the detailed TFB results, you can easily see whether an HTTP server bounds its memory while reading an endless stream of pipelined requests from the TCP socket. Examples from fresh results:
I also noticed this strange situation (at 256 concurrency):
I cannot give an explanation for this.
-
In a web server, one of the most difficult things to implement correctly is request pipelining.
Consequences of incorrect implementation:
To test the indicated problem, I wrote a fairly simple python script: https://gist.github.com/remittor/f2c89f50ff259e6e704fe246ee6c3dd5
Example test run:
$ python3 pl_test.py -h 127.0.0.1 -p 5000 -u /plaintext
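The linked gist has the full script; the core of such a pipelining stress test can be sketched like this (an illustrative reconstruction, not the actual `pl_test.py`): the client concatenates many requests into one payload and keeps writing it while never reading responses, so a server without backpressure must buffer responses (and often request data) without bound.

```python
import socket

# One keep-alive request; host/path mirror the example command line above.
REQ = (b"GET /plaintext HTTP/1.1\r\n"
       b"Host: 127.0.0.1\r\n"
       b"Connection: keep-alive\r\n\r\n")

def build_pipeline(n: int) -> bytes:
    """Concatenate n requests into one payload, as a pipelining client would."""
    return REQ * n

def flood(host: str, port: int, rounds: int = 1000) -> None:
    """Write pipelined requests continuously without ever reading responses.

    A well-behaved server stops reading once its buffers are full, which
    eventually blocks our sendall(); a buggy one grows its memory instead.
    """
    s = socket.create_connection((host, port))
    payload = build_pipeline(100)
    try:
        for _ in range(rounds):
            s.sendall(payload)  # responses are deliberately never read
    finally:
        s.close()
```

Watching the server's RSS during and after such a run is what distinguishes the "ate a lot of memory and did not free it" cases below from servers with bounded buffering.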
libreactor
The server ate a lot of memory and did not free it.
japronto
The server ate a lot of memory and did not free it.
Server logs:
vibora
The server ate a lot of memory. After the test stops, the memory is freed, but not completely.
Server logs are full of errors like:
socketify
There is a segfault, but the author promises to fix it soon.
UPD: Fixed in version 0.0.19
Most likely, the majority of the servers in the TFB test have problems implementing HTTP pipelining.