use lock memory as non-root users (sshd_config: UsePAM yes)

Required Info:

Steps to reproduce issue

Before setting memlock limits, check the current limit:

ulimit -l
65536

Do as ros2-realtime-examples/minimal_memory_lock/README.md (Line 14 in f61f93f) says to adjust the memlock limits:

- Adjust permissions for memory locking. Add to `/etc/security/limits.conf`
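The README presumably shows the exact entries to add; purely as an illustration (the `<your_user>` domain and the 524288 KiB value below are placeholders, not taken from the README), a memlock entry in `/etc/security/limits.conf` looks like this:

```
# /etc/security/limits.conf -- values are in KiB
<your_user>   soft   memlock   524288
<your_user>   hard   memlock   524288
```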
Then run `minimal_memory_lock` with `--allocate-in-node 50 --lock-memory`.
Expected behavior
Process memory before node creation: 5 MB
Sleeping here so all the middleware threads are created
Process memory before locking: 20 MB
Process memory locked
Process memory after locking: 106 MB
Process memory before spin: 106 MB
Total page faults before spin [Minor: 22671, Major: 0]
[WARN] [1697725046.436525305] [minimal_publisher]: New page faults during spin: [minor: 12822, major: 0]
[INFO] [1697725046.907732793] [minimal_publisher]: New page faults during spin: [minor: 0, major: 0]
[INFO] [1697725047.407619649] [minimal_publisher]: New page faults during spin: [minor: 0, major: 0]
[INFO] [1697725047.907790732] [minimal_publisher]: New page faults during spin: [minor: 0, major: 0]
[INFO] [1697725048.407655468] [minimal_publisher]: New page faults during spin: [minor: 0, major: 0]
...
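The page-fault counters in this log are the health check: after locking, the minor/major fault counts stay at zero during spin. As a hedged sketch only (this is not the example's actual code), such per-process counters can be read on Linux with getrusage(2):

```cpp
// Sketch: read the process's cumulative page-fault counters via getrusage(2).
#include <sys/resource.h>
#include <cstdio>

int main()
{
  struct rusage usage {};
  if (getrusage(RUSAGE_SELF, &usage) == 0) {
    // ru_minflt: faults serviced without I/O (minor); ru_majflt: faults requiring I/O (major)
    std::printf("Page faults [Minor: %ld, Major: %ld]\n", usage.ru_minflt, usage.ru_majflt);
  }
  return 0;
}
```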
Actual behavior
Process memory before node creation: 5 MB
Sleeping here so all the middleware threads are created
Process memory before locking: 20 MB
terminate called after throwing an instance of 'std::runtime_error'
  what():  mlockall failed. Error code Cannot allocate memory
[1] 3244501 abort
Then check the memlock limits with:
ulimit -l
65536
It remains unchanged at 65536 KiB (64 MiB), which is less than what `--allocate-in-node 50 --lock-memory` needs (the process locks roughly 106 MB in the expected run above). I believe that's why `mlockall()` failed.
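For context, here is a hedged sketch of how such a memory-locking call typically looks (not the example's actual code): mlockall(2) fails with ENOMEM, reported as "Cannot allocate memory", when the pages to be locked would exceed RLIMIT_MEMLOCK for an unprivileged process.

```cpp
// Sketch: lock current and future allocations into RAM, as real-time nodes typically do.
#include <sys/mman.h>
#include <cerrno>
#include <cstring>
#include <stdexcept>
#include <string>

void lock_process_memory()
{
  if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
    // errno is ENOMEM when the locked pages would exceed RLIMIT_MEMLOCK (ulimit -l).
    throw std::runtime_error(std::string("mlockall failed. Error code ") + std::strerror(errno));
  }
}
```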
Additional information
Since I was logging in over ssh, there's an additional step to take (full sequence sketched below):
Add to /etc/ssh/sshd_config (as sudo)
UsePAM yes
Then check the limits
ulimit -l
266144
Then it works!
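Putting the ssh-specific part together, a minimal sketch of the sequence (assuming systemd; the service name may be `ssh` on Debian/Ubuntu rather than `sshd`, and the `/etc/security/limits.conf` entry from above is assumed to already be in place):

```sh
sudoedit /etc/ssh/sshd_config     # set: UsePAM yes
sudo systemctl restart sshd       # apply the sshd_config change
# Log out and ssh back in so PAM applies the limits.conf settings, then verify:
ulimit -l
```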
Feature request
Feature description
Maybe we should add the additional step above to README.md for other developers who use ssh to log in 😄
In addition to using stress-ng to evaluate the entire system, we can also use perf to monitor the CPU usage of an individual minimal_memory_lock process:

perf stat -e cpu-clock,context-switches,branches,branch-misses,cache-references,cache-misses,instructions,cycles -D 30 -p $pid
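For example (assuming the node is already running and that pgrep is available to resolve $pid):

```sh
pid=$(pgrep -f minimal_memory_lock)   # resolve the PID of the running example
perf stat -e cpu-clock,context-switches,branches,branch-misses,cache-references,cache-misses,instructions,cycles -D 30 -p "$pid"
```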