Scenario is to create tgz archives on a remote server (`tar czf - directory`, i.e. writing the archive to stdout).
Currently, capturing the output is done in memory, which is not feasible for large output. The implementation could be specific to files (like `capture %Q{tar czf - directory}, into_file: 'bigarchive.tgz'`) or, preferably, more general (`capture %Q{}, into_io: File.open(...)`).
With a bit of head-wrapping, this could also be modelled as `download_cmd_out tar_cmd, to: 'stdout.file'`.
Note that while I refer to tar commands, this might be useful for other commands that produce a lot of data, too.
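A minimal sketch of the chunked-streaming idea behind `into_io:` (the `capture_into_io` name and the use of a local command are assumptions for illustration, not existing SSHKit API):

```ruby
# Hypothetical sketch: read a command's stdout in fixed-size chunks and
# write each chunk straight into the given IO, so the full output is
# never held in memory at once. In the real scenario `cmd` would be the
# remote tar invocation run over SSH.
def capture_into_io(cmd, io, chunk_size: 64 * 1024)
  IO.popen(cmd) do |out|
    while (chunk = out.read(chunk_size))
      io.write(chunk)
    end
  end
  io
end
```

An `into_file:` variant could then simply open the target file in binary mode and delegate to the same loop.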
The eventual solution would look like `ssh user@host 'cd /path/ && tar -cf - file | gzip -9' > file.tgz` (or, rather, a file descriptor instead of `file.tgz`). The first part of that (the ssh invocation) is already nicely wrapped in the DSL; right now I only see support for it when using a custom ssh command.
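The redirection part of that pipeline could be a thin wrapper around `Process.spawn`/`system`'s `:out` option, which redirects stdout at the OS level (the `stream_to_file` helper name is made up for illustration):

```ruby
# Hypothetical sketch: run a command with its stdout redirected directly
# into a local file by the OS, so no data passes through Ruby buffers.
# In the scenario above, `cmd` would be something like:
#   ['ssh', 'user@host', "cd /path/ && tar -cf - file | gzip -9"]
def stream_to_file(cmd, local_path)
  File.open(local_path, 'wb') do |f|
    system(*cmd, out: f) or raise "command failed: #{cmd.join(' ')}"
  end
  local_path
end
```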