
optimized memcpy #18912

Open · wants to merge 31 commits into master
Conversation

@dweiller (Contributor) commented Feb 13, 2024

This PR does a few things to improve memcpy performance:

  • branch for small copy lengths
  • when source and destination have the same address modulo suggested vector size, long copies are done via vectors suggested by std.simd.suggestVectorLength(u8)
  • when source and destination have different addresses modulo suggested vector size, the load is done from an aligned vector and the store to an align(1) vector

It is also possible to use aligned vector loads and stores in the misaligned case, but I have not found a nice way to make the compiler do it (for x86_64) and would rather not do it with inline assembly. The simplest way to do it (using @shuffle) does improve performance further, but it causes massive bloat in the memcpy function (producing >12x more code than without it on my AVX2 machine) due to the need for a comptime mask.
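
A minimal sketch of the bulk-copy strategy described above (illustrative only, not the exact code in this PR): once src has been advanced to a vector-aligned address, the inner loop does aligned vector loads and align(1) vector stores.

const std = @import("std");

const V = std.simd.suggestVectorLength(u8) orelse @sizeOf(usize);
const Vec = @Vector(V, u8);

// Copy `n_vecs` whole vectors; `src` is assumed to already be vector-aligned,
// while `dest` may have any alignment.
fn copyBulk(noalias dest: [*]u8, noalias src: [*]align(@alignOf(Vec)) const u8, n_vecs: usize) void {
    const src_vecs: [*]const Vec = @ptrCast(src);
    for (0..n_vecs) |i| {
        const d: *align(1) Vec = @ptrCast(dest[i * V ..][0..V]);
        d.* = src_vecs[i]; // aligned vector load, align(1) vector store
    }
}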

Here is a graph of the performance of master (commit f845fa0) and 8688b0a in a microbenchmark. The benchmark runs @memcpy for a specific length and a specific source and destination alignment (mod 32) 100 000 times in a loop; this is done for all 256 combinations of source and destination alignment, and the average time per iteration across all alignment combinations is the reported result.

[hot-loop benchmark graph]

Note that this benchmark is not going to be particularly realistic for most circumstances as branch predictors will be perfectly trained.

In general we want to focus on optimizing small-length copies, as small lengths are the most frequent; some reported distributions can be found here and in this Google paper (LLVM uses these for benchmarking). For small lengths, the above graph indicates between a modest and a significant (up to around 3x) improvement, depending on the length.

Here is a graph showing performance across the distributions from the paper linked above (taken from the LLVM repo).

[distributions benchmark graph]

For code size of memcpy:

| Optimize mode | master f845fa0 (B) | 8688b0a (B) |
|---------------|--------------------|-------------|
| ReleaseFast   | 251                | 275         |
| ReleaseSmall  | 32                 | 27          |

I have only checked the performance of this change on my (x86_64, avx2, znver1) machine—it would be good to check on other architectures as well if someone with one would like to test it. The benchmarks used can be found here—the average and distrib ones were used to produce the above charts (check out the tools directory for data generation scripts).

A few other notes:

  • the copy loops are not manually unrolled (and LLVM doesn't unroll them)
  • testing manual unrolling on my machine has not indicated a significant difference in the distribution benchmark, but does show regressions in the hot loop benchmark
  • I have not benchmarked ReleaseSmall performance
  • For x86_64 machines that have ERMSB it is likely best to use rep movsb for copies over several hundred bytes (I don't have a suitable machine to test this)
  • For x86_64 machines that have FSRM it may be better to use rep movsb for all/most cases (I don't have a suitable machine to test this)
  • When taking advantage of ERMSB and FSRM there would be the option of runtime detection—detection should have negligible performance impact as the branch should always be predicted correctly after the first few calls

Other stuff that can be done (either before merging or as follow up issues):

  • check performance on other architectures and CPUs (would need other people to run benchmarks on their machines, or put some new benchmarks in gotta-go-fast)
  • see if there is a reasonable way to do aligned vector moves for the misaligned case (pretty sure this can't be done without @select taking a runtime mask or using inline assembly)
  • benchmark copies with random sizes

@Rexicon226 (Contributor)

Nice work! Could you share the benchmark script? I am really curious to see how those aligned loads/stores are emitted.

@dweiller (Contributor, Author)

> Nice work! Could you share the benchmark script? I am really curious to see how those aligned loads/stores are emitted.

The benchmark script is just a pretty simple loop copying from one buffer to the other a bunch of times for each source alignment offset. I haven't done anything clever in the micro benchmark. The reason aligned vector moves are emitted is the use of @ptrCast to get pointers of type [*]CopyType/[*]const CopyType in lib/compiler_rt/memcpy.zig.

Here's the micro benchmark (doesn't look like I can colour it inside the fold):
pub fn main() !void {
    const allocator = std.heap.page_allocator;
    const args = try std.process.argsAlloc(allocator);
    defer allocator.free(args);
    if (args.len > 3 or args.len < 2) return error.BadArgs;

    const iterations = try std.fmt.parseInt(usize, args[1], 10);

    const copy_len = if (args.len >= 3)
        try std.fmt.parseInt(usize, args[2], 10)
    else
        std.mem.page_size;

    std.debug.print("copying blocks of size {d}\n", .{std.fmt.fmtIntSizeBin(copy_len)});

    const alignment = if (std.simd.suggestVectorLength(u8)) |vec_size|
        @alignOf(@Type(.{ .Vector = .{ .child = u8, .len = vec_size } }))
    else
        @alignOf(usize);

    var times: [alignment]u64 = undefined;

    // preheat
    try iteration(allocator, alignment, copy_len, alignment - 2, &times[0], iterations);

    for (0..alignment) |s_offset| {
        try iteration(allocator, alignment, copy_len, s_offset, &times[s_offset], iterations);
        std.debug.print(
            "s_offset: {d}, time: {}\n",
            .{
                s_offset,
                std.fmt.fmtDuration(times[s_offset]),
            },
        );
    }

    var avg: u128 = 0;
    for (times) |t| {
        avg += t;
    }
    avg /= times.len;

    const final: u64 = @intCast(avg);

    const throughput: u64 = @intCast((@as(u128, copy_len) * iterations * std.time.ns_per_s) / final);

    std.debug.print("average time: {}, throughput: {d:.2}\n", .{
        std.fmt.fmtDuration(final),
        std.fmt.fmtIntSizeBin(throughput),
    });
}

inline fn iteration(
    allocator: Allocator,
    comptime alignment: comptime_int,
    copy_len: usize,
    s_offset: usize,
    p_time: *u64,
    iterations: usize,
) !void {
    const src = try allocator.alignedAlloc(u8, alignment, copy_len + s_offset);
    defer allocator.free(src);

    const dest = try allocator.alignedAlloc(u8, alignment, copy_len);
    defer allocator.free(dest);

    var timer = try std.time.Timer.start();

    var i: usize = 0;
    while (i < iterations) : (i += 1) {
        @memcpy(dest[0..copy_len], src[s_offset..][0..copy_len]);
    }

    const time = timer.read();

    p_time.* = time;
}

const std = @import("std");
const Allocator = std.mem.Allocator;

@Rexicon226 (Contributor)

Interesting. I have attempted to reproduce this and I see 2, maybe 3 issues.

  1. This generates only like 1 pair of aligned loads, when you could be fitting in way more. See: https://zig.godbolt.org/z/YxEPvssMa

  2. I simply cannot replicate your results using the benchmark script you provided. On my AVX2 machine, I see a consistent 30% worse perf on your function. I did these measurements by simply compiling a version of LLVM Zig with both compiler_rt versions. Furthermore, I wrote my own benchmark script, and also cannot replicate it. However, there I was measuring how many cycles it takes to copy X amount of bytes. Very comparable results to the benchmark script you provided.
     Note: all data collected was normalized and the mean was found with a rolling average of 50 data points over a 10 KB total range.

  3. If you remember our discussion about this before, my personal memcpy function was faster, and that still seems to be the case. Please do not interpret the previous sentence as criticism or as containing any ill intent; I am simply stating the facts I observe. I believe it is because my version contains more aligned vector loads squeezed in and does a better job at managing pre-fetching.

I recommend you use something like:

dest[0..v].* = @as(*const @Vector(v, u8), @alignCast(@ptrCast(aligned_src[offset..][0..v]))).*;

to generate your aligned vector loads, 😉

@dweiller (Contributor, Author) commented Feb 13, 2024

> 1. This generates only like 1 pair of aligned loads, when you could be fitting in way more. See: https://zig.godbolt.org/z/YxEPvssMa

> 3. ... I believe it is because my version contains more aligned vector loads squeezed in and does a better job at managing pre-fetching.

I am not convinced this is the case. It is trivial to produce a longer sequence of aligned move operations by unrolling the loop - though this does necessitate extra cleanup code - and when I did this in the past it did not have much impact on performance. Unless I get benchmark results clearly showing a win that seems generalizable I wouldn't want to do it; forcing this sort of unrolling seems more likely to trip up the optimizer and may only positively impact performance on the machine you run the benchmark on while degrading performance on other machines.

> 2. I simply cannot replicate your results using the benchmark script you provided.

If you let me know precisely how you compiled/ran it I will try on my machine as well. Particular things of interest would be how many iterations you did, the copy length, target cpu features, and optimize mode.

> 2. I see a consistent 30% worse perf on your function. I did these measurements by simply compiling a version of LLVM Zig with both compiler_rt versions.

Can you be more specific about what you did when you saw worse performance relative to master so I can investigate? I have also used compiling the zig compiler itself as a test and saw a minor (not really sure it was outside uncertainty) improvement in compile time.

> 2. Furthermore, I wrote my own benchmark script, and also cannot replicate it. However, there I was measuring how many cycles it takes to copy X amount of bytes. Very comparable results to the benchmark script you provided.

Can you post your benchmark? I'm not sure what you mean by "...and also cannot replicate it" along with "Very comparable results to the benchmark script you provided." - those read as contradictory statements to me, so I must be misinterpreting something.

@dweiller (Contributor, Author)

Looks like this somehow broke a bunch of non-x86_64 targets in the linux CI. I wonder if on those targets LLVM is making memcpy produce a call to itself...

@dweiller (Contributor, Author) commented Feb 13, 2024

> I recommend you use something like:
>
> dest[0..v].* = @as(*const @Vector(v, u8), @alignCast(@ptrCast(aligned_src[offset..][0..v]))).*;
>
> to generate your aligned vector loads, 😉

I assume you mean @as(*@Vector(v, u8), @alignCast(@ptrCast(dest[0..v]))).* = ...? Otherwise that is going to produce an unaligned store. Using this particular snippet to perform the cast (casting the dest as well) doesn't make a difference.

@dweiller (Contributor, Author) commented Feb 13, 2024

> I wonder if on those targets LLVM is making memcpy produce a call to itself...

It looks like this is the issue, at least for wasm32-wasi. I checked a test.wasm file produced by zig build test with wasm2wat and memcpy was indeed making a recursive call to itself. It's going to be pretty tedious if we have to either rewrite memcpy for each target according to whether LLVM does this, or find a way to write it so that it avoids this on all targets simultaneously.

If anyone knows of a way to trace the produced wasm code back to source lines, similar to objdump --disassemble -S, that would be helpful for working around this.

@Jarred-Sumner (Contributor) commented Feb 13, 2024

I suspect Zig's memcpy could get a whole lot faster than 30%-ish. I think it's hard to rule out measurement noise without also having a complete benchmark snippet for others to run, and on larger amounts of data. There must already be comprehensive memcpy benchmarks online somewhere you could copy from.

@dweiller (Contributor, Author)

> I suspect Zig's memcpy could get a whole lot faster than 30%-ish.

For small sizes this is certainly true, but for large copies I'm a bit skeptical, at least without using inline asm with something like rep movsb, and that opens up a bit of a can of worms.

@dweiller (Contributor, Author) commented Feb 14, 2024

I've made a bunch of improvements yielding both better performance and smaller code size. The table in the top post has been updated.

Edit: Not sure what the problem with riscv64-linux is in the CI - I've disassembled it and memcpy is not making a recursive call.

@dweiller (Contributor, Author) commented Feb 15, 2024

> Edit: Not sure what the problem with riscv64-linux is in the CI - I've disassembled it and memcpy is not making a recursive call.

This is wrong: recursive jalr instructions are generated in the case 16 <= copy_len < 32, when trying to do a copy of length 16.

@dweiller force-pushed the memcpy-opt branch 3 times, most recently from 81a4c1c to 076afc6 on February 19, 2024 03:08
@dweiller (Contributor, Author) commented Feb 19, 2024

The Windows CI failure is `CMake Error: CMake was unable to find a build program corresponding to "Ninja". CMAKE_MAKE_PROGRAM is not set. You probably need to select a different build tool.` Not sure why that is happening.

@RossComputerGuy (Contributor)

@dweiller I just ran into that in a different PR as well, seems to be a fluke.

@dweiller (Contributor, Author) commented Mar 15, 2024

The Windows CI failure seems to be the compiler running out of memory - not sure why this would happen; it shouldn't be caused by this PR.

@kprotty (Member) commented May 10, 2024

A decent amount of the code could be simplified using comptime into something like:

// shared (dst, src)

copy(blk, offsets):
  for i in offsets: 
    dst[i..][0..blk].* = src[i..][0..blk].*

memcpy(len):
  if len == 0: return
  if len < 4: return copy(1, {0, len/2, len-1})

  v = max(4, suggestVectorSize(u8) or 0)
  inline for n in ctz(4)..ctz(v):
    if len <= 1 << (n+1): // @expect(false)
      return copy(1<<n, {0, len - (1<<n)})
      
  for i in 0..(len / v): copy(v, .{ i * v }) // 4-way auto-vec
  copy(v, .{ len - v })
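
A rough Zig rendering of the copy helper above (illustrative names only; since the offsets tuple is unrolled at comptime, each entry becomes one fixed-size, possibly overlapping block copy):

const std = @import("std");

inline fn copyBlocks(
    comptime blk: usize,
    noalias dest: [*]u8,
    noalias src: [*]const u8,
    offsets: anytype,
) void {
    inline for (offsets) |off| {
        dest[off..][0..blk].* = src[off..][0..blk].*;
    }
}

// e.g. the `len < 4` case from the pseudocode above:
fn copyUpTo3(noalias dest: [*]u8, noalias src: [*]const u8, len: usize) void {
    std.debug.assert(len > 0 and len < 4);
    copyBlocks(1, dest, src, .{ 0, len / 2, len - 1 });
}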

But I think (at a higher level) we should go all the way: Facebook's folly memcpy is heavily optimized but also serves as their memmove since it's so similar. For sizes which can't be handled by a single copy above (anything after the inline for), they detect if src/dst overlap and switch to something similar but copied backwards.

Interestingly, they never use aligned loads since it doesn't seem worth it. Copy-forward does both the load and store unaligned, but copy-backward uses aligned stores with @prefetch(dst_to_write, .{ .rw = .write, .locality = 3 }). They also switch to rep movsb (it's like a builtin CPU memcpy) if the conditions align for it to be fast (len >= 1024, pointers don't overlap, ERMSB cpuid enabled).

The general strategy of having memcpy & memmove be the same thing would be nice (perf- and maintenance-wise). Fast memset is also the same as fast memcpy, but with the loads of src replaced by a pre-@splat block/vector/register of the same byte.
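
The memset analogy in code, roughly (illustrative only, assuming the same store pattern as the copy loop): the "source" becomes a single splatted vector.

const std = @import("std");

fn memsetVecs(dest: [*]u8, byte: u8, n_vecs: usize) void {
    const V = comptime std.simd.suggestVectorLength(u8) orelse @sizeOf(usize);
    const fill: @Vector(V, u8) = @splat(byte);
    for (0..n_vecs) |i| {
        // same align(1) vector store as the copy loop, but the load is replaced by `fill`
        @as(*align(1) @Vector(V, u8), @ptrCast(dest[i * V ..][0..V])).* = fill;
    }
}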

@dweiller (Contributor, Author) commented May 10, 2024

> A decent amount of the code could be simplified using comptime into something like:

This looks pretty nice, I'll try it out.

> But I think (at a higher level) we should go all the way: Facebook's folly memcpy is heavily optimized but also serves as their memmove since it's so similar. For sizes which can't be handled by a single copy above (anything after the inline for), they detect if src/dst overlap and switch to something similar but copied backwards.

Our memcpy assumes that src/dst don't overlap, so a change like that would be a (non-breaking, but significant) change in semantics that would affect performance.

> Interestingly, they never use aligned loads since it doesn't seem worth it. Copy-forward does both the load and store unaligned, but copy-backward uses aligned stores with @prefetch(dst_to_write, .{ .rw = .write, .locality = 3 }).

On my machine, IIRC, the aligned ops did affect performance, but this would also be machine-dependent. The current wisdom seems to be that modern x86_64 doesn't really care (though I haven't seen when this started being the case), but what about other platforms? At least for a general algorithm, I would think we should use aligned ops, and if/when we want to individually optimise different platforms we could use unaligned ops. I did also try unrolling with prefetch, but didn't want to over-optimise for my machine - I can't recall how much difference it made for me.

> They also switch to rep movsb (it's like a builtin CPU memcpy) if the conditions align for it to be fast (len >= 1024, pointers don't overlap, ERMSB cpuid enabled).

I did some research into rep movsb and read that the benefit and the break-even point for when it's beneficial are highly machine-dependent, so I thought I'd leave using it for a future enhancement.

> Fast memset is also the same as fast memcpy, but with the loads of src replaced by a pre-@splat block/vector/register of the same byte.

I do actually have a memset branch locally as well - I could include it in this PR, but I haven't put as much work into it yet.

@kprotty (Member) commented May 10, 2024

> Our memcpy assumes that src/dst don't overlap, so a change like that would be a (non-breaking, but significant) change in semantics that would affect performance.

I rather mean to keep memcpy, the function with noalias and such, but have it just call memmove internally. This should keep the noalias optimizations the compiler applies around memcpy, while keeping the vector-based / branch-optimized version at runtime.
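
A sketch of that layering (hypothetical code, not the actual compiler_rt implementation): the exported memcpy keeps its noalias signature, so the compiler can still assume no overlap at call sites, while the runtime work is done by an overlap-tolerant memmove.

fn memcpy(noalias dest: ?[*]u8, noalias src: ?[*]const u8, len: usize) callconv(.C) ?[*]u8 {
    return memmove(dest, src, len);
}

fn memmove(dest: ?[*]u8, src: ?[*]const u8, len: usize) callconv(.C) ?[*]u8 {
    if (len == 0) return dest;
    const d = dest.?;
    const s = src.?;
    if (@intFromPtr(d) <= @intFromPtr(s)) {
        // copy forward (a real implementation would use the vectorized path here)
        for (0..len) |i| d[i] = s[i];
    } else {
        // copy backward so overlapping regions are handled correctly
        var i = len;
        while (i > 0) {
            i -= 1;
            d[i] = s[i];
        }
    }
    return dest;
}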

> On my machine, IIRC, the aligned ops did affect performance

Was this in relation to the results above? I think the benchmark could report throughput in cycles/byte rather than ns. This is something used in benchmarks like smhasher; hash functions do similar reading/branching opts and take a small amount of time for short sizes. Combined with not clflushing dst/src and not pinning to a CPU core, I'd assume ns-based timing fluctuates more even with high iterations.

I only mention this as aligned loads didn't seem to have an effect in other benchmarks, so I'm trying to somehow discover/rationalize the difference in the results. TBF, I also haven't tested on anything outside avx2, avx512f, and apple_m1.

> rep movsb and read that the benefit and the break-even point for when it's beneficial are highly machine-dependent

Indeed looks like it depends on micro-architecture specifically (being finicky on zen2+). Seemed to be the same speed as the normal vectorized loop, at least on a 5600X and 6900HS.

> I do actually have a memset branch locally as well - I could include it in this PR

Yea a separate PR would be best. Just wanted to mention their similarity.

@dweiller (Contributor, Author) commented May 10, 2024

> I rather mean to keep memcpy, the function with noalias and such, but have it just call memmove internally. This should keep the noalias optimizations the compiler applies around memcpy, while keeping the vector-based / branch-optimized version at runtime.

Ah, okay - I understand now.

> On my machine, IIRC, the aligned ops did affect performance

> Was this in relation to the results above? I think the benchmark could report throughput in cycles/byte rather than ns. This is something used in benchmarks like smhasher; hash functions do similar reading/branching opts and take a small amount of time for short sizes. Combined with not clflushing dst/src and not pinning to a CPU core, I'd assume ns-based timing fluctuates more even with high iterations.

I think the benchmark has changed since I last tested (and the memcpy function has quite a bit too) - I'll re-check. Unless I'm missing something, I would lean toward keeping the aligned strategy until/unless we diverge the implementation based on whether a target has slow unaligned access, because I expect the impact on systems that don't care to be minimal (it costs one branch and one extra vector copy, and only on large copies, which are rare, plus the effect of the increased code size), while systems that do have slower unaligned accesses would pay that cost in proportion to the size of a long copy.

I can have the benchmark report cycles/byte, and do things like core-pinning and disabling frequency scaling next time I work on improving the benchmarking, but I think adding other benchmarks will probably take priority.

> I only mention this as aligned loads didn't seem to have an effect in other benchmarks, so I'm trying to somehow discover/rationalize the difference in the results. TBF, I also haven't tested on anything outside avx2, avx512f, and apple_m1.

One reason could be the architecture - looking at Agner Fog's tables, my machine (Zen 1) has worse latencies/throughputs on unaligned instructions (at least for integer ones; I can't remember at the moment whether integer or float instructions were generated).

@andrewrk (Member)

> Before merging it would be good to:

It looks like this is not ready for merging then? Can you close it and open a new one when it's ready for review & merging? Otherwise we can use the issue tracker to discuss ideas, plans, strategies, etc.

@dweiller (Contributor, Author) commented May 11, 2024

> Before merging it would be good to:
>
> It looks like this is not ready for merging then? Can you close it and open a new one when it's ready for review & merging? Otherwise we can use the issue tracker to discuss ideas, plans, strategies, etc.

I've just done some cleanup of commits and assuming it passes CI, I'm happy for it to be merged. The unchecked boxes are all things that are potential future work and could be broken out into new issues.

I would say the real question is what level of effort you want to see put into benchmarking before merging. I can certainly do more on my own - write more synthetic benchmarks, do some proper statistics, check some real-world impacts more carefully (I did see a small benefit in the Zig compiler using the C backend, but that was a while ago and I don't remember the details) - but I think any serious benchmarking effort would require help from others (in particular to check on more machines/architectures) and may be better done as part of gotta-go-fast.

@dweiller (Contributor, Author)

Looking at the table above again - would it be better to not include the optimisations done in this PR on ReleaseSmall? I'll double-check the numbers are still accurate since the LLVM 18 upgrade, but assuming they are, is that too many extra bytes for ReleaseSmall?

Commit messages from the branch:

before this commit, when compiling for target riscv64-linux-none and with `-mcpu baseline`, recursive jalr instructions were generated that caused infinite looping and a stack overflow for copy lengths between 16 and 31 (inclusive).

This was caused by LLVM optimisations - in particular, LLVM has an optimisation that can turn copies of a sufficient length into a call to memcpy. This change breaks up some copies in order to try and prevent this optimisation from occurring.
On x86_64+avx2 this replaces the instructions (`add`, `cmp`, `ja`) with (`dec`, `jne`) in the hot loops.
This change does branchless copies of lengths [a, 4 * a) rather than [a, 2 * a) in order to reduce the number of branches for small lengths.
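
Roughly what "breaks up some copies" in the commit message above means (an illustrative sketch, not the exact PR code): handling 16 <= len < 32 with four overlapping 8-byte block copies leaves no single 16-byte copy for LLVM's idiom recognition to turn back into a memcpy call.

const std = @import("std");

fn copy16to31(noalias dest: [*]u8, noalias src: [*]const u8, len: usize) void {
    std.debug.assert(len >= 16 and len < 32);
    dest[0..8].* = src[0..8].*;
    dest[8..16].* = src[8..16].*;
    dest[len - 16 ..][0..8].* = src[len - 16 ..][0..8].*;
    dest[len - 8 ..][0..8].* = src[len - 8 ..][0..8].*;
}
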
@dweiller (Contributor, Author)

> But I think (at a higher level) we should go all the way: Facebook's folly memcpy is heavily optimized but also serves as their memmove since it's so similar. For sizes which can't be handled by a single copy above (anything after the inline for), they detect if src/dst overlap and switch to something similar but copied backwards.

Folly seems to do this in order to not break legacy code that (erroneously) depends on memcpy working for aliasing pointers. As far as any Zig code goes, I don't think it makes sense to do this, as @memcpy should have appropriate safety checks in safe builds.

Does Zig's compiler-rt implementation of memcpy ever get called from C code? If not, I think there's no reason to follow Folly by making memcpy a memmove. If it does, then it could be considered, but neither LLVM nor GCC does it.

@dweiller (Contributor, Author) commented Nov 21, 2024

> rep movsb and read that the benefit and the break-even point for when it's beneficial are highly machine-dependent

> Indeed looks like it depends on micro-architecture specifically (being finicky on zen2+). Seemed to be the same speed as the normal vectorized loop, at least on a 5600X and 6900HS.

There are two CPUID feature flags that may be a good indicator of when rep movsb is worth using: ERMSB (enhanced rep movsb) and FSRM (fast short rep mov). Unfortunately my CPU has neither, so I cannot do any performance testing for them, but if someone with a CPU with ERMSB/FSRM wants to test, you can try my memcpy-rep-movsb branch. On Linux you can run lscpu and check the flags output for your CPU.
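
For reference, a quick way to check what the Zig build target reports (a sketch assuming an x86_64 target and that ermsb/fsrm are the feature names in std.Target.x86.Feature; this is a comptime check against the build target, so a generic binary would still need runtime cpuid detection):

const std = @import("std");
const builtin = @import("builtin");

pub fn main() void {
    // Comptime queries against the target CPU's feature set.
    const ermsb = comptime std.Target.x86.featureSetHas(builtin.cpu.features, .ermsb);
    const fsrm = comptime std.Target.x86.featureSetHas(builtin.cpu.features, .fsrm);
    std.debug.print("ermsb={}, fsrm={}\n", .{ ermsb, fsrm });
}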
