Use 128 bit fat pointers for continuation objects #186
Conversation
Some benchmarking results:
I now compare the performance impact of enabling vs disabling the linearity check when using this PR (i.e., whether or not the `unsafe_disable_continuation_linearity_check` feature is enabled).
I am confused about the removal of the `cont_twice.wast` test.
.github/workflows/main.yml
```yaml
# Crude check for whether
# `unsafe_disable_continuation_linearity_check` makes the test
# `cont_twice` fail.
- run: |
    (cargo test --features=unsafe_disable_continuation_linearity_check --test wast -- --exact Cranelift/tests/misc_testsuite/typed-continuations/cont_twice.wast && test $? -eq 101) || test $? -eq 101
```
Why are you deleting this test? It should still fail.
See comment on wast.rs
```
// continuation reference and the revision count.
// If `unsafe_disable_continuation_linearity_check` is enabled, the revision value is arbitrary.
// To denote the continuation being `None`, `init_contref` may be 0.
table_grow_cont_obj(vmctx: vmctx, table: i32, delta: i32, init_contref: pointer, init_revision: i64) -> i32;
```
I guess it would be nice to extend the interface of the libcalls API to support 128-bit wide values.
There isn't a particularly nice way of doing that. We would effectively have to do the splitting into two `i64` values at the libcall translation layer, and thus the implementation of the libcall in libcalls.rs would still receive two parameters. This only gets uglier when you then incorporate the switching between safe and unsafe mode. Given that we only have two libcalls actually taking these kinds of values, I'd rather avoid all of that.
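For illustration, a minimal sketch of what the two-parameter scheme means on the Rust side; the type definitions and helper name below are simplified stand-ins for this sketch, not the actual libcalls.rs code:

```rust
// Simplified sketch: inside a libcall such as `table_grow_cont_obj`, the
// 128-bit continuation object arrives as two separate machine-word arguments
// and has to be reassembled by hand.
struct VMContRef; // stand-in for the real continuation reference type

struct VMContObj {
    contref: *mut VMContRef,
    revision: u64,
}

// Hypothetical helper illustrating the reassembly step.
fn contobj_from_parts(init_contref: *mut VMContRef, init_revision: u64) -> Option<VMContObj> {
    // A null `init_contref` denotes the `None` continuation, matching the
    // comment on the `table_grow_cont_obj` declaration above.
    if init_contref.is_null() {
        None
    } else {
        Some(VMContObj { contref: init_contref, revision: init_revision })
    }
}
```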
I am not suggesting doing it now. But I think it will be simpler than you think. We should be able to map a hypothetical `i128` to Rust's `u128`, just as `i32` maps to `u32`, etc.
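A sketch of what that hypothetical mapping might allow; an `i128`-capable libcall interface does not exist today, and the bit layout chosen below is an arbitrary assumption for illustration:

```rust
// Hypothetical: if the libcall interface supported `i128`, the host function
// could receive the packed continuation object as a single `u128` and unpack
// it itself. Layout assumed here: pointer in the low 64 bits, revision in the
// high 64 bits.
fn unpack_contobj(packed: u128) -> (*mut u8, u64) {
    let contref = packed as u64 as *mut u8; // low 64 bits: pointer to the continuation
    let revision = (packed >> 64) as u64;   // high 64 bits: revision counter
    (contref, revision)
}

fn main() {
    // Tiny usage example with a dummy packed value.
    let packed: u128 = (42u128 << 64) | 0x1000;
    let (contref, revision) = unpack_contobj(packed);
    assert_eq!(contref as usize, 0x1000);
    assert_eq!(revision, 42);
}
```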
tests/wast.rs
```rust
// This test specifically checks that we catch a continuation being
// resumed twice, which we cannot detect in this mode.
if test.ends_with("cont_twice.wast") {
    return true;
}
```
Surely this is only true when `unsafe_disable_continuation_linearity_check` is toggled?
Yes, that check should read `test.ends_with("cont_twice.wast") && cfg!(feature = "unsafe_disable_continuation_linearity_check")`.

My intention is to make sure that the test suite passes normally with this feature enabled, thus disabling this particular test in its presence. Given that, I had to remove the check regarding `cont_twice.wast` from `main.yml`. Or are you particularly interested in ensuring that the test does indeed fail if `unsafe_disable_continuation_linearity_check` is enabled? That's reasonable (the test case will just not trap with the message expected in that case), but beyond that I would consider the behaviour of that program to be undefined.
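For concreteness, a sketch of the combined skip condition; the surrounding helper name is made up here, only the condition itself is taken from the discussion above:

```rust
use std::path::Path;

// Hypothetical helper mirroring the skip logic in tests/wast.rs.
fn should_skip(test: &Path) -> bool {
    // Skip cont_twice.wast only when the linearity check is compiled out,
    // since resuming a continuation twice cannot be detected in that mode.
    test.ends_with("cont_twice.wast")
        && cfg!(feature = "unsafe_disable_continuation_linearity_check")
}
```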
I want to make sure the test fails when `unsafe_disable_continuation_linearity_check` is toggled.
```rust
let overflow = builder
    .ins()
    .icmp_imm(IntCC::UnsignedLessThan, revision_plus1, 1 << 16);
builder.ins().trapz(overflow, ir::TrapCode::IntegerOverflow); // TODO(dhil): Consider introducing a designated trap code.
```
I'd like to preserve these traps in debug mode. I think that can prove useful.
And what should they check?
They should check whether the revision counter has wrapped around.
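A minimal sketch of such a check, assuming the revision counter is now 64 bits wide and reusing the names from the snippet quoted above (`builder`, `revision_plus1`, `IntCC`, `ir`); gating on `debug_assertions` and reusing the `IntegerOverflow` trap code are assumptions, not decisions taken in this PR:

```rust
// Emit the wrap-around check only in debug builds. With a 64-bit revision
// counter, `revision_plus1` is 0 exactly when the old revision was u64::MAX,
// i.e. when the counter wrapped.
if cfg!(debug_assertions) {
    let wrapped = builder.ins().icmp_imm(IntCC::Equal, revision_plus1, 0);
    // Reuse IntegerOverflow until a designated trap code exists
    // (see the TODO above).
    builder.ins().trapnz(wrapped, ir::TrapCode::IntegerOverflow);
}
```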
I noticed that there is an issue when continuation tables are allocated in a `TablePool`.
What's the problem/error?
The `TablePool` allocates all tables from a single mmapped memory region, and it sizes that region under the (hardcoded) assumption that all table entries across all table types are pointer-sized (i.e., `size_of_each_table_entry` is `sizeof(*mut u8)`). With this PR, continuation table entries become 16 bytes wide, so that assumption no longer holds. I will address this as follows: size the pool region using the maximum element size across all table types, and take the actual element size of each table type into account when committing and decommitting memory.
In summary, these changes mean that while the table pool occupies more virtual address space, the amount of actually committed pages for non-continuation tables does not change. There are some other solutions, which seem less preferable:
This PR provides a prerequisite for #186, by implementing a solution for a problem originally described [here](#186 (comment)). To reiterate, the problem is as follows:

For "static" tables (= tables managed by a `TablePool`), the `TablePool` manages a single mmapped memory region from which it allocates all tables. To this end, it calculates the required overall size of this region as `max_number_of_allowed_tables` * `max_allowed_element_count_per_table` * `size_of_each_table_entry`. Thus, the memory for the table with index i in the pool starts at i * `max_allowed_element_count_per_table` * `size_of_each_table_entry`. However, all of this is based on the (hardcoded) assumption that all table entries across all table types are pointer-sized (i.e., `size_of_each_table_entry` is `sizeof(*mut u8)`). But once #186 lands, this is not the case any more.

This PR addresses this as follows:

1. Change the calculation of the overall size of the mmapped region to `max_number_of_allowed_tables` * `max_allowed_element_count_per_table` * `max_size_of_each_table_entry`, where `max_size_of_each_table_entry` will be `sizeof(VMContObj)` == 16 once #186 lands. This effectively doubles the amount of address space occupied by the table pool. The calculation of the start address of each table is changed accordingly.
2. Change the logic for allocating and deallocating tables from the pool so that we take the element size for that particular table type into account when committing and decommitting memory.

Note that the logic implemented in this PR is independent of the underlying element sizes. This means that this PR itself does not change the space occupied by the tables, as `max_size_of_each_table_entry` is currently still the size of a pointer. The necessary changes happen implicitly once #186 lands, which changes the size of `ContTableElem`, which in turn changes the constant `MAX_TABLE_ELEM_SIZE`.

In summary, these changes mean that in the future the table pool occupies more virtual address space, but the amount of actually committed pages for non-continuation tables does not change (a sketch of the sizing arithmetic follows below).
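For illustration, the sizing and offset arithmetic described above boils down to something like the following sketch; the constant names and values are placeholders, not the actual `TablePool` configuration:

```rust
// Placeholder limits; in the real pooling allocator these come from the
// instance allocator configuration.
const MAX_TABLES: usize = 1_000;            // max_number_of_allowed_tables
const MAX_ELEMS_PER_TABLE: usize = 100_000; // max_allowed_element_count_per_table
const MAX_TABLE_ELEM_SIZE: usize = 16;      // sizeof(VMContObj) once #186 lands

/// Total virtual address space reserved for the table pool.
fn pool_mapping_size() -> usize {
    MAX_TABLES * MAX_ELEMS_PER_TABLE * MAX_TABLE_ELEM_SIZE
}

/// Byte offset of table `i` within the pool's mmapped region.
fn table_base_offset(i: usize) -> usize {
    i * MAX_ELEMS_PER_TABLE * MAX_TABLE_ELEM_SIZE
}

/// Bytes that actually get committed for one table: this depends on the
/// element size of that table's type (8 bytes for pointer-sized entries,
/// 16 bytes for continuation tables), not on the per-slot maximum.
fn committed_bytes(element_count: usize, element_size: usize) -> usize {
    element_count * element_size
}
```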
This should be good to go now.
LGTM.
```diff
@@ -692,7 +692,7 @@ jobs:
   # `unsafe_disable_continuation_linearity_check` makes the test
   # `cont_twice` fail.
   - run: |
-    (cargo test --features=unsafe_disable_continuation_linearity_check --test wast -- --exact Cranelift/tests/misc_testsuite/typed-continuations/cont_twice.wast && test $? -eq 101) || test $? -eq 101
+    (cargo run --features=unsafe_disable_continuation_linearity_check -- wast -W=exceptions,function-references,typed-continuations tests/misc_testsuite/typed-continuations/cont_twice.wast && test $? -eq 101) || test $? -eq 101
```
Why `cargo run` and not `cargo test`? I thought the `cargo test` artifact would already have been built (maybe the `run` artifact hasn't/has too?).
This particular test is now `#[ignore]`-d in the presence of `unsafe_disable_continuation_linearity_check`. Thus, you need to manually feed it into `wasmtime wast` to run it.

Edit: Sorry, it's not actually ignored using `#[ignore]`, but manually skipped by the logic in tests/wast.rs. But the result is the same.
Alternatively, the following works:
cargo test --features=unsafe_disable_continuation_linearity_check --test wast -- --include-ignored --exact Cranelift/tests/misc_testsuite/typed-continuations/cont_twice.wast
In terms of avoiding additional building, it shouldn't make a difference anyway. As far as I can tell, this is the only place where we actually build with `unsafe_disable_continuation_linearity_check`, meaning that it will cause a separate rebuild of most stuff anyway.
OK, I am fine with either. I was just curious. Thanks!
This PR changes the representation introduced in #182, where continuation objects were turned into tagged pointers containing a pointer to a `VMContRef` as well as a 16-bit sequence counter to perform linearity checks. In this PR, the representation is changed from 64-bit tagged pointers to 128-bit fat pointers, with 64 bits each for the pointer and the sequence counter.
Some implementation details:
- The overall approach of using `disassemble_contobj` and `assemble_contobj` to go from effectively `Optional<VMContObj>` to `Optional<VMContRef>` is preserved (a rough sketch follows after this list).
- The behaviour of `unsafe_disable_continuation_linearity_check` is preserved: If it is enabled, we do not use fat (or tagged) pointers at all, and all revision checks are disabled.
- We use the Cranelift type `I8X16` for any value of type `(ref $continuation)` and `(ref null $continuation)`. See the comment on `vm_contobj_type` in shared.rs for why we cannot use `I128` or `I64X2` instead.
- The `translate_*` functions in the `FuncEnvironment` trait now need to take a `FunctionBuilder` parameter instead of `FuncCursor`, which slightly increases the footprint of this PR.
- Support for `table.fill` on continuation tables was missing. I've added this and in the process extended `cont_table.wast` to be generally more exhaustive.
- Instead of reusing the generic table libcalls for `VMContObj`, I've introduced dedicated versions for the `VMContObj` case, namely `table_fill_cont_obj` and `table_grow_cont_obj` in libcalls.rs. These manually split the `VMContObj` into two parts.
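To make the new representation concrete, here is a rough sketch of the 128-bit fat pointer and the assemble/disassemble helpers; the field names, field order, and helper bodies are illustrative assumptions, not the actual definitions in the runtime crate:

```rust
use std::ptr::NonNull;

// Stand-in for the actual continuation reference type.
struct VMContRef;

// 128-bit fat pointer: one 64-bit pointer to the `VMContRef` plus a 64-bit
// revision counter used for the linearity check.
#[repr(C)]
#[derive(Clone, Copy)]
struct VMContObj {
    contref: NonNull<VMContRef>,
    revision: u64,
}

// Illustrative counterparts of `assemble_contobj` / `disassemble_contobj`:
// the `None` case corresponds to a null continuation reference.
fn assemble_contobj(contref: Option<NonNull<VMContRef>>, revision: u64) -> Option<VMContObj> {
    contref.map(|contref| VMContObj { contref, revision })
}

fn disassemble_contobj(obj: Option<VMContObj>) -> (Option<NonNull<VMContRef>>, u64) {
    match obj {
        Some(obj) => (Some(obj.contref), obj.revision),
        None => (None, 0),
    }
}
```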