rloadest has a limit of 176,000 rows for aggregation. Most aggregation intervals are short enough that, even when the whole dataset is huge, a subset of the data can cover each entire aggregation interval and be sent to rloadest (predLoad/predConc) separately from the rest of the data.
We currently chunk the data into chunks of just under 176,000 rows. We could probably choose the chunk boundaries more carefully, so that each chunk contains only whole aggregation intervals and more large datasets work without error. Code to modify is here: https://github.com/wdwatkins/loadflex/blob/9e4e520e0b88a04ce594aa1e7b5048e26dae0e2a/R/loadReg2.R#L356-L379
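A minimal sketch of what interval-aligned chunking could look like, assuming `fit` is an existing loadReg model and `estdat` is the estimation data frame with a `DATES` column (the helper name `predict_by_interval` and the column name are assumptions for illustration; `rloadest::predLoad()` is the real prediction function, but check its arguments against your installed version):

```r
library(rloadest)

predict_by_interval <- function(fit, estdat, max_rows = 176000) {
  # Group rows by aggregation interval (here, calendar months) so that
  # chunk boundaries never split an interval. "%Y-%m" keys sort
  # chronologically, so split() preserves time order.
  interval <- format(estdat$DATES, "%Y-%m")
  pieces <- split(estdat, interval)

  # Greedily pack whole intervals into chunks under the row limit.
  # (A single interval larger than max_rows would still overflow; that
  # case would need finer-grained handling.)
  chunks <- list()
  current <- NULL
  for (piece in pieces) {
    if (!is.null(current) && nrow(current) + nrow(piece) > max_rows) {
      chunks[[length(chunks) + 1]] <- current
      current <- NULL
    }
    current <- rbind(current, piece)
  }
  chunks[[length(chunks) + 1]] <- current

  # Predict each chunk separately and stack the per-interval results.
  do.call(rbind, lapply(chunks, function(chunk) {
    predLoad(fit, newdata = chunk, by = "month")
  }))
}
```

The same packing idea would apply to `predConc()` and to other `by` options; the key change from the current code is that chunk sizes vary to respect interval boundaries instead of being fixed near the 176,000-row ceiling.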