This repository has been archived by the owner on Aug 27, 2023. It is now read-only.

Thermistor config #208

Open · wants to merge 4 commits into base: experimental
Conversation

@phord (Collaborator) commented Apr 11, 2016

I noticed the Configtool produces temperature lookup tables which are significantly wrong around 300 °C, where I print nylon and polycarbonate. I have a list of the correct ADC conversion values for all 1024 possible inputs, and you can see the error in this graph:

[graph "output": Configtool's generated table vs. the correct conversion values, with large error near 300 °C]

I wrote an algorithm that works like this:

  • Calculate the ideal values
  • Insert the two extremes into our sample list
  • Calculate the linear approximation of the remaining values
  • Insert the correct value for the "most-wrong" estimation into our sample list
  • Repeat until "n" values are chosen as requested (see the sketch just below)
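A minimal sketch of that selection loop, in C with made-up names (the real implementation is in the Python Configtool):

#include <math.h>

#define ADC_STEPS 1024

/* Linear estimate at 'adc' from the chosen samples (kept sorted,
   always containing both extremes). */
static double estimate(const double *ideal, const int *chosen,
                       int n, int adc) {
    int i = 0;
    while (i + 2 < n && chosen[i + 1] < adc)
        i++;
    int lo = chosen[i], hi = chosen[i + 1];
    return ideal[lo]
         + (ideal[hi] - ideal[lo]) * (adc - lo) / (double)(hi - lo);
}

/* Start with the two extremes, then repeatedly insert the correct
   value at the most-wrong estimate until n_samples are chosen. */
static void pick_samples(const double *ideal, int *chosen, int n_samples) {
    chosen[0] = 0;
    chosen[1] = ADC_STEPS - 1;
    for (int n = 2; n < n_samples; n++) {
        int worst = 1;
        double worst_err = -1.0;
        for (int adc = 1; adc < ADC_STEPS - 1; adc++) {
            double err = fabs(estimate(ideal, chosen, n, adc) - ideal[adc]);
            if (err > worst_err) { worst_err = err; worst = adc; }
        }
        int i;                              /* sorted insert */
        for (i = n; i > 0 && chosen[i - 1] > worst; i--)
            chosen[i] = chosen[i - 1];
        chosen[i] = worst;
    }
}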

The updated graph is a much better approximation using the same number of samples:

[graph "fixed": the new sample selection tracking the correct curve closely with the same number of samples]

@Traumflug (Owner)

First commit applied, nice finding. Had to keep the first of the removed lines, because step is used.

@Traumflug (Owner)

Regarding the second patch. The algorithm is an excellent idea and actually sounds familiar to me: Traumflug/Visolate@6b53bbe

I've applied the commit with some whitespace edits and by adding the description from here. I think I did so correctly, but when creating a table with this config.h (both included files are part of the distribution) ...

// Configuration for controller board.
#include "config/board.nanoheart-v1.0.h"

// Configuration for printer board.
#include "config/printer.mendel.h"

... a build fails because a comma is missing in line 36:

[...]
    { 110,   820}, //  205 C,    570 ohms, 0.537 V, 0.51 mW
    { 147,   748}, //  187 C,    797 ohms, 0.718 V, 0.65 mW
    { 246,   624}  //  156 C,   1515 ohms, 1.201 V, 0.95 mW
    { 401,   504}, //  126 C,   3103 ohms, 1.958 V, 1.24 mW
    { 750,   308}, //   77 C,  13026 ohms, 3.662 V, 1.03 mW
[...]

So I put this into a new branch, thermistor-config. Do you have an idea on what's going wrong?

Also, having the same algorithm for the supposedly more accurate Steinhart-Hart table (also in thermistortablefile.py) would be awesome.

Looking at the graphs above, it seems a good idea to swap the thermistor comparison resistor, typically 4K7, for a smaller one, e.g. 1K or 560R. Between 250 °C and 350 °C you use only 1 or 2% of the ADC's range, which isn't much. But that's not a Teacup issue.
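As a rough sanity check on that last point, assuming the usual divider relation counts ≈ 1024 · R / (R + R2) and the roughly 67 Ω a 100k NTC shows near 335 °C (per the beta-model table later in this thread): with R2 = 4700 Ω that is 1024·67/4767 ≈ 14 counts, while R2 = 560 Ω would give 1024·67/627 ≈ 109 counts, nearly eight times the resolution at hot-end temperatures.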

@Traumflug (Owner)

P.S.

Thinking about it, it might be a good idea to limit the table to 0..500 °C. The new algorithm undoubtedly gives a more precise table, but 9 of the 25 entries are pretty useless because they lie above 350 °C.

Also, as accuracy is calculated now, it might be a good idea to ignore NUM_TEMPS and use only enough entries to achieve a reasonable accuracy (2 °C? 1%?). Smaller tables are faster.

Traumflug pushed a commit that referenced this pull request Apr 12, 2016
Temperature tables are emitted by selecting arbitrary sample values
to be used for the linear lookup table.  This is fine in the range
where the thermistor produces linear output, but it is markedly wrong
near the extremes where the thermistor output begins to curve.

Introduce a new sample selector which chooses samples based on the "most
incorrect estimate" and improves from there to ensure we get a cleaner
approximation across the selected range.

Traumflug: this topic is tracked here:
#208
@Traumflug (Owner)

> Also, as accuracy is calculated now, it might be a good idea to ignore NUM_TEMPS and use only enough entries to achieve a reasonable accuracy (2 °C? 1%?). Smaller tables are faster.

As I just have the temp table lookup code in front of me for applying this other pull request: this part is probably too much work for little gain; NUMTEMPS isn't easily changeable to a per-sensor value.

@phord (Collaborator, Author) commented Apr 13, 2016

If you're looking for speed from the temp table, a binary search would be an obvious first step. The current linear search is needlessly slow most of the time.

@Wurstnase (Collaborator)

Not as slow as it seems. You normally print above 200 °C; that's just 4 steps into the table. With a plain binary search you'd need more.

Maybe for a print bed. Hmmm...

phord added a commit that referenced this pull request Apr 13, 2016
Temperature tables are emitted by selecting arbitrary sample values […]
@phord (Collaborator, Author) commented Apr 13, 2016

In the optimized sample set, 200 °C appears only after 15 samples. But a binary search will consider at most 5 samples.

I don't think this is a good optimization to consider. The lookup should be fast in any case, and always faster than the division which follows it.

@phord (Collaborator, Author) commented Apr 13, 2016

These latest commits fix the misplaced-comma bug and include the optimization for the Steinhart-Hart algorithm, too. I don't have any examples of valid Steinhart-Hart values to try, but I'm fairly certain it should work for reasonably correct sample data. I think there are bugs when the Steinhart-Hart algorithm returns invalid data, though.

@Wurstnase (Collaborator)

Ah, OK. I was a little confused because we start with j = 1.

In Marlin, they use a lookup table to calculate the delays for the stepper timer. That table also stores a delta between the values.

Like:

const uint16_t PROGMEM temptable[NUMTABLES][NUMTEMPS][4] = {
  // EXTRUDER temp table using Beta algorithm with parameters:
  // R0 = 100000, T0 = 25, R1 = 0, R2 = 4700, beta = 4092, maxadc = 1023
  {
    {   1, 42,  3284, 2208}, //  821 C,      5 ohms, 0.005 V, 0.01 mW
    {  43, 42, 1076, 192}, //  269 C,    208 ohms, 0.210 V, 0.21 mW
    {  85, xx,  884, yy}, //  221 C,    432 ohms, 0.415 V, 0.40 mW
.
.
.
  }
};

This could also improve the performance.

https://github.com/Wurstnase/Marlin/blob/Development/Marlin/stepper.cpp#L505-L513
https://github.com/Wurstnase/Marlin/blob/Development/Marlin/speed_lookuptable.h

@Wurstnase (Collaborator)

We could save the reciprocal as fixed point. I just made a small test with my local temptable.
Every delta (x1 - x0) is 42, and 1/42 * 2^13 ≈ 195 (error is 2^-4).
Now: ((r - x0)*y1 + (x1 - r)*y0) * 195 >> 13. No division anymore.

195 and 13 could also be saved in the temptable to minimize the error. Everything could be calculated in createTemperatureLookup.py.
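A sketch of that in C, with illustrative names, assuming the constant delta of 42 between table rows from the test above:

#include <stdint.h>

/* Weighted average of the two neighbouring table temperatures;
   dividing by the constant delta 42 becomes a multiply by the
   precomputed reciprocal 195 = round(2^13 / 42) plus a shift. */
static inline uint16_t interp_fixed(uint16_t r, uint16_t x0, uint16_t x1,
                                    uint16_t y0, uint16_t y1) {
    uint32_t acc = (uint32_t)(r - x0) * y1 + (uint32_t)(x1 - r) * y0;
    return (uint16_t)((acc * 195) >> 13);   /* no division */
}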

@Traumflug (Owner)

If you're that keen on accurate conversions, there's still the thermistortable branch(es). This removes the table entirely in favor of a direct runtime calculation. Performance isn't that bad: 2500 ticks for the calculation vs. 1800 ticks for the interpolation.

Traumflug pushed a commit that referenced this pull request Apr 14, 2016
Temperature tables are emitted by selecting arbitrary sample values […]
@Traumflug (Owner)

> These latest commits fix the misplaced-comma bug and include the optimization for the Steinhart-Hart algorithm, too.

Thank you very much, all applied to experimental and thermistor-config removed.

@Wurstnase (Collaborator)

> (error is 2^-4)

This is the error between multiplying by 195 then shifting right by 13, and dividing by 42. Not accuracy, just speed.

@phord (Collaborator, Author) commented Apr 14, 2016

I was thinking of precalculating the deltas too, but implementing that in the code will break everyone's existing temperature tables unless we use a separate table.

@Traumflug (Owner)

> Not accuracy, just speed.

If you're into speed, avoiding this calculation altogether is likely the best option. Instead of running PID against Celsius values it can run just as well against raw readings. The math is a bit different, of course.

> I was thinking of precalculating the deltas too, but implementing that in the code will break everyone's existing temperature tables

I wouldn't mind too much "breaking" older tables, because they're now created on the fly when using Configtool.

@phord (Collaborator, Author) commented Apr 14, 2016

Which not everyone uses.

@Wurstnase (Collaborator)

> If you're into speed, avoiding this calculation altogether is likely the best option. Instead of running PID against Celsius values it can run just as well against raw readings. The math is a bit different, of course.

Yes, this should be the next step. Anyhow, the speed of the table can be improved.

> I wouldn't mind too much "breaking" older tables, because they're now created on the fly when using Configtool.

In temp.c you read temptable[num][x][0|1], so it should be no issue to add one extra element?

Edit: Ah, got you. Yes, but progress sometimes requires breaking older things.

P.S.: Tested the new temptable. Looks pretty nice!

@phord (Collaborator, Author) commented Apr 14, 2016

Old code reading new tables (3 values wide) is not a problem. New code reading old tables (2 values wide) could be a problem, but it could be mitigated with clever #ifs. I'm looking into it now.
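Roughly this kind of guard, as a sketch (the actual define name and default may differ; see also the commit below):

/* Hypothetical: old hand-crafted tables define nothing and keep
   the legacy layout plus the slower division-based lookup. */
#ifndef TEMPTABLE_FORMAT
  #define TEMPTABLE_FORMAT 0
#endif

#if TEMPTABLE_FORMAT == 1
  /* 3-wide rows: { adc, temp, precomputed slope } */
#else
  /* 2-wide rows: { adc, temp }, interpolate with a division */
#endif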

Traumflug mentioned this pull request Apr 14, 2016
Teach simulator to calculate temperatures for all possible ADC values
using the compiled-in temperature tables and report the resulting
conversion.  Do no other run-time simulation; exit after reporting
the conversion results.  Output is suitable for gnuplot for example
like this:

    gnuplot --persist -e "plot '< ./sim -T0' u 1:2 with lines,
                               '< ./sim -T1' u 1:2 with lines"

The thermistortablefile.py routine prematurely drops the fractional
part of the temperature when computing the 14.2 temperature values
to emit in the code.  Keep it instead until the last moment, when
we finally calculate the integer format we will store.

Save a division at runtime by pre-calculating the slope between each
pair of adjacent thermistortable values.  Since we use the larger value
each time, save the slope between two values A and B in the table
with the B data.  Therefore the slope is that between each value and
its predecessor in the list.

Store this new value in the third element of the now 3-integers-wide
array which makes up the table.  Use fixed-point 6.10 format to store
the slope.  This is almost too narrow for some slopes, and maybe it
should be changed to 8.8 fixed point.  In practice this means some
loss of accuracy, but it is still significantly better than the
previous fixed-sample-size table production method.  In particular, no
provision is made to handle values which scale over 65535, and it
seems we should at least warn about this, if not simply fail, before
letting the user go off compiling his code.

Add a new flag TEMPTABLE_FORMAT and define it as 1 to tell the code
that we are using this new and incompatible format.  This lets us
tolerate old hand-crafted thermistor tables by keeping the slower
algorithm in case one is still used.  New thermistor tables should be
defined with this new format and with the FORMAT define set accordingly.

With the default 25 samples this adds 100 bytes to the flash image for
the thermistortable storage for two different thermistors.  But the
code is simplified and saves me 134 bytes in the bargain for a net
decrease in flash size of 34 bytes.
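A minimal sketch of a lookup against such a 3-wide table, with illustrative names, assuming the slope is stored in °C per ADC count as 6.10 fixed point while temperatures are 14.2 (hence the shift by 8):

#include <stdint.h>
#include <avr/pgmspace.h>

#define NUMTEMPS 25
extern const uint16_t PROGMEM temptable[][NUMTEMPS][3];

/* Row i is { adc, temp (14.2), slope (6.10) }, the slope measured
   against the predecessor row.  For the bracketing row i, the first
   one with adc >= reading: temp = y_i + slope_i * (x_i - reading). */
static uint16_t temp_from_row(uint8_t sensor, uint8_t i, uint16_t reading) {
    uint16_t x = pgm_read_word(&temptable[sensor][i][0]);
    uint16_t y = pgm_read_word(&temptable[sensor][i][1]);
    uint16_t m = pgm_read_word(&temptable[sensor][i][2]);
    return y + (uint16_t)(((uint32_t)(x - reading) * m) >> 8);
}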
@phord (Collaborator, Author) commented Apr 14, 2016

Something like this, perhaps. df3d827

I've only tested this in the simulator so far. But the simulator is now able to report the calculated temperatures for every ADC value, and I used that to compare the before and after temperature profiles for this code change.

@phord (Collaborator, Author) commented Apr 14, 2016

And this change includes the binary search optimizer.

Use a binary search to find our target temperature in fewer
comparisons.  The search algorithm is ostensibly slower because
it involves a division, but it's a div-by-two, so it should be optimized
into a simple bit-shift.  Fewer comparisons mean fewer pgm_read_words
and should be faster overall, but the gain is arguably tiny.
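A sketch of the search shape meant here, reusing the table declaration from the sketch above (illustrative, not the actual commit):

/* Find the first row with adc >= reading. */
static uint8_t table_bracket(uint8_t sensor, uint16_t reading) {
    uint8_t lo = 0, hi = NUMTEMPS - 1;
    while (lo < hi) {
        /* div-by-two: the compiler emits a shift, not a DIV */
        uint8_t mid = (lo + hi) / 2;
        if (pgm_read_word(&temptable[sensor][mid][0]) < reading)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;
}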
@Wurstnase (Collaborator) commented Apr 16, 2016

Hopefully I'm doing everything right with SimulAVR. So here are some numbers.

TEMPTABLE_FORMAT 1

LED on occurences: 710.
LED on time minimum: 401.76 clock cycles.
LED on time maximum: 699.36 clock cycles.
LED on time average: 420.248 clock cycles.

TEMPTABLE_FORMAT 0

LED on occurences: 710.
LED on time minimum: 587.76 clock cycles.
LED on time maximum: 1661.6 clock cycles.
LED on time average: 1357.5 clock cycles.

old naive search added in temp_table_lookup() from phord

LED on occurences: 710.
LED on time minimum: 664.64 clock cycles.
LED on time maximum: 2199.76 clock cycles.
LED on time average: 1787.44 clock cycles.

current experimental

LED on occurences: 710.
LED on time minimum: 1289 clock cycles.
LED on time maximum: 2175 clock cycles.
LED on time average: 1359.12 clock cycles.

@Traumflug (Owner)

I'm excited to see you doing performance measurements! However, I'm scratching my head a bit about how you did this. For measuring temperature conversion performance I did something like this in earlier attempts:

  • Comment out DEBUG_LED handling in timer-avr.c, because timing of the step interrupt isn't interesting here.
  • Keep DEBUG_LED defined in config.h.Profiling.
  • Add something like this just before the main loop in mendel.c, see also feeb511:
volatile uint16_t result;
for (uint16_t i = 0; i < 1024; i++) {
  ATOMIC_START
    WRITE(DEBUG_LED, 1);
    result = temp_table_lookup(i);
    WRITE(DEBUG_LED, 0);
  ATOMIC_END
}
  • Compile with config.h.Profiling.
  • Run a simulation with nothing.gcode, because there's no need to move steppers.

This should give exactly 1024 LED on occurences.

But you got 710 of them, so you didn't change the code other than applying patches? In this case you still measure performance of the step interrupt. But why does this interrupt get faster, then?

@Wurstnase (Collaborator)

I moved the debug LED to temp.c; this was maybe wrong. I will test your idea, too.

@Wurstnase (Collaborator)

Ok, here we go:

TEMPTABLE_FORMAT 1

LED on occurences: 1024.
LED on time minimum: 437.72 clock cycles.
LED on time maximum: 494.76 clock cycles.
LED on time average: 470.278 clock cycles.

TEMPTABLE_FORMAT 0

LED on occurences: 1024.
LED on time minimum: 1401.2 clock cycles.
LED on time maximum: 1553.72 clock cycles.
LED on time average: 1446.91 clock cycles.

current lookup.

LED on occurences: 1024.
LED on time minimum: 1011.84 clock cycles.
LED on time maximum: 1993.92 clock cycles.
LED on time average: 1768.85 clock cycles.

@Traumflug (Owner)

Thanks for the repetition. Looks like my head-scratching was mostly unfounded.

Now, this reduction from 1770 ticks average to 470 ticks average is impressive, isn't it? This additional #define still bugs me a bit; let me see whether I can solve it with a sizeof().

@Traumflug (Owner)

Good. Using sizeof(temptable[0][0]) detects the type of table fine. Other than that, I hope I've picked everything over to experimental correctly. Temperature conversions are 3.7-fold faster now :-)

Are we done with this issue?

@Wurstnase (Collaborator)

Really impressive work from @phord!
I think this one is done.

Traumflug closed this Apr 18, 2016
@phord (Collaborator, Author) commented Apr 18, 2016

Heh. I had "if ... sizeof" too, but I didn't want the code to be bloated with two algorithms. It didn't occur to me to check whether the compiler would completely optimize out the unused one. Thanks for the cleanup.
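Presumably something of this shape (a sketch of the idea, not the actual cleanup commit):

/* sizeof() on a row is a compile-time constant, so the condition is
   folded at build time and the dead branch is dropped entirely:
   only one algorithm ends up in the binary. */
if (sizeof(temptable[0][0]) == 3 * sizeof(uint16_t)) {
    /* 3-wide rows: interpolate with the precomputed slope */
} else {
    /* legacy 2-wide rows: interpolate with a runtime division */
}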

@wsowa (Contributor) commented May 27, 2016

Hi, I've tested the experimental branch and hit a problem with the pre-computed slope in the thermistor table.
I had two thermistors set up:

  • in the extruder I use an NTC for which I generate the table with Configtool (beta equation)
  • in the heated bed I use a PTC (KTY83-110) for which I have a table generated from the data sheet.

I generated a table for the NTC and kept my old table, without a precomputed slope, for the PTC, so I ended up with the following config:

const uint16_t PROGMEM temptable[NUMTABLES][NUMTEMPS][3] = {
  // EXTRUDER temp table using Beta algorithm with parameters:
  // R0 = 100000, T0 = 25, R1 = 0, R2 = 1800, beta = 4267, maxadc = 1023
  {
    {  37,  1342,  2784}, //  335 C,     67 ohms, 0.181 V, 0.48 mW, m =  2.719
    {  44,  1280,  2233}, //  320 C,     81 ohms, 0.215 V, 0.57 mW, m =  2.181
    {  61,  1172,  1632}, //  293 C,    114 ohms, 0.298 V, 0.78 mW, m =  1.594
    {  82,  1080,  1117}, //  270 C,    157 ohms, 0.400 V, 1.02 mW, m =  1.092
    { 109,   997,   791}, //  249 C,    214 ohms, 0.532 V, 1.32 mW, m =  0.773
    { 142,   923,   576}, //  230 C,    290 ohms, 0.693 V, 1.66 mW, m =  0.563
    { 179,   859,   437}, //  214 C,    381 ohms, 0.874 V, 2.00 mW, m =  0.427
    { 223,   800,   344}, //  200 C,    501 ohms, 1.089 V, 2.37 mW, m =  0.336
    { 335,   690,   252}, //  172 C,    875 ohms, 1.636 V, 3.06 mW, m =  0.246
    { 464,   596,   186}, //  149 C,   1491 ohms, 2.266 V, 3.44 mW, m =  0.182
    { 777,   399,   161}, //   99 C,   5662 ohms, 3.794 V, 2.54 mW, m =  0.157
    { 902,   295,   212}, //   73 C,  13308 ohms, 4.404 V, 1.46 mW, m =  0.207
    { 952,   232,   320}, //   58 C,  23800 ohms, 4.648 V, 0.91 mW, m =  0.313
    { 986,   167,   496}, //   41 C,  46705 ohms, 4.814 V, 0.50 mW, m =  0.485
    { 998,   131,   754}, //   32 C,  69092 ohms, 4.873 V, 0.34 mW, m =  0.737
    {1007,    94,  1050}, //   23 C, 106624 ohms, 4.917 V, 0.23 mW, m =  1.026
    {1019,     0,  2620}  //    0 C, 366840 ohms, 4.976 V, 0.07 mW, m =  2.559
  },
  // BED temp table
  {
    {347, 0},
    {366, 40},
    {384, 80},
    {403, 120},
    {421, 160},
    {439, 200},
    {457, 240},
    {474, 280},
    {491, 320},
    {507, 360},
    {523, 400},
    {538, 440},
    {553, 480},
    {561, 500},
    {568, 520},
    {582, 560},
    {596, 600}
  }
};

As you can see, one table contains the pre-computed slope while the other doesn't. Unfortunately there's no compilation warning: GCC silently assumes the missing table value (the slope) to be 0. As a result my bed temperature readings had no interpolation and jumped from one sample to the next.

I did some debugging, found the reason, and thought "OK, this is smart, I'll precompute the slope for my PTC as well". And here's the problem: NTCs have a falling slope, so the delta is negative. Table values are unsigned integers, therefore the delta is kept inverted in the table and inverted back during interpolation (with no comment about this in the code or in the generated table!). PTCs have a rising slope, so the delta is positive, and with this representation I would need to put negative numbers into an unsigned int table...

I would propose changing a table entry from a triplet of unsigned ints into a structure of uint16_t, uint16_t and a signed int32_t. The signed int would solve the problem with the PTC/NTC combination; the 32-bit int would solve the precision problem mentioned above.
Also, it would be nice to have a different table length for each thermistor...
Thoughts?
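As a sketch of that proposal (hypothetical names):

#include <stdint.h>

typedef struct {
    uint16_t adc;    /* raw ADC reading */
    uint16_t temp;   /* temperature, 14.2 fixed point */
    int32_t  slope;  /* signed: negative for an NTC, positive for a PTC */
} temp_entry_t;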

@Traumflug (Owner)

My thoughts? Excellent bug report! It could just as well have been put into a new issue, though.

The only reason I can see not to use an int32_t is increased binary size (~100 bytes for two tables) and slightly slower computation. My guess is some additional 10 clocks per conversion, for reading the value from flash only, because temp.c line 247 is a 32-bit operation already. If I could find proof that an int16_t is sufficient for all realistic cases, I'd use that. Either way would add support for PTCs at a reasonable cost.

Regarding distinct table lengths, I'm not sure what the point would be other than saving a couple of bytes of binary size. The graphs above show how accurate 25 values already are. Precision isn't crucial anyway; regulation should work with coarse tables, too, so it's mostly a matter of the number shown on the display.

Do you feel like crafting a patch?

Traumflug reopened this May 27, 2016
@phord (Collaborator, Author) commented May 27, 2016

@wsowa Interesting and unexpected side-effect of zero-filled array padding. Thanks for the detailed analysis.

Using a structure is the right way forward, but it doesn't solve the problem of zero-filling omitted values. A run-time check for zeroes could, but that wouldn't have the same code-space savings we got with @Traumflug's if/else trick.

I guess we need to bite the bullet either way. Or else we could add a g++ dependency and use templates to build this table. 😁

I'm not convinced we need any more precision in the slope value. We only need to reach a precision of 0.25 °C anyway; 15 bits of slope leave us enough room.

We have a 10-bit exponent now, allowing a slope up to 32 in the 5 remaining bits. But we can do better if the exponent is smaller. We want 2 bits of exponent in the final calculation, which leaves us a range of 8 bits (256) to cover our interpolation point differences. That's pretty huge on a total range which is only 1024 long (10-bit ADC). We could easily get by with only 5+2 bits of exponent without any real loss of precision, I think.
