Our new fluctuation complexity generalization is incorrect. #410
Comments
Generalized or not, the equation is compatible with anything. If you use anything other than Shannon entropy, it is a deviation of the Shannon information around some other summary statistic. This is just as valid an information statistic as any other. If one insists that a fluctuation measure must, in general, compare X-type information to X-type weighted averages, then sure, the generalization does not make sense. But neither the measure description nor the implementation makes any such demand. The docs are also explicit that the default inputs give you the original Shannon-type measure. I'll think about it a bit and see if there are any obvious ways of generalizing, though, because it is a good point that one should match the "unit of information" to the selected measure in order for the measure to precisely respect the original intention.
Just quickly did some calculations. We can easily define e.g. a Tsallis-type "self-information" or "information content", analogous to the Shannon-type information content. The same goes for many of the other entropy types. Perhaps a good middle ground here is just to explicitly find the "self-information" expressions for each of the entropies, then use dispatch to produce a "correct"/measure-specific deviation, depending on which measure one picks.
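For concreteness, here is a minimal standalone sketch of that dispatch idea in plain Julia. This is not the package's API — the names (`InfoMeasure`, `selfinfo`, `fluctuation_complexity`) are hypothetical — and the Tsallis "self-information" used here is the q-logarithm of $1/p_i$, chosen so that its probability-weighted average recovers the Tsallis entropy.

```julia
# Hypothetical sketch: measure-specific self-information via dispatch.
abstract type InfoMeasure end
struct Shannon <: InfoMeasure end
struct Tsallis <: InfoMeasure
    q::Float64   # assumes q != 1; q -> 1 recovers the Shannon case in the limit
end

# Shannon self-information: I_i = -log(p_i).
selfinfo(::Shannon, p) = -log(p)

# Tsallis-type self-information: I_i = (1 - p^(q-1)) / (q - 1), i.e. the
# q-logarithm of 1/p. Its p-weighted average is exactly the Tsallis entropy.
selfinfo(m::Tsallis, p) = (1 - p^(m.q - 1)) / (m.q - 1)

# Generalized fluctuation complexity: the weighted standard deviation of the
# measure's own self-information around its weighted mean (the entropy).
function fluctuation_complexity(m::InfoMeasure, probs)
    I = selfinfo.(Ref(m), probs)
    H = sum(probs .* I)               # the matching entropy, by construction
    return sqrt(sum(probs .* (I .- H) .^ 2))
end

probs = [0.5, 0.25, 0.25]
fluctuation_complexity(Shannon(), probs)      # original Shannon-type measure
fluctuation_complexity(Tsallis(2.0), probs)   # Tsallis-type variant
```

With this construction, each measure compares its own information unit to its own weighted average, which addresses the mismatch raised in the issue.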
Yes, but this sounds like a research paper to me. If someone published this Shannon-type fluctuation measure, someone can publish the generalization.
But do we restrict measures implemented here to measures that have already been published? The package already contains a plethora of methods that do not appear in any journal.
Yeah, we don't, and it probably isn't too complex to extract the unit of information for each measure. I'm just saying that if you do, you might as well publish a short paper on it. Maybe we can get a BSc student to write a small paper about this; it looks like a low-risk, high-reward project for a BSc student.
I totally agree; in fact, I already started a paper draft on Overleaf to keep my notes in one place 😁 Do you have any bachelor students in mind who may be interested? This is something that shouldn't take too much time: a few simple derivations, a few example applications, and corresponding dispatch in the code here for each generalized variant of the fluctuation complexity.
I don't have any students yet. I hope to find some soon. I will continue bugging the Exeter people to see how I can find more students. In the meantime, I'll promote more such projects on my website.
Ok, then I'll probably just write up the paper myself as soon as possible. If you want to have a read, give me a nod here, and I'll send you a link to the paper.
In the new fluctuation complexity, #409, we have the equation:

$$\sigma_I = \sqrt{\sum_{i=1}^N p_i (I_i - H)^2}, \qquad I_i = -\log(p_i),$$

where $H$ is the chosen information measure.

This equation is only compatible with the Shannon entropy. Only the Shannon entropy defines $-\log(p_i)$ as the "unit" of information, and the Shannon entropy is exactly the weighted average of that information. With other information measures the fluctuation complexity equation simply doesn't make as much sense, because they don't define $-\log(p)$ as the unit of information.
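To see the mismatch numerically, here is a quick standalone check in plain Julia (not using the package): the probability-weighted average of $-\log(p_i)$ recovers the Shannon entropy, but not, e.g., the Tsallis entropy, so pairing $I_i = -\log(p_i)$ with a non-Shannon $H$ compares mismatched quantities.

```julia
# The weighted average of I_i = -log(p_i) equals the Shannon entropy,
# but not the Tsallis entropy (q = 2 used here as an example).
probs = [0.5, 0.25, 0.25]
q = 2.0

mean_I    = sum(p * -log(p) for p in probs)          # weighted average of I_i
H_shannon = -sum(p * log(p) for p in probs)
H_tsallis = (1 - sum(p^q for p in probs)) / (q - 1)

println(mean_I ≈ H_shannon)  # true:  E[I] is exactly the Shannon entropy
println(mean_I ≈ H_tsallis)  # false: the Tsallis entropy is not E[-log p]
```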
I argue we should revert the measure to being a complexity measure instead, and keep the note at the end that this can be generalized, but one needs to come up with or provide appropriate definitions so that $I_i$ makes sense in the fluctuation complexity equation.