Convolution #59
base: develop
Conversation
Sigh, fine, offset doesn't have to live in Convolution. Removing it will be cleaner.

There is one potential issue: if users add components to the sample model, it doesn't update correctly. Not sure if and how to tackle this. The "correct" way to do it is to do
rozyczko left a comment
A few minor issues raised
if sample_model is not None and not isinstance(sample_model, SampleModel):
    raise TypeError(
        f"`sample_model` is an instance of {type(sample_model).__name__}, but must be a SampleModel or ModelComponent."
The error message says "must be a SampleModel or ModelComponent", but the validation only checks for SampleModel. This will reject valid ModelComponent instances.
Whoops
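For reference, a minimal sketch of a corrected check that accepts both types named in the error message (the class bodies and the helper function are placeholders, not the project's real definitions):

class SampleModel: ...
class ModelComponent: ...

def validate_sample_model(sample_model):
    # Accept either a SampleModel or a ModelComponent, matching the error message.
    if sample_model is not None and not isinstance(sample_model, (SampleModel, ModelComponent)):
        raise TypeError(
            f"`sample_model` is an instance of {type(sample_model).__name__}, "
            "but must be a SampleModel or ModelComponent."
        )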
else:
    resolution_components = [self.resolution_model]

total = np.zeros_like(self.energy, dtype=float)
Does np.zeros_like work with scipp's sc.Variable here?
It seemed to work, but I changed it to self.energy.values, since self.energy should always be a sc.Variable.
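A short sketch of the pattern mentioned in the reply, assuming scipp is installed (the grid below is illustrative, not taken from the code):

import numpy as np
import scipp as sc

energy = sc.linspace('energy', 0.0, 10.0, num=101, unit='meV')

# Working on the raw numpy array via .values avoids relying on np.zeros_like
# accepting a scipp Variable directly.
total = np.zeros_like(energy.values, dtype=float)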
    and name in self._invalidate_plan_on_change
):
    # super().__setattr__("_convolution_plan_is_valid", False)
    self._build_convolution_plan()
_build_convolution_plan() sets self._convolution_plan_is_valid = True and calls self._set_convolvers().
If any of those operations set an attribute in _invalidate_plan_on_change, this could cause unexpected re-triggering. The commented-out line suggests this was a concern.
Consider using a guard:
def __setattr__(self, name, value):
    super().__setattr__(name, value)
    if (
        getattr(self, "_reactions_enabled", False)
        and name in self._invalidate_plan_on_change
        and not getattr(self, "_building_plan", False)  # re-entrancy guard
    ):
        # Mark that a rebuild is in progress so nested __setattr__ calls skip it.
        self._building_plan = True
        try:
            self._build_convolution_plan()
        finally:
            # Always clear the guard, even if plan building raises.
            self._building_plan = False
Whoops, this was not quite right. This is also the part of the code that is most likely to need changing. The point is that the convolution plan needs to update if things change, e.g. if the sample model or energy change. However, it's a mess if this happens during init, hence the _reactions_enabled. I may want a brief chat about possible better solutions to this. For now, I've updated the code to do what I actually wanted it to do.
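A minimal sketch of the _reactions_enabled approach described above; everything except the attribute names _reactions_enabled and _invalidate_plan_on_change is illustrative rather than the actual implementation:

class Convolution:
    # Attributes whose change should invalidate and rebuild the convolution plan.
    _invalidate_plan_on_change = {"sample_model", "energy"}

    def __init__(self, sample_model=None, energy=None):
        # Reactions stay off while attributes are first assigned during init.
        super().__setattr__("_reactions_enabled", False)
        self.sample_model = sample_model
        self.energy = energy
        self._build_convolution_plan()
        # From here on, changes to the watched attributes rebuild the plan.
        super().__setattr__("_reactions_enabled", True)

    def __setattr__(self, name, value):
        super().__setattr__(name, value)
        if getattr(self, "_reactions_enabled", False) and name in self._invalidate_plan_on_change:
            self._build_convolution_plan()

    def _build_convolution_plan(self):
        # Placeholder for the real plan construction.
        pass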
energy_dense = np.linspace(extended_min, extended_max, num_points)
energy_span_dense = extended_max - extended_min

energy_dense_step = energy_dense[1] - energy_dense[0]
Will energy_dense always have at least 2 elements? Maybe check the length.
Huh, good point. Although if people don't use enough points, there is no guarantee that the convolution will give the correct results.
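A sketch of such a length check, using the variable names from the snippet (the helper function itself is hypothetical):

import numpy as np

def dense_energy_grid(extended_min, extended_max, num_points):
    # A grid needs at least two points to define a step size.
    if num_points < 2:
        raise ValueError(f"num_points must be at least 2, got {num_points}.")
    energy_dense = np.linspace(extended_min, extended_max, num_points)
    energy_dense_step = energy_dense[1] - energy_dense[0]
    return energy_dense, energy_dense_step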
Returns:
    np.ndarray
        The evaluated convolution values at self.energy.
"""
The docstring mentions resolution_component but the parameter is resolution_model. Also, the first parameter is clearly a DeltaFunction (expected type), not a general ModelComponent.
Whoops, too much copy/paste
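A hedged sketch of a docstring aligned with the signature; the function name is hypothetical, and the DeltaFunction/ModelComponent types are taken from the review comment rather than the real code:

def evaluate_delta_convolution(delta_component, resolution_model):
    """Convolve a delta-function component with the resolution model.

    Args:
        delta_component: DeltaFunction
            The delta-function component to convolve.
        resolution_model: ModelComponent
            The resolution model used in the convolution.

    Returns:
        np.ndarray
            The evaluated convolution values at self.energy.
    """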
# The thresholds are illustrated in performance_tests/convolution/convolution_width_thresholds.ipynb
LARGE_WIDTH_THRESHOLD = 0.1  # Threshold for large widths compared to span - warn if width > 10% of span
SMALL_WIDTH_THRESHOLD = 1.0  # Threshold for small widths compared to bin spacing - warn if width < dx
Consider making these class-level constants for easier configuration and testing.
Good point
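A small sketch of the suggested change, moving the thresholds onto the class (the class name and the _check_width helper are illustrative):

import warnings

class Convolution:
    # The thresholds are illustrated in
    # performance_tests/convolution/convolution_width_thresholds.ipynb
    LARGE_WIDTH_THRESHOLD = 0.1  # warn if width > 10% of the energy span
    SMALL_WIDTH_THRESHOLD = 1.0  # warn if width < the bin spacing dx

    def _check_width(self, width, span, dx):
        # Class-level constants can be overridden in subclasses or patched in tests.
        if width > self.LARGE_WIDTH_THRESHOLD * span:
            warnings.warn("width is large compared to the energy span")
        if width < self.SMALL_WIDTH_THRESHOLD * dx:
            warnings.warn("width is small compared to the bin spacing")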
Just disable the
Implements convolution of SampleModels and Components with each other, including detailed balancing. The code is based on the SampleModel branch, not develop.
I have very extensive tests of the convolutions. Many of them were written a long time ago, and so they can probably be improved. However, before doing that, a review of the main code would be very useful.