
FIX RuntimeWarning by dividing by zero in test_kernel_gradient #19396


Merged
merged 11 commits on Feb 11, 2021

Conversation

@nuka137 (Contributor) commented Feb 8, 2021

Reference Issues/PRs

#19334

What does this implement/fix? Explain your changes.

The RuntimeWarning is raised from the Matern.__call__ function when D has a zero element, because K[..., np.newaxis] is divided by np.sqrt(D.sum(2))[:, :, np.newaxis], which is then zero.
I set the zero entries of that denominator to a small value (1e-8) before dividing.
Since K[..., np.newaxis] * D is zero exactly where the denominator is zero, this tweak does not change the result.
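The failure mode and the fix can be sketched in isolation with small stand-in arrays (a minimal sketch, not the sklearn source; the shapes and names are illustrative assumptions):

```python
import warnings
import numpy as np

# Stand-ins for the kernel internals: when a row of D is all zeros,
# sqrt(D.sum(2)) contains zeros and the division emits a RuntimeWarning.
D = np.zeros((2, 2, 1))   # hypothetical distance array with zero entries
K = np.ones((2, 2))       # hypothetical kernel matrix

with warnings.catch_warnings():
    warnings.simplefilter("error", RuntimeWarning)
    try:
        K[..., np.newaxis] * D / np.sqrt(D.sum(2))[:, :, np.newaxis]
    except RuntimeWarning as w:
        print("caught:", w)

# The tweak described above: clamp zero denominators to a small epsilon.
# This is safe because the numerator is zero at exactly those positions.
denominator = np.sqrt(D.sum(2))[:, :, np.newaxis]
denominator[denominator == 0.0] = 1e-8
K_gradient = K[..., np.newaxis] * D / denominator
print(bool(np.all(K_gradient == 0.0)))  # True
```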

@jjerphan (Member) commented Feb 8, 2021

Hi @nuka137,

Thanks for this fix! Could you add a test asserting that the warning is no longer raised with this correction? 🙂

@nuka137 (Contributor, Author) commented Feb 9, 2021

@jjerphan Thanks! I added a test asserting that no warning is raised.

@@ -19,7 +19,7 @@

 from sklearn.utils._testing import (assert_almost_equal, assert_array_equal,
                                     assert_array_almost_equal,
-                                    assert_allclose,
+                                    assert_allclose, assert_no_warnings,
Member:

Maybe add a line break here to respect the structure of the imports.

Contributor Author:

Thanks, I added a line break.

Co-authored-by: Julien Jerphanion <git@jjerphan.xyz>
@jeremiedbb (Member) left a comment:

Thanks @nuka137, here are a couple of comments.

Comment on lines 1689 to 1691
denominator = np.sqrt(D.sum(2))[:, :, np.newaxis]
denominator[denominator == 0.0] = 1e-8
K_gradient = K[..., np.newaxis] * D / denominator
Member:

I find it a bit cleaner to only divide where the denominator is non-zero:

denominator = np.sqrt(D.sum(2))[:, :, np.newaxis]
K_gradient = K[..., np.newaxis] * np.divide(D, denominator, where=denominator != 0)
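The where= approach suggested above can be exercised standalone (a sketch with hypothetical small arrays; note one NumPy caveat — without an explicit out= array, entries where the condition is False are left uninitialized, so this sketch passes out=np.zeros_like(D) to pin them to zero):

```python
import numpy as np

# Hypothetical stand-ins for the kernel internals.
rng = np.random.default_rng(0)
D = rng.random((3, 3, 1))
D[np.arange(3), np.arange(3), 0] = 0.0   # zero distance on the diagonal
K = rng.random((3, 3))

denominator = np.sqrt(D.sum(2))[:, :, np.newaxis]
# where= skips the zero denominators, so no RuntimeWarning is emitted.
# out=np.zeros_like(D) pins the skipped entries to zero instead of
# leaving them uninitialized.
ratio = np.divide(D, denominator, out=np.zeros_like(D), where=denominator != 0)
K_gradient = K[..., np.newaxis] * ratio
print(bool(np.allclose(K_gradient[np.arange(3), np.arange(3), 0], 0.0)))  # True
```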

Member:

Won't it change the normalisation?

Member:

D = 0 where denominator = 0, so K_gradient = 0 where denominator = 0, regardless of the normalization

Contributor Author:

Yeah, that is a much cleaner way.
I applied your suggestion, thanks.

@@ -52,7 +53,7 @@
 @pytest.mark.parametrize('kernel', kernels)
 def test_kernel_gradient(kernel):
     # Compare analytic and numeric gradient of kernels.
-    K, K_gradient = kernel(X, eval_gradient=True)
+    K, K_gradient = assert_no_warnings(kernel, X, eval_gradient=True)
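For context, the analytic-vs-numeric comparison that test_kernel_gradient performs can be sketched with a plain RBF kernel as a hypothetical stand-in (the real test iterates over sklearn's kernel list; the function below is illustrative, not sklearn's implementation):

```python
import numpy as np

def rbf(X, log_length_scale):
    """Toy RBF kernel and its gradient w.r.t. log(length_scale)."""
    ls = np.exp(log_length_scale)
    d2 = ((X[:, np.newaxis, :] - X[np.newaxis, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2 / ls**2)
    K_gradient = K * d2 / ls**2          # dK / d(log length_scale)
    return K, K_gradient

X = np.array([[0.0], [0.0], [1.0]])      # duplicate rows: zero distances
K, K_gradient = rbf(X, 0.0)

# Central finite difference in log(length_scale), mirroring the test's idea.
eps = 1e-6
numeric = (rbf(X, eps)[0] - rbf(X, -eps)[0]) / (2 * eps)
print(bool(np.allclose(K_gradient, numeric, atol=1e-6)))  # True
```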
Member:

I don't think we need to check that no warning is issued; otherwise we'd have to do that for every line of code. The fact that the warning disappears with this PR is enough for me.

Member:

+1 with @jeremiedbb

Member:

OK, I agree.

Contributor Author:

I reverted this line to the original.

Co-authored-by: Jérémie du Boisberranger <34657725+jeremiedbb@users.noreply.github.com>
@jeremiedbb (Member) left a comment:

lgtm. Thanks @nuka137

@rth (Member) left a comment:

Very minor style improvement, otherwise LGTM.

@rth rth changed the title Fix: RuntimeWarning by dividing by zero in test_kernel_gradient FIX RuntimeWarning by dividing by zero in test_kernel_gradient Feb 11, 2021
@rth rth merged commit 7729cb4 into scikit-learn:main Feb 11, 2021
@nuka137 nuka137 deleted the fix_test_kernel_gradient branch February 11, 2021 21:18
@glemaitre glemaitre mentioned this pull request Apr 22, 2021
5 participants