Convolutions with negative padding segfault... sometimes #123
Okay, so I tried to see if I could fix this issue myself (as a fix would be very useful to my work). In short, it seems to be an indexing problem. Looking through the code, I initially figured it would be sufficient to modify

to

However, when I went to test this, I suddenly realized that NNlib.jl now defaults to NNPACK (I'm using Linux), which means that negative padding now throws an error. (Yes, I know I can crop the input array "manually", but that seems like a much less elegant/performant solution than using negative padding...)
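As a rough illustration of the cropping workaround mentioned above (this sketch is mine, not from the thread, and assumes symmetric padding of -1 on both spatial dimensions): a convolution with `pad=-1` should produce the same result as slicing one pixel off each border of the input and running an unpadded convolution.

```julia
using NNlib

x = rand(Float32, 8, 8, 1, 1)   # W × H × C × N input
w = rand(Float32, 3, 3, 1, 1)   # 3×3 kernel

# "Manual" crop of one pixel per spatial border, equivalent to pad = -1:
xc = x[2:end-1, 2:end-1, :, :]  # 6×6 spatial extent
y = conv(xc, w)                 # 6×6 input, 3×3 kernel → 4×4 output

size(y)  # (4, 4, 1, 1)
```

This works today, but requires building the slice expression by hand for each padding configuration, which is what makes native negative-padding support more convenient.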
Currently there is no way to opt out of NNPACK, and NNPACK does not support negative padding, so a check for this would be the ideal solution.
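The check being suggested could look something like the following sketch (the helper and function names here are hypothetical, not NNlib API): detect any negative padding entry and fall back to a backend that supports it instead of dispatching to NNPACK.

```julia
# Hypothetical backend-selection sketch; `nnpack_supported_pad` and
# `choose_conv_backend` are invented names, not part of NNlib.
nnpack_supported_pad(pad) = all(p -> p >= 0, pad)

function choose_conv_backend(pad)
    if nnpack_supported_pad(pad)
        return :nnpack   # fast vendor path, non-negative padding only
    else
        return :im2col   # pure-Julia path that can handle negative padding
    end
end

choose_conv_backend((1, 1, 1, 1))    # :nnpack
choose_conv_backend((-1, -1, 0, 0))  # :im2col
```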
I will add this to my TODO: I have a cleanup planned to make it easier to integrate the blocked convolution code in #97. It will be possible to disable NNPACK for negative-padded convolutions, as well as to disable it entirely. @Sleort, in the meantime, you can, of course, just use:

```julia
using Flux

Core.eval(Flux, quote
    function (c::Conv)(x::AbstractArray)
        σ, b = c.σ, reshape(c.bias, map(_->1, c.stride)..., :, 1)
        cdims = DenseConvDims(x, c.weight; stride=c.stride, padding=c.pad, dilation=c.dilation)
        # Just change `conv` to `conv_im2col`
        σ.(conv_im2col(x, c.weight, cdims) .+ b)
    end
end)
```
@avik-pal I see. Yes, I think there should at least be a check here. My test case sometimes runs (for smaller system sizes, maybe conv_direct.jl is used?) and sometimes not, which is quite confusing when you don't know the details of the internals of the library. @staticfloat Thanks! And thanks for the tip!
Recently, we've made NNPACK no longer the default, so it would be good to revisit this as @staticfloat mentioned.
My work so far is here: #163, but it needs to be rebased.
Sometimes a negative padding fails (only for `Float32`?), sometimes it doesn't... This works:

and this works:

but this fails (segfaults):

with the following error message:

Depending on details in kernel size, the size of the input array, and whether the padding is symmetric or not, I get different error messages/crashes (`double free or corruption (!prev)`, `corrupted size vs. prev_size`, `malloc(): smallbin double linked list corrupted`, `malloc_consolidate(): invalid chunk size`, ...). Typically, the convolution crashes when the input arrays become larger than some small size. So far, the issue only seems to affect `Float32` inputs.

I'm on Julia 1.1.1, Flux 0.8.3, NNlib 0.6.0 (and Ubuntu 19.04).
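Those `malloc`-family errors are the classic signature of an out-of-bounds write into a heap buffer, which fits the "indexing problem" diagnosis above: with negative padding the output array is *smaller* than the unpadded case, so a buffer sized (or indexed) with the wrong formula overruns. The standard output-size formula, which remains valid for negative padding, can be checked directly (this helper is my own illustration, not thread code):

```julia
# Output length along one spatial dimension of a convolution.
# The standard formula holds for negative `pad` as well.
out_len(in_len, k, pad; stride=1, dilation=1) =
    (in_len + 2pad - dilation * (k - 1) - 1) ÷ stride + 1

out_len(8, 3, 0)    # 6: no padding
out_len(8, 3, -1)   # 4: pad = -1 crops one pixel per side
out_len(8, 3, 1)    # 8: "same" padding for a 3-wide kernel
```

If any internal kernel instead allocates or indexes as though `pad` were zero (or clamps it), writes past the true, smaller output extent would corrupt adjacent heap metadata, producing exactly the size-dependent, dtype-dependent crashes described.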