
CUDA Error: invalid argument: Unable to allocate memory #6

Open
okdhryk opened this issue Jan 7, 2021 · 5 comments

@okdhryk commented Jan 7, 2021

Hello!
There seems to be insufficient memory for CUDA.
Do you know of any solution?

Thanks!

@Tossy0423 (Owner)

Dear @okdhryk

Thank you for using my repository!
Could you tell me two things about your execution environment?

  1. When do you get this error? Is it during execution, or after make has finished?
  2. What version of CUDA are you using? The API arguments may have changed in newer CUDA versions; in my environment, it runs with CUDA 10.2 and 11.0. (The commands sketched below are a quick way to check.)

Sorry for the late reply...
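For question 2, the usual checks look like this (a generic sketch, not specific to this repository): nvcc reports the toolkit version darknet was built against, and nvidia-smi reports the driver version plus current GPU memory usage, which is worth watching since the error is an allocation failure.

$ nvcc --version     # CUDA toolkit version
$ nvidia-smi         # driver version and free/used GPU memory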

@okdhryk (Author) commented Jan 14, 2021

Hello,

> What version of CUDA are you using? The API arguments may have changed in newer CUDA versions; in my environment, it runs with CUDA 10.2 and 11.0.

My CUDA version is 11.2. Could that be the cause?

I made a config file and tried it, and yolo_v4-tiny.launch worked fine.
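(A sketch of the launch invocation, for reference; the package name is assumed from the workspace path in the log below, so adjust it if your setup differs.)

$ source ~/catkin_ws/devel/setup.bash
$ roslaunch darknet_ros yolo_v4-tiny.launch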

> When do you get this error? Is it during execution, or after make has finished?

This error occurs during execution.

...
...
158 conv 512 1 x 1/ 1 19 x 19 x1024 -> 19 x 19 x 512 0.379 BF
159 conv 1024 3 x 3/ 1 19 x 19 x 512 -> 19 x 19 x1024 3.407 BF
160 conv 255 1 x 1/ 1 19 x 19 x1024 -> 19 x 19 x 255 0.189 BF
161 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, cls_norm: 1.00, scale_x_y: 1.05
nms_kind: greedynms (1), beta = 0.600000
Total BFLOPS 128.459
avg_outputs = 1068395
Allocate additional workspace_size = 106.46 MB
Try to set subdivisions=64 in your cfg-file.
CUDA status Error: file: /home/roboworks/catkin_ws/src/yolov4-for-darknet_ros/darknet_ros/darknet/src/dark_cuda.c : () : line: 373 : build time: Jan 10 2021 - 10:40:47

CUDA Error: out of memory
CUDA Error: out of memory: File exists
[darknet_ros-1] process has died [pid 7535, exit code 1, cmd /home/roboworks/catkin_ws/devel/lib/darknet_ros/darknet_ros camera/rgb/image_raw:=/camera/rgb/image_raw __name:=darknet_ros __log:=/home/roboworks/.ros/log/bf16ae3e-5640-11eb-815f-60f2623d0cf2/darknet_ros-1.log].
log file: /home/roboworks/.ros/log/bf16ae3e-5640-11eb-815f-60f2623d0cf2/darknet_ros-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete
done
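The "Try to set subdivisions=64 in your cfg-file." hint printed above refers to the [net] section of the YOLOv4 cfg that the launch file loads: a larger subdivisions value splits each batch into smaller pieces on the GPU, lowering peak memory use at the cost of speed. A minimal sketch of that change, assuming the cfg sits at the usual darknet_ros location (the path is an assumption; adjust it to your own workspace):

$ CFG=~/catkin_ws/src/yolov4-for-darknet_ros/darknet_ros/darknet_ros/yolo_network_config/cfg/yolov4.cfg   # assumed path
$ grep -n "^subdivisions" "$CFG"                          # check the current value
$ sed -i 's/^subdivisions=.*/subdivisions=64/' "$CFG"     # raise it to 64 as the log suggests

Falling back to yolo_v4-tiny.launch, as above, is another quick way to confirm the failure is simply the GPU running out of memory.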

@emreurcu

We have the same error. Did you solve it?

@h3ct0r commented May 4, 2021

Same here, any answer?

@okdhryk (Author) commented May 6, 2021

It now works correctly under the following conditions:

Ubuntu 18.04.5 LTS
GeForce GTX 1660 SUPER/PCIe/SSE2

$ /usr/local/cuda/bin/nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Fri_Feb__8_19:08:17_PST_2019
Cuda compilation tools, release 10.1, V10.1.105

$ cat /usr/include/cudnn.h | grep CUDNN_MAJOR -A 2
#define CUDNN_MAJOR 7
#define CUDNN_MINOR 5
#define CUDNN_PATCHLEVEL 0

#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)

#include "driver_types.h"
