Docker won't run #14
Comments
I tried using Docker and I am getting the same error (more or less):
Can you try to upgrade numpy to 1.23.0?
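One way to bake that upgrade into the image without rebuilding it from a Dockerfile is to install numpy in a throwaway container and commit the result as a new image. A rough sketch, assuming pip is on the image's PATH; the container name phar-numpy and the tag phar:numpy-fix are placeholders, not anything from this repo:

docker run --name phar-numpy rlleshi/phar pip install numpy==1.23.0
docker commit phar-numpy phar:numpy-fix

The demo command from this issue could then be re-run against phar:numpy-fix instead of rlleshi/phar.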
I'll give it a shot this weekend and report back, thanks!
It didn't work for me.
Had the exact same issue as @AshoPash and could not figure out how to resolve it.
@rlleshi When upgrading numpy to 1.23.0 inside the Docker container, I could run the demo.
However, it didn't actually work because of CUDA problems with the container. I couldn't get the demo to run outside a container either, because I couldn't find a working combination of CUDA, torch, and mmcv versions under Ubuntu 22.04.
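On the CUDA/torch/mmcv combination problem: mmcv-full is distributed as prebuilt wheels per CUDA/torch pair from OpenMMLab's index, so the usual approach is to pick the torch+CUDA build first and then install the matching mmcv-full wheel. A sketch assuming CUDA 11.3 and torch 1.10; these versions are an example rather than a combination confirmed in this thread, and the wheel has to actually exist for the chosen pair:

pip install torch==1.10.0+cu113 torchvision==0.11.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
pip install mmcv-full==1.3.18 -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10/index.html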
I have it working using numpy 1.23.0 as the only change to the software configuration; there was also a bug fix, #17. Beyond that, I have it running on the Deep Learning AMI from AWS (the PyTorch Ubuntu 20.04 edition) on g5.4xlarge instances, which is overpowered; g5.xlarge would do. The DLAMI is a known-working combination of tools and includes the nvidia-container-toolkit. It works, but is woefully inaccurate on my data set. I'm looking for ways to visualize the output better.
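For reference, with the nvidia-container-toolkit in place the container still has to be started with GPU access. Roughly like this; the --gpus all flag is the standard Docker mechanism for that, while the /data mount point and file paths are illustrative and should be adjusted to where the video actually lives on the host:

docker run --gpus all --rm -v "$PWD":/data rlleshi/phar \
    python src/demo/multimodial_demo.py /data/video.mp4 /data/demo.json --timestamps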
@orangekittysoftware Any progress on this? I'm also going to try the same thing, as I don't have an NVIDIA GPU. Why is it so inaccurate on your dataset?
I found I could run about 4-6 videos at a time on a g5.4xlarge at 480p. I did not try much more or try to stress test it. Anyway, I may have the chance to try running it locally on NVIDIA hardware.
Ahem, "dataset bias due to preference"? I'll speak a bit more openly about it because I think it points to a modeling issue: the existing models for PHAR are hardcore, based on two people. But a lot of porn is solo/softcore, which also poses some different problems for HAR. I'd be interested in a side project to create a PHAR-like model and action set for softcore.
Makes sense, thanks. I'm going to get an NVIDIA GPU and will see what the results are once I have it, hopefully by end of week; this repo is still the most promising multi-label porn solution I have seen so far. I tried to use AWS, but they no longer allow new accounts to use g-family EC2 instances due to crypto miners abusing them, and I lost my old account :( I want to extend this repo and train it on specific fetishes, as I think this is something that is not being addressed by the existing ML solutions from MindGeek and other providers that currently use ML.
Got it working, thanks to @orangekittysoftware's suggestion and fix PR; it seems to work well on a simple oral video. By the way, I am using vast.ai to rent cheap RTX GPUs instead of AWS. It's much cheaper. Would recommend it.
Hi,
I can't get the docker image to run, it gives me the following error:
docker run rlleshi/phar python src/demo/multimodial_demo.py video.mp4 demo.json --timestamps
When I compile from source, I get almost the same error:
When compiling from source, I made sure I had the right versions of the libraries by building from the exact git checkpoints referenced by your code:
mmaction2 0.23.0 /ml/phar/mmaction2
mmcv 1.3.18 /ml/phar/mmcv
mmdet 2.12.0 /ml/phar/mmdet
mmpose 0.22.0 /ml/phar/mmpose
Any ideas? Thanks.
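For anyone trying to reproduce that from-source setup, one way to pin those versions is to check out the matching release tags of the upstream OpenMMLab repositories and install them as editable packages. A sketch under the assumption that the standard open-mmlab repos and their vX.Y.Z tags are what the code expects, and that mmcv needs MMCV_WITH_OPS=1 for the full CUDA-ops build:

git clone https://github.com/open-mmlab/mmcv.git && cd mmcv && git checkout v1.3.18 && MMCV_WITH_OPS=1 pip install -e . && cd ..
git clone https://github.com/open-mmlab/mmdetection.git && cd mmdetection && git checkout v2.12.0 && pip install -e . && cd ..
git clone https://github.com/open-mmlab/mmpose.git && cd mmpose && git checkout v0.22.0 && pip install -e . && cd ..
git clone https://github.com/open-mmlab/mmaction2.git && cd mmaction2 && git checkout v0.23.0 && pip install -e . && cd ..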