Replies: 3 comments
-
well done!
-
Did anyone get the error at line 362, in get_messages? I don't know how to fix it; it's from user_model.py
-
I have documented the steps I had to go through to get the scraper to work. I hope someone will find this useful.
For the record: this is on Windows.
Install Python 3.10.x; I went with 3.10.7: https://www.python.org/downloads/release/python-3107/
Open up PowerShell as administrator
In PowerShell, use Python to install pipx: https://pipx.pypa.io/stable/installation/ (py -m pip install --user pipx)
In PowerShell, use pipx to install Poetry: https://python-poetry.org/docs/ (pipx install poetry)
Download the project ZIP and extract to a folder of your choice
In PowerShell, cd to the project folder you just made
Side note: if you have a newer version of Python already installed, you will have to tell Poetry to use the version we installed earlier. To do this, run the following command in PowerShell: poetry env use /full/path/to/python.exe
Change /full/path/to/ to the path where you installed Python
Now try to start the scraper with the following command: poetry run python start_us.py
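For reference, the setup steps above boil down to this PowerShell sequence (the paths are examples; adjust them to wherever you installed Python and extracted the project):

```shell
# Install pipx with Python, then use pipx to install Poetry
py -m pip install --user pipx
pipx install poetry

# Change into the extracted project folder (example path)
cd C:\path\to\UltimaScraper

# Only needed if another Python version is also installed (example path)
poetry env use C:\Python310\python.exe

# First attempt to start the scraper
poetry run python start_us.py
```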
If you're like me, you will encounter the following error: ModuleNotFoundError: No module named 'ultima_scraper_collection'
Try running the following command: poetry add ultima-scraper-collection
Poetry, however, might tell you this package is already added
To solve this issue, run the following command: poetry update package
Try starting the scraper again with: poetry run python start_us.py
It should now run.
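The error and its fix, as a command transcript (the error output here is abbreviated to the line the post quotes):

```shell
poetry run python start_us.py
# ModuleNotFoundError: No module named 'ultima_scraper_collection'

poetry add ultima-scraper-collection
# Poetry may report the package is already added; in that case:
poetry update package

# Try again; it should now run
poetry run python start_us.py
```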
During first setup the scraper will open up several config files. We will need to edit them as they open.
The first file that will open is config.json
Scroll down in the file and look for the line "dynamic_rules_link":
The default link no longer functions and will cause an error; replace it with: "https://raw.githubusercontent.com/betoalanis/Dynamic-Rules/main/dynamic-rules.json"
Save and close this config file.
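After the edit, the line in config.json should look like this (a fragment; the surrounding keys are left out):

```json
{
    "dynamic_rules_link": "https://raw.githubusercontent.com/betoalanis/Dynamic-Rules/main/dynamic-rules.json"
}
```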
The next file that will open is auth.json for OnlyFans
Open up your browser (I used Chrome)
Go to https://onlyfans.com/ and make sure you are logged out
Press F12 on your keyboard to open up the developer tools
In the developer tools, click on the 'Network' tab
Now log in to your OnlyFans account as normal
Data will appear in the developer tools window
Click in the filter field and type 'api'
Now, this is where it gets confusing. The original instructions say to look for 'init'; in my case there was no such entry. For me the entries were called 'login' or sometimes 'me'. When you find one of these, click it, then scroll down on the right side until you see 'Request Headers'. Under that, verify that you can find "Cookie:", "X-Bc" and "User-Agent". If all three are present, chances are good you found the correct entry.
Now go back to that config file that was still open.
Copy what comes after 'Cookie:' in your browser to 'cookie': in the config file, exactly as it appears in your browser
Do the same for x_bc and user_agent
Change what comes after 'active': from false to true
Save and close this config file
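Filled in, the relevant part of the OnlyFans auth.json looks roughly like this. The exact nesting is an assumption based on the keys named above; the values are placeholders you replace with what you copied from the developer tools:

```json
{
    "auth": {
        "cookie": "paste the full Cookie header value here",
        "x_bc": "paste the X-Bc header value here",
        "user_agent": "paste the User-Agent header value here",
        "active": true
    }
}
```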
The next file that opens is auth.json for Fansly. I do not use Fansly, so I can't be of much help; just close this file to skip configuring it
The scraper should now be up and running
After all this you might still run into an error. If the error ends with something like:
line 255, in get_chats
has_more = results[-1]["hasMore"]
KeyError: 'hasMore'
Then I suggest you open up config.json again (located in the settings folder of the project)
Scroll down until you see:
"supported": {
"onlyfans": {
"settings": {
"auto_profile_choice": [],
"auto_model_choice": false,
"auto_api_choice": true,
"auto_media_choice": "",
"browser": {
"auth": true
},
"jobs": {
"scrape": {
"subscriptions": true,
"messages": true,
Change what comes after "messages" from true to false
This will disable scraping messages and bypass the error. It will have to suffice until someone finds a fix.
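The workaround above disables message scraping entirely. For what it's worth, the underlying crash is just code indexing a key the API sometimes omits; a defensive version of the failing line would look like this (a sketch with a hypothetical helper name, not the project's actual code):

```python
# Sketch of a defensive lookup for the KeyError: 'hasMore' crash.
# 'results' mirrors the list of API response pages the traceback refers to.
def last_has_more(results):
    # results[-1]["hasMore"] raises KeyError when the API omits the key;
    # .get() with a default of False avoids the crash.
    if not results:
        return False
    return results[-1].get("hasMore", False)

print(last_has_more([{"hasMore": True}]))  # True
print(last_has_more([{}]))                 # False
```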