GPT4All integration in progress. Any other LLMs you want to see integrated? #34
Replies: 55 comments 45 replies
-
Vicuna?
-
@DataBassGit Is there a PR for this?
-
Putting them into a docker-compose setup would make them really easy to run and configure.
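A compose file along the lines of this comment could look like the sketch below. Everything here is illustrative: the image names, the model path, and the port are assumptions, not official Auto-GPT artifacts; any OpenAI-compatible local server could stand in for the `llm` service.

```yaml
# Hypothetical sketch: service and image names are illustrative only.
version: "3.8"
services:
  autogpt:
    image: significantgravitas/auto-gpt   # assumed image name
    environment:
      - OPENAI_API_BASE_URL=http://llm:8000/v1
      - OPENAI_API_KEY=sk-local-placeholder  # local servers usually ignore the key
    depends_on:
      - llm
  llm:
    # Assumed: llama-cpp-python ships an OpenAI-compatible server module.
    image: ghcr.io/abetlen/llama-cpp-python:latest
    command: ["python", "-m", "llama_cpp.server", "--model", "/models/model.gguf"]
    volumes:
      - ./models:/models
    ports:
      - "8000:8000"
```

The key design point is that Auto-GPT only sees an OpenAI-style base URL, so the backing model can be swapped by changing the `llm` service alone.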
-
It might be worth creating a dataset from the OpenAI API calls, so that other models can be fine-tuned for Auto-GPT.
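A minimal sketch of what that logging could look like, assuming the common JSONL chat-message format that fine-tuning pipelines tend to expect (the file name and the example exchange are hypothetical):

```python
import json

def record_call(log_path, messages, response_text):
    """Append one OpenAI-style chat exchange to a JSONL file.

    Each record is a list of role/content messages ending with the
    assistant's reply, one JSON object per line.
    """
    record = {"messages": messages + [{"role": "assistant", "content": response_text}]}
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: log one (hypothetical) Auto-GPT exchange.
record_call(
    "finetune_dataset.jsonl",
    [{"role": "system", "content": "You are Auto-GPT."},
     {"role": "user", "content": "List your next task."}],
    '{"command": "google", "args": {"query": "LLM fine-tuning"}}',
)
```

Hooking such a recorder into the existing API wrapper would turn every normal Auto-GPT run into training data for free.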
-
Interesting. How will you go about this? My tests show GPT4All totally fails at LangChain prompting.
-
This would be amazing!
-
Here are two preliminary suggestions (please verify whether they are useful first):
- https://huggingface.co/autotrain : AutoTrain will automatically find models that may help.
- The "Alpaca" language model from Stanford is also available, I hear, and Stanford has published a lot over the decades, so it might stay relevant and be updated in the future: git clone https://github.com/antimatter15/alpaca.cpp (I have cloned the Alpaca model and (barely) tested it; like GPT4All, it does work.)
-
I really want to stay updated on this, as it would be amazing to have something like Auto-GPT but offline. Please keep us (me) posted on the progress!
-
Are these models even better than GPT-3.5 Turbo?
-
Have we considered asking Auto-GPT to write a version of itself that can use offline models? Lol
-
Koala, OpenAssistant (oasst), gpt4-x-alpaca, Vicuna... honestly, there are a lot of good local models. Using something like llama-cpp-python would probably let users run any of them (I think llama.cpp also offers embeddings, though I'm not sure). What we really need is a LoRA for those models, fine-tuning them to answer in JSON, create tasks, and prioritize tasks, trained on logs shared by Auto-GPT users. That would be awesome 👌
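The point about tuning local models to answer in JSON can be illustrated by the validation step such a setup needs anyway: local models often wrap JSON in prose or code fences, so the agent has to dig the object out and check it. A minimal sketch, with a hypothetical two-key command schema standing in for Auto-GPT's real one:

```python
import json
import re

REQUIRED_KEYS = {"command", "args"}  # hypothetical schema, for illustration

def extract_command(model_output: str):
    """Pull the first JSON object out of a local model's raw text and
    validate that it looks like an agent command; return None otherwise."""
    match = re.search(r"\{.*\}", model_output, re.DOTALL)
    if not match:
        return None
    try:
        obj = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    if not REQUIRED_KEYS.issubset(obj):
        return None
    return obj

raw = 'Sure! Here is the plan:\n{"command": "write_file", "args": {"path": "notes.txt"}}'
cmd = extract_command(raw)
```

A LoRA trained on logged (prompt, valid-JSON) pairs would raise the hit rate of this extractor; the extractor itself is what lets the agent fail gracefully in the meantime.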
-
May I suggest that, while focusing on including other models, you really look at creating a universal interface for auto-GPT-ing, so that there is a drop-in structure for "any" model as long as it has an I/O API. That would let you play with whatever new model comes along, using Auto-GPT as a bridging orchestrator / logic layer through which to work with different GPT-style models.
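A minimal sketch of the drop-in structure this comment describes, assuming a text-in/text-out contract (all names here are hypothetical, not Auto-GPT's actual classes):

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Hypothetical universal interface: any backend with a text-in /
    text-out API can be wrapped in this and handed to the orchestrator."""

    @abstractmethod
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        ...

class EchoProvider(LLMProvider):
    """Toy backend used here only to show the plug-in shape."""
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        return f"echo: {prompt[:max_tokens]}"

def run_step(provider: LLMProvider, task: str) -> str:
    # The orchestrator only ever sees the interface, never the backend,
    # so new models plug in without touching orchestration logic.
    return provider.complete(f"Next action for task: {task}")

result = run_step(EchoProvider(), "summarize logs")
```

Real backends (OpenAI, llama.cpp, a text-generation-webui endpoint) would each get their own small adapter class behind the same interface.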
-
Microsoft JARVIS on Hugging Face, perhaps? It supposedly works with other models as well.
-
Vicuna 1.1 has just been released... I'm currently testing it with my BabyAGI-style script, and so far it seems to be a well-trained model.
-
GPT4All-J
-
No bias is one of many reasons. A big reason for me is having an LLM that's basically unlocked, free from any constraints or digital oppression (a term that may have just been used here for the first time, but think about that one for a minute, seriously 🤔). An outlaw model in which we can truly unlock and see the full capabilities and possibilities of artificial intelligence. Embracing the unknown 💯🤯
…On Thu, May 18, 2023, 9:57 PM GoZippy wrote:
no bias - just raw pretrained data to pull from
-
This can be revived once the new agent-loop discussion and code are done.
-
They have a GUI with some very good (Eastern) models, the best I have found so far. I would like someone to implement the GPT4All model in that GUI as well:
-
Although text-generation-webui provides an OpenAI-like API, many models have a context window of less than 2048 tokens, while many Auto-GPT prompts exceed this limit. Is there any solution? Or is there any open-weight LLM with a large token limit?
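One common workaround for small context windows is to trim the oldest non-system messages until the prompt fits. A minimal sketch under stated assumptions: the ~4-characters-per-token estimate is a crude stand-in for the model's real tokenizer, and the limits are illustrative:

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate (~4 characters per token); a real setup
    would use the model's own tokenizer."""
    return max(1, len(text) // 4)

def fit_messages(messages, limit=2048, reserve=512):
    """Drop the oldest non-system messages until the prompt fits in a
    small context window, keeping `reserve` tokens for the reply."""
    budget = limit - reserve
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(estimate_tokens(m["content"]) for m in system + rest) > budget:
        rest.pop(0)  # oldest first
    return system + rest

history = [{"role": "system", "content": "You are Auto-GPT."}] + [
    {"role": "user", "content": "step " + "x" * 400} for _ in range(30)
]
trimmed = fit_messages(history)
```

More elaborate variants summarize the dropped messages instead of discarding them, which is closer to what Auto-GPT's memory components aim for.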
-
StableCode, Google Knowledge Graph API keys, and BeautifulSoup web scrapes (I have code).
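For the BeautifulSoup part of that comment, a minimal scraping sketch (the HTML here is an inline stand-in for a fetched page, and `beautifulsoup4` is a third-party dependency):

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

html = """
<html><body>
  <h1>Example page</h1>
  <p>First paragraph.</p>
  <p>Second paragraph with a <a href="https://example.com">link</a>.</p>
</body></html>
"""

# html.parser is the stdlib backend, so no extra parser install is needed.
soup = BeautifulSoup(html, "html.parser")
title = soup.h1.get_text()
paragraphs = [p.get_text() for p in soup.find_all("p")]
links = [a["href"] for a in soup.find_all("a")]
```

In an agent context, the extracted text would then be chunked and fed to the model rather than passed through raw.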
-
wizardlm-13b-v1.1-superhot-8k
-
Llama 2 seems to be like GPT-4; having Llama 2 would be a great breakthrough. The important thing is that there should be a three-party process: a lightweight model, or a parallel Llama 2 process, sitting between the agent and the person to check for looping and to guide and make decisions, with the intermediary following strict guidelines.
-
I've had Auto-GPT connecting to models hosted in LM Studio 0.2.8, using a docker-hosted Auto-GPT with: OPENAI_API_BASE_URL=http://192.168.86.37:xxxx/v1. But despite running LM Studio 0.2.8, which now plays nicely with AutoGen agents, something still goes horribly wrong when trying to use Auto-GPT. I'm just about to take the time to actually look at what the errors are telling me: there shouldn't be a token limit, but maybe I do need to tune some of the parameters better.
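The redirection trick in this comment boils down to two environment variables. A hedged sketch; the host and port are placeholders for your own server, and the key value is a dummy (local OpenAI-compatible servers such as LM Studio generally ignore it, but the client library requires one to be set):

```shell
# Point Auto-GPT's OpenAI client at a local OpenAI-compatible server
# (LM Studio, text-generation-webui, llama-cpp-python server, ...).
# Replace <host> and <port> with your server's actual address.
export OPENAI_API_BASE_URL="http://<host>:<port>/v1"
export OPENAI_API_KEY="sk-local-placeholder"
```

In a docker-compose setup these would go under the service's `environment:` block instead of being exported in the shell.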
-
Going through the thread, it seems everybody agrees that we (will) need several models, and new interesting ones pop up every month. I imagine Auto-GPT calling an AI model based on a context (prompt, language, expected quality level, text vs. voice vs. image vs. Python dev, ...) through a specific endpoint, and this endpoint would choose the best model to run according to a strategy to be defined. For now it could be a proxy to the OpenAI/Azure API, but it would make it easier to support and test future models, and to improve the strategy, in the future. What do you think?
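The routing endpoint this comment imagines can be sketched as an ordered rule table. All model names and rules below are illustrative assumptions; a real endpoint would proxy the request to whichever backend the rule selects:

```python
# Hypothetical routing strategy: model names and rules are illustrative only.
ROUTES = [
    (lambda ctx: ctx.get("modality") == "image", "stable-diffusion-local"),
    (lambda ctx: ctx.get("task") == "code",      "wizardcoder-local"),
    (lambda ctx: ctx.get("quality") == "high",   "gpt-4"),
]
DEFAULT_MODEL = "vicuna-13b-local"

def choose_model(context: dict) -> str:
    """Return the first model whose rule matches the request context;
    the endpoint would then forward the call to that backend."""
    for rule, model in ROUTES:
        if rule(context):
            return model
    return DEFAULT_MODEL

picked = choose_model({"task": "code", "quality": "low"})
```

Because the strategy lives in one table, swapping in a newly released model is a one-line change rather than a change to the agent itself.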
-
It's been a while now. What's the plan? I'm ready to spend some time on this, but I hit pause three months ago waiting to see what all the changes were going to be. My branch works.
-
Suggestion: GPT4All leverages text-generation-webui's API and/or the transformers inference library.
…On Sun, Nov 5, 2023 at 6:07 AM GoZippy wrote:
It's been a while now. What's the plan? I'm ready to spend some time on this but hit pause 3 months ago waiting to see what all the changes were going to be. My branch works.
-
I want to use Mistral's GGUF models, because they are very efficient models with better comprehension of most languages!
-
There was a change in the spo endpoint. User_Bio something.
oobabooga/text-generation-webui#5761 (comment)
…On Tue, Apr 2, 2024, 5:16 AM Mister Maximus wrote:
After 5 hours of investigating and updating AutoGPT 4.7, the problem ended with the LM server reporting:
[2024-04-02 10:51:45.630] [ERROR] Unexpected endpoint or method. (POST /v1/completions/chat/completions). Returning 200 anyway
...This error is in how the OpenAI library works. The solution would be to do what OpenCodeInterpreter does and replace all the code for "OpenAI" with code for "litellm". It would require a lot of work, and I decided, all things considered with my earlier hacks hardcoding things to my local model, that I would prefer to start over, focusing on replacing the OpenAI code as I go with something hardcoded for litellm and local models.
I won't be doing that at the moment, not until I get my head around the structure of version 5, if I am going to invest serious time in it.
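The doubled path in that error (POST /v1/completions/chat/completions) is consistent with a base URL that already ends in an endpoint suffix the client re-appends. Whether or not a litellm rewrite happens, a small normalization guard can catch this class of misconfiguration. A hedged sketch, not Auto-GPT's actual code; the suffix list is an assumption about which paths OpenAI-style clients append:

```python
from urllib.parse import urlsplit, urlunsplit

# Endpoint suffixes OpenAI-style clients append themselves; if the
# configured base URL already ends with one, the request path doubles up.
CLIENT_SUFFIXES = ("/chat/completions", "/completions", "/embeddings")

def normalize_base_url(base_url: str) -> str:
    """Strip an endpoint suffix the client will re-append on its own."""
    parts = urlsplit(base_url)
    path = parts.path.rstrip("/")
    for suffix in CLIENT_SUFFIXES:
        if path.endswith(suffix):
            path = path[: -len(suffix)]
            break
    return urlunsplit((parts.scheme, parts.netloc, path, "", ""))

fixed = normalize_base_url("http://localhost:1234/v1/completions")
```

Running this over the configured OPENAI_API_BASE_URL at startup turns a confusing 200-with-error into a silently corrected path.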
-
I don't suppose you could integrate AnythingLLM's API key into Auto-GPT? Just curious...
-
I'm working on implementing GPT4All in Auto-GPT to get a free version of this working. All LLMs have their limits, especially locally hosted ones. The best bet is to make all of them available as options, so throw your ideas at me.