This is a Next.js project bootstrapped with `create-next-app`.
## Getting Started

First, run the development server:

```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```
Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.

You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.

This project uses `next/font` to automatically optimize and load Geist, a new font family for Vercel.
## Learn More

To learn more about Next.js, take a look at the following resources:

- Next.js Documentation - learn about Next.js features and API.
- Learn Next.js - an interactive Next.js tutorial.

You can check out the Next.js GitHub repository - your feedback and contributions are welcome!
## Deploy on Vercel

The easiest way to deploy your Next.js app is to use the Vercel Platform from the creators of Next.js.

Check out our Next.js deployment documentation for more details.
## API Testing

Pull a model and run a test generation directly against Ollama:

```bash
curl -X POST "http://0.0.0.0:11434/api/pull" \
  -d '{"model":"llama3.2:1b"}'

curl -X POST "http://0.0.0.0:11434/api/generate" \
  -d '{"model":"llama3.2:1b", "prompt":"hello"}'
```

Chat through the local app:

```bash
curl -X POST "http://localhost:3000/api/models" \
  -H "Content-Type: application/json" \
  --cookie "" \
  -d '{"model":"llama3.2:1b", "messages": [{"role":"user", "content":"Hello"}]}'
```
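The same chat request can be issued programmatically. A minimal TypeScript sketch — the endpoint URL and payload shape are taken from the curl command above; `buildChatRequest` and `chat` are hypothetical helper names, not part of the project:

```typescript
// Payload shape used by the /api/models endpoint (from the curl example above).
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

// Serialize a single-turn chat request body.
function buildChatRequest(model: string, content: string): string {
  const messages: ChatMessage[] = [{ role: "user", content }];
  return JSON.stringify({ model, messages });
}

// Send the request; assumes the dev server is running on localhost:3000.
async function chat(model: string, content: string): Promise<unknown> {
  const res = await fetch("http://localhost:3000/api/models", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildChatRequest(model, content),
  });
  return res.json();
}
```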
List all models on the deployed instance:

```bash
curl -X GET "https://ai.cornellappdev.com/api/models/all"
```

Run a CORS preflight check against the models endpoint:

```bash
curl -X OPTIONS "https://ai.cornellappdev.com/api/models" \
  -H "Origin: https://ai.cornellappdev.com" \
  -H "Access-Control-Request-Method: POST" \
  -H "Access-Control-Request-Headers: X-Requested-With" \
  --verbose
```
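For the preflight to succeed, the route must answer `OPTIONS` requests. A sketch of what such a handler could look like in an App Router route file — the file location (`app/api/models/route.ts`) and the header values are assumptions, not the project's actual implementation:

```typescript
// Hypothetical OPTIONS handler for app/api/models/route.ts -- a sketch only.
// Uses the standard web Response type (available in Node 18+ and Next.js).
export function OPTIONS(): Response {
  return new Response(null, {
    status: 204, // no body needed for a preflight response
    headers: {
      "Access-Control-Allow-Origin": "https://ai.cornellappdev.com",
      "Access-Control-Allow-Methods": "POST, GET, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type, X-Requested-With",
    },
  });
}
```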
Chat against the deployed instance, with and without an explicit `Origin` header:

```bash
curl -X POST "https://ai.cornellappdev.com/api/models" \
  -H "Content-Type: application/json" \
  -d '{"model":"llama3.2:1b", "messages": [{"role":"user", "content":"Hello"}]}' \
  --verbose

curl -X POST "https://ai.cornellappdev.com/api/models" \
  -H "Content-Type: application/json" \
  -H "Origin: https://ai.cornellappdev.com" \
  -d '{"model":"llama3.2:1b", "messages": [{"role":"user", "content":"Hello"}]}' \
  --verbose
```
Run a generation from inside a running container:

```bash
docker exec -it e345672ce6b2 curl -X POST "http://0.0.0.0:11434/api/generate" \
  -d '{"model":"llama3.2:1b", "prompt":"hello"}'
```

The commands below run containers attached to the `thestack_app-network` Docker network:
### Pull Model - ollama

```bash
docker run --rm --network thestack_app-network curlimages/curl \
  -X POST "http://ollama:11434/api/pull" \
  -d '{"model":"llama3.2:1b"}'
```
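`/api/pull` streams newline-delimited JSON status objects (e.g. `{"status":"pulling manifest"}` … `{"status":"success"}`) rather than a single response. A small TypeScript helper — `parsePullStatuses` is a hypothetical name — can summarize a captured stream:

```typescript
// Ollama's /api/pull response is newline-delimited JSON, one status
// object per line. Extract the status strings in order.
function parsePullStatuses(ndjson: string): string[] {
  return ndjson
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line).status as string);
}
```

The pull is complete when the final status is `"success"`.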
### Get active models - ollama

```bash
docker run --rm --network thestack_app-network curlimages/curl \
  -X GET "http://ollama:11434/api/ps"
```
### Get all models - ollama

```bash
docker run --rm --network thestack_app-network curlimages/curl \
  -X GET "http://ollama:11434/api/tags"
```
### Generate - ollama

```bash
docker run --rm --network thestack_app-network curlimages/curl \
  -X POST "http://ollama:11434/api/generate" \
  -d '{"model":"llama3.2:1b", "prompt":"hello"}'
```
### Chat - ollama

```bash
docker run --rm --network thestack_app-network curlimages/curl \
  -X POST "http://ollama:11434/api/chat" \
  -d '{"model":"llama3.2:1b", "messages": [{"role":"user", "content":"Why is the sky blue"}]}'
```
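By default `/api/chat` streams its reply as newline-delimited JSON chunks, each carrying a fragment of the answer in `message.content`. A TypeScript helper — `joinChatChunks` is a hypothetical name — to reassemble a captured stream into the full assistant message:

```typescript
// Ollama's streaming /api/chat response: one JSON chunk per line, each
// with a partial message.content fragment; join them into the full reply.
function joinChatChunks(ndjson: string): string {
  return ndjson
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line))
    .filter((chunk) => chunk.message?.content)
    .map((chunk) => chunk.message.content as string)
    .join("");
}
```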
### Get all models - app

```bash
docker run --rm --network thestack_app-network curlimages/curl \
  -X GET "http://ai-dev-app:3000/api/models/all"
```
### Get active models - app

```bash
docker run --rm --network thestack_app-network curlimages/curl \
  -X GET "http://ai-dev-app:3000/api/models/active"
```
### Chat - app

```bash
docker run --rm --network thestack_app-network curlimages/curl \
  -X POST "http://ai-dev-app:3000/api/chat" \
  -d '{"model":"llama3.2:1b", "messages": [{"role":"user", "content":"Hello"}]}'
```
### Chat Models - app

```bash
docker run --rm --network thestack_app-network curlimages/curl \
  -X POST "http://ai-dev-app:3000/api/models" \
  -d '{"model":"llama3.2:1b", "messages": [{"role":"user", "content":"Hello"}]}'
```