11434
TCP
Ollama
Ollama LLM Server
⚠️ Security: Depends · Status: Active · 🤖 Category: AI/ML
📋 Basic Information
- Port Number: 11434
- Protocol: TCP
- Service: Ollama
- Popularity: 🔥 High
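Before installing, it can be useful to confirm nothing else is already bound to 11434. A quick probe (a sketch assuming a Linux host with `ss` from iproute2 available):

```shell
# Report whether TCP port 11434 is currently in use on this machine.
if ss -ltn 2>/dev/null | grep -q ':11434 '; then
  echo "port 11434 is in use"
else
  echo "port 11434 is free"
fi
```

If `ss` is unavailable, `netstat -ltn` gives equivalent output on most systems.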
🔒 Security Information
💡 Security Notes
Ollama binds to localhost (127.0.0.1) only by default. If external access is needed, restrict it with a firewall and consider an authenticating reverse proxy in front, since the API itself has no built-in authentication.
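If the server must be reachable from other machines, one common pattern is a systemd drop-in override plus a firewall allow-rule for a trusted subnet. A sketch, not official guidance: it assumes the Linux installer created an `ollama` systemd service, and the `192.168.1.0/24` range is a placeholder for your own trusted network. `OLLAMA_HOST` is the environment variable Ollama reads for its listen address.

```shell
# Sketch: expose Ollama beyond localhost on a systemd-based Linux host
# with ufw as the firewall. Adjust the subnet to your environment.

# 1. Override the service environment so Ollama listens on all interfaces.
sudo mkdir -p /etc/systemd/system/ollama.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama

# 2. Allow only a trusted subnet through the firewall.
sudo ufw allow from 192.168.1.0/24 to any port 11434 proto tcp
```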
🛠️ Installation
🐧Linux
#!/bin/bash
# Ollama Installation Script for Ubuntu/Debian
# 1. Download and install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# 2. Start Ollama service
sudo systemctl start ollama
sudo systemctl enable ollama
# 3. Verify installation
ollama --version
# 4. Pull a model (example: llama2)
ollama pull llama2
# 5. Test the server
curl http://localhost:11434/api/version
echo "✅ Ollama installed successfully on port 11434!"

🍎macOS
#!/bin/bash
# Ollama Installation Script for macOS
# 1. Install via Homebrew
brew install ollama
# 2. Start Ollama
ollama serve &
# 3. Pull a model
ollama pull llama2
echo "✅ Ollama installed successfully on port 11434!"

🔗 Related Ports
8501
TensorFlow
TensorFlow Serving
TCP
TensorFlow Serving REST API for serving machine learning models in production (8501 is also the default port of Streamlit web applications)
⚠️ Security: Depends · 🤖 Category: AI/ML
8000 🔥
FastAPI
FastAPI Development Server
TCP
Default port for FastAPI applications served with Uvicorn; a high-performance async Python web framework with automatic Swagger/OpenAPI documentation and type-hint-based validation
⚠️ Security: Depends · 💻 Category: Development
7860
Gradio
Gradio ML Demo Interface
TCP
Gradio interface for quickly building and sharing web demos of machine learning models; Python-based, with Hugging Face integration
⚠️ Security: Depends · 🤖 Category: AI/ML
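Once a model is pulled, the server is driven over its HTTP API on port 11434. A minimal sketch of a text-generation request against Ollama's documented `/api/generate` endpoint; the model name and prompt are placeholders, and the actual `curl` call is left commented because it requires a running server:

```shell
# Build a request body for Ollama's /api/generate endpoint.
# "stream": false asks for a single JSON response instead of a stream.
BODY='{"model": "llama2", "prompt": "Why is the sky blue?", "stream": false}'
echo "$BODY"   # prints the JSON request body

# Send it (requires a running Ollama server on localhost:11434):
# curl -s http://localhost:11434/api/generate -d "$BODY"
```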