generated from nathanwoodburn/python-webserver-template
feat: Initial code
@@ -43,3 +43,25 @@ jobs:
           docker push git.woodburn.au/nathanwoodburn/$repo:$tag_num
           docker tag $repo:$tag_num git.woodburn.au/nathanwoodburn/$repo:$tag
           docker push git.woodburn.au/nathanwoodburn/$repo:$tag
+
+      - name: Build Docker image for agent
+        run: |
+          echo "${{ secrets.DOCKERGIT_TOKEN }}" | docker login git.woodburn.au -u nathanwoodburn --password-stdin
+          echo "branch=${GITHUB_HEAD_REF:-${GITHUB_REF#refs/heads/}}"
+          tag=${GITHUB_HEAD_REF:-${GITHUB_REF#refs/heads/}}
+          tag=${tag//\//-}
+          tag_num=${GITHUB_RUN_NUMBER}
+          echo "tag_num=$tag_num"
+          if [[ "$tag" == "main" ]]; then
+            tag="latest"
+          else
+            tag_num="${tag}-${tag_num}"
+          fi
+          repo="docker-inventory-agent"
+          echo "container=$repo"
+          cd docker_agent
+          docker build -t $repo:$tag_num .
+          docker tag $repo:$tag_num git.woodburn.au/nathanwoodburn/$repo:$tag_num
+          docker push git.woodburn.au/nathanwoodburn/$repo:$tag_num
+          docker tag $repo:$tag_num git.woodburn.au/nathanwoodburn/$repo:$tag
+          docker push git.woodburn.au/nathanwoodburn/$repo:$tag
134 README.md
@@ -1,33 +1,141 @@
-# Python Flask Webserver Template
+# Home Lab Inventory
 
-Python3 website template including git actions
+Flask inventory system for homelab environments with automatic data collection and a dashboard UI.
+
+## Features
+- SQLite persistence for inventory, source health, and collection history
+- Automatic scheduled polling with a manual trigger API
+- Connectors for:
+  - Proxmox (VM and LXC)
+  - Docker hosts
+  - Coolify instances
+  - Nginx config ingestion from Docker agents
+- Dashboard with topology cards and a filterable inventory table
 
 ## Requirements
 - Python 3.13+
 - UV
 
 ## Development
-1. Install project requirements
+1. Install dependencies
 ```bash
 uv sync
 ```
-2. Run the dev server
+2. Start the app
 ```bash
 uv run python3 server.py
 ```
-3. Alternatively use the virtual environment
-```bash
-source .venv/bin/activate
-```
-You can exit the environment with `deactivate`
-
-For best development setup, you should install the git hook for pre-commit
+3. Optionally, install the pre-commit hooks
 ```bash
 uv run pre-commit install
 ```
 
 ## Production
 Run using the main.py file
 ```bash
 python3 main.py
 ```
 
+## Environment Variables
+
+### Core
+- `APP_NAME` default: `Home Lab Inventory`
+- `BASE_DIR` default: current directory
+- `DATABASE_PATH` default: `${BASE_DIR}/inventory.db`
+- `SCHEDULER_ENABLED` default: `true`
+- `POLL_INTERVAL_SECONDS` default: `300`
+- `INITIAL_COLLECT_ON_STARTUP` default: `true`
+- `REQUEST_TIMEOUT_SECONDS` default: `10`
+- `ADMIN_TOKEN` optional; when set, it must be sent in the `X-Admin-Token` header to trigger a manual collection
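Taken together, a minimal `.env` using only the core settings might look like this (a sketch; the database path and token value are placeholders, the rest are the documented defaults):

```text
APP_NAME=Home Lab Inventory
DATABASE_PATH=/data/inventory.db
SCHEDULER_ENABLED=true
POLL_INTERVAL_SECONDS=300
INITIAL_COLLECT_ON_STARTUP=true
REQUEST_TIMEOUT_SECONDS=10
ADMIN_TOKEN=change-me
```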
+
+### Proxmox
+- `PROXMOX_ENABLED` default: `true`
+- `PROXMOX_ENDPOINTS` comma-separated URLs
+- Token auth:
+  - `PROXMOX_TOKEN_ID`
+  - `PROXMOX_TOKEN_SECRET`
+- Or username/password auth:
+  - `PROXMOX_USER`
+  - `PROXMOX_PASSWORD`
+- `PROXMOX_VERIFY_TLS` default: `false`
+
+### Docker
+- `DOCKER_ENABLED` default: `true`
+- `DOCKER_HOSTS` comma-separated Docker API endpoints
+- `DOCKER_HOST` single Docker endpoint (used if `DOCKER_HOSTS` is empty)
+- `DOCKER_BEARER_TOKEN` optional
+- `DOCKER_AGENT_ENDPOINTS` comma-separated inventory agent URLs
+- `DOCKER_AGENT_TOKEN` bearer token used by inventory agents
+
+Docker endpoint examples:
+```text
+DOCKER_HOST=unix:///var/run/docker.sock
+DOCKER_HOSTS=tcp://docker-1:2376,tcp://docker-2:2376,https://docker-3.example/api
+DOCKER_AGENT_ENDPOINTS=https://docker-a-agent:9090,https://docker-b-agent:9090,https://docker-c-agent:9090
+DOCKER_AGENT_TOKEN=change-me
+```
+
+### Coolify
+- `COOLIFY_ENABLED` default: `true`
+- `COOLIFY_ENDPOINTS` comma-separated URLs
+- `COOLIFY_API_TOKEN`
+
+### Nginx (via Docker Agent)
+- Nginx config data is collected from each Docker agent endpoint (`/api/v1/nginx-configs`).
+- No separate NPM API credentials are required in the inventory app.
+
+## API Endpoints
+- `GET /api/v1/summary`
+- `GET /api/v1/topology`
+- `GET /api/v1/assets`
+- `GET /api/v1/sources`
+- `GET /api/v1/health`
+- `POST /api/v1/collect/trigger`
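A manual collection can be triggered over HTTP. A sketch with Python's standard library (the base URL and token are placeholders; `X-Admin-Token` is only checked when `ADMIN_TOKEN` is set):

```python
import urllib.request

BASE_URL = "http://localhost:5000"  # placeholder: wherever the inventory app listens
ADMIN_TOKEN = "change-me"           # placeholder: must match the app's ADMIN_TOKEN

# Build a POST to the manual trigger endpoint with the admin header.
req = urllib.request.Request(
    f"{BASE_URL}/api/v1/collect/trigger",
    data=b"",
    method="POST",
    headers={"X-Admin-Token": ADMIN_TOKEN, "Accept": "application/json"},
)

# Sending is left commented out so the sketch runs without a live server:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read())
print(req.get_method(), req.full_url)
```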
+
+## Docker Notes
+- Persist the DB with a volume mounted into `BASE_DIR`, or set `DATABASE_PATH`
+- If you need local Docker socket collection, expose the Docker API via a secure endpoint or add your preferred socket proxy
+
+## Multi-Server Docker Agent Pattern
+- For one local Docker host, use `DOCKER_HOST=unix:///var/run/docker.sock`.
+- For multiple Docker servers, run a small inventory agent on each server and poll those agent endpoints from this app.
+- Agent responsibilities:
+  - Read the local Docker socket (`/var/run/docker.sock`) on that host.
+  - Optionally read mounted Nginx configuration files.
+  - Expose read-only inventory JSON over HTTPS.
+  - Authenticate requests with a bearer token.
+  - Return container name, image, state, ports, and networks.
+- This avoids exposing Docker daemon APIs directly across your network.
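The agent responsibilities above boil down to a tiny HTTP service. A sketch of the two easily testable pieces, the bearer-token check and the response shape (function and field names here are illustrative, not the actual agent code):

```python
from typing import Dict, List, Optional

def is_authorized(auth_header: Optional[str], expected_token: str) -> bool:
    """Accept only 'Authorization: Bearer <token>' with the configured token."""
    if not auth_header or not auth_header.startswith("Bearer "):
        return False
    return auth_header[len("Bearer "):] == expected_token

def inventory_payload(containers: List[Dict]) -> Dict:
    """Shape the read-only JSON an inventory agent would return."""
    return {
        "containers": [
            {
                "id": c["id"],
                "name": c["name"],
                "image": c["image"],
                "state": c["state"],
                "ports": c.get("ports", []),
                "networks": c.get("networks", []),
            }
            for c in containers
        ]
    }

payload = inventory_payload(
    [{"id": "abc123", "name": "web", "image": "nginx:1.27", "state": "running"}]
)
print(is_authorized("Bearer s3cret", "s3cret"), len(payload["containers"]))
```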
+
+## Docker Agent Quick Start
+Use the `docker_agent/` folder to run one agent per Docker server.
+
+1. On each Docker server, copy the folder and start it:
+```bash
+cd docker_agent
+docker compose up -d --build
+```
+
+2. Set an agent token on each server (`AGENT_TOKEN`) and expose port `9090` only to your inventory app network.
+
+Nginx config support in the docker agent:
+- Mount your Nginx config directory into the agent container.
+- Set `NGINX_CONFIG_DIR` to the mounted path.
+- Query `GET /api/v1/nginx-configs` on the agent.
+
+Example compose mount in `docker_agent/docker-compose.yml`:
+```yaml
+volumes:
+  - /var/run/docker.sock:/var/run/docker.sock:ro
+  - /etc/nginx:/mnt/nginx:ro
+environment:
+  NGINX_CONFIG_DIR: /mnt/nginx
+```
+
+3. In the inventory app `.env`, set:
+```text
+DOCKER_ENABLED=true
+DOCKER_AGENT_ENDPOINTS=http://docker1.local:9090,http://docker2.local:9090,http://docker3.local:9090
+DOCKER_AGENT_TOKEN=change-me
+```
+
+4. Trigger a collection in the UI and confirm the `docker` source reports `ok` in the Sources panel.
3 collectors/__init__.py Normal file
@@ -0,0 +1,3 @@
from .orchestrator import InventoryCollectorOrchestrator

__all__ = ["InventoryCollectorOrchestrator"]
17 collectors/base.py Normal file
@@ -0,0 +1,17 @@
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class CollectionResult:
    source: str
    assets: List[Dict]
    status: str
    error: str = ""


class BaseCollector:
    source_name = "unknown"

    def collect(self) -> CollectionResult:
        raise NotImplementedError
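As a usage sketch, here is a minimal collector implementing this contract (the dataclass and base class are repeated inline so the snippet runs standalone; `StaticCollector` is illustrative, not part of the repo):

```python
from dataclasses import dataclass
from typing import Dict, List

# Inline copies of the contract above, so this sketch runs standalone.
@dataclass
class CollectionResult:
    source: str
    assets: List[Dict]
    status: str
    error: str = ""

class BaseCollector:
    source_name = "unknown"

    def collect(self) -> CollectionResult:
        raise NotImplementedError

# A toy collector: returns a fixed asset list instead of polling anything.
class StaticCollector(BaseCollector):
    source_name = "static"

    def collect(self) -> CollectionResult:
        assets = [{"asset_type": "vm", "name": "demo", "status": "running"}]
        return CollectionResult(source=self.source_name, assets=assets, status="ok")

result = StaticCollector().collect()
print(result.source, result.status, len(result.assets))
```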
146 collectors/coolify.py Normal file
@@ -0,0 +1,146 @@
from typing import Dict, List

import requests

from config import AppConfig
from .base import BaseCollector, CollectionResult


class CoolifyCollector(BaseCollector):
    source_name = "coolify"

    def __init__(self, config: AppConfig):
        self.config = config

    def collect(self) -> CollectionResult:
        if not self.config.coolify_enabled:
            return CollectionResult(source=self.source_name, assets=[], status="disabled")
        if not self.config.coolify_endpoints:
            return CollectionResult(source=self.source_name, assets=[], status="skipped", error="No COOLIFY_ENDPOINTS configured")
        if not self.config.coolify_api_token:
            return CollectionResult(source=self.source_name, assets=[], status="skipped", error="No COOLIFY_API_TOKEN configured")

        headers = {
            "Accept": "application/json",
            "Authorization": f"Bearer {self.config.coolify_api_token}",
        }

        assets: List[Dict] = []
        errors: List[str] = []

        for endpoint in self.config.coolify_endpoints:
            base = endpoint.rstrip("/")
            try:
                resp = requests.get(
                    f"{base}/api/v1/applications",
                    headers=headers,
                    timeout=self.config.request_timeout_seconds,
                )
                resp.raise_for_status()
                for app in self._extract_app_list(resp.json()):
                    app_status = self._derive_status(app)
                    assets.append(
                        {
                            "asset_type": "service",
                            "external_id": str(app.get("id", app.get("uuid", "unknown-app"))),
                            "name": app.get("name", "unknown-service"),
                            "hostname": app.get("fqdn") or app.get("name"),
                            "status": app_status,
                            "ip_addresses": [],
                            "node": endpoint,
                            "metadata": {
                                "coolify_uuid": app.get("uuid"),
                                "environment": app.get("environment_name"),
                                "repository": app.get("git_repository"),
                                "raw_status": app.get("status"),
                                "health": app.get("health"),
                                "deployment_status": app.get("deployment_status"),
                            },
                        }
                    )
            except Exception as exc:
                errors.append(f"{endpoint}: {exc}")

        if errors and not assets:
            return CollectionResult(source=self.source_name, assets=[], status="error", error=" | ".join(errors))
        if errors:
            return CollectionResult(source=self.source_name, assets=assets, status="degraded", error=" | ".join(errors))
        return CollectionResult(source=self.source_name, assets=assets, status="ok")

    @staticmethod
    def _extract_app_list(payload: object) -> List[Dict]:
        if isinstance(payload, list):
            return [item for item in payload if isinstance(item, dict)]

        if isinstance(payload, dict):
            for key in ("data", "applications", "items", "result"):
                value = payload.get(key)
                if isinstance(value, list):
                    return [item for item in value if isinstance(item, dict)]

        return []

    @staticmethod
    def _derive_status(app: Dict) -> str:
        candidate_fields = [
            app.get("status"),
            app.get("health"),
            app.get("deployment_status"),
            app.get("current_status"),
            app.get("state"),
        ]

        for value in candidate_fields:
            normalized = CoolifyCollector._normalize_status(value)
            if normalized != "unknown":
                return normalized

        if app.get("is_running") is True or app.get("running") is True:
            return "running"
        if app.get("is_running") is False or app.get("running") is False:
            return "stopped"

        return "unknown"

    @staticmethod
    def _normalize_status(value: object) -> str:
        if value is None:
            return "unknown"

        text = str(value).strip().lower()
        if not text:
            return "unknown"

        online = {
            "running", "online", "healthy", "up", "active",
            "ready", "started", "success", "completed",
        }
        offline = {
            "stopped", "offline", "down", "unhealthy", "error",
            "failed", "crashed", "dead", "exited",
        }

        if text in online:
            return "running"
        if text in offline:
            return "stopped"
        if "running" in text or "healthy" in text:
            return "running"
        if "stop" in text or "fail" in text or "unhealthy" in text:
            return "stopped"

        return text
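The status mapping above collapses many vendor-specific states into `running`/`stopped`. A trimmed standalone reimplementation of that logic (exact set matches first, substring fallbacks second, pass-through otherwise):

```python
ONLINE = {"running", "online", "healthy", "up", "active", "ready", "started", "success", "completed"}
OFFLINE = {"stopped", "offline", "down", "unhealthy", "error", "failed", "crashed", "dead", "exited"}

def normalize_status(value: object) -> str:
    # Mirrors the collector's _normalize_status ordering.
    if value is None:
        return "unknown"
    text = str(value).strip().lower()
    if not text:
        return "unknown"
    if text in ONLINE:
        return "running"
    if text in OFFLINE:
        return "stopped"
    if "running" in text or "healthy" in text:
        return "running"
    if "stop" in text or "fail" in text or "unhealthy" in text:
        return "stopped"
    return text  # unrecognized states pass through unchanged

print(normalize_status("Healthy"), normalize_status("exited"), normalize_status("building"))
# → running stopped building
```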
152 collectors/docker_hosts.py Normal file
@@ -0,0 +1,152 @@
import importlib
from typing import Dict, List

import requests

from config import AppConfig
from .base import BaseCollector, CollectionResult


class DockerHostsCollector(BaseCollector):
    source_name = "docker"

    def __init__(self, config: AppConfig):
        self.config = config

    def collect(self) -> CollectionResult:
        if not self.config.docker_enabled:
            return CollectionResult(source=self.source_name, assets=[], status="disabled")
        if not self.config.docker_hosts and not self.config.docker_agent_endpoints:
            return CollectionResult(
                source=self.source_name,
                assets=[],
                status="skipped",
                error="No DOCKER_HOSTS or DOCKER_AGENT_ENDPOINTS configured",
            )

        assets: List[Dict] = []
        errors: List[str] = []
        headers = {"Accept": "application/json"}
        if self.config.docker_bearer_token:
            headers["Authorization"] = f"Bearer {self.config.docker_bearer_token}"

        agent_headers = {"Accept": "application/json"}
        if self.config.docker_agent_token:
            agent_headers["Authorization"] = f"Bearer {self.config.docker_agent_token}"

        for endpoint in self.config.docker_agent_endpoints:
            base = endpoint.rstrip("/")
            try:
                resp = requests.get(
                    f"{base}/api/v1/containers",
                    headers=agent_headers,
                    timeout=self.config.request_timeout_seconds,
                )
                resp.raise_for_status()
                payload = resp.json()
                containers = payload.get("containers", []) if isinstance(payload, dict) else payload
                for container in containers:
                    assets.append(
                        {
                            "asset_type": "container",
                            "external_id": container.get("id", "unknown-container"),
                            "name": container.get("name", "unknown"),
                            "hostname": container.get("name", "unknown"),
                            "status": container.get("state", "unknown"),
                            "ip_addresses": container.get("ip_addresses", []),
                            "node": endpoint,
                            "metadata": {
                                "image": container.get("image", "unknown"),
                                "ports": container.get("ports", []),
                                "networks": container.get("networks", []),
                                "labels": container.get("labels", {}),
                                "collected_via": "docker-agent",
                            },
                        }
                    )
            except Exception as exc:
                errors.append(f"{endpoint}: {exc}")

        for host in self.config.docker_hosts:
            if host.startswith("unix://") or host.startswith("tcp://"):
                try:
                    assets.extend(self._collect_via_docker_sdk(host))
                except Exception as exc:
                    errors.append(f"{host}: {exc}")
                continue

            base = host.rstrip("/")
            try:
                resp = requests.get(
                    f"{base}/containers/json?all=1",
                    headers=headers,
                    timeout=self.config.request_timeout_seconds,
                )
                resp.raise_for_status()
                for container in resp.json():
                    ports = container.get("Ports", [])
                    networks = list((container.get("NetworkSettings", {}) or {}).get("Networks", {}).keys())
                    # Guard against a null first entry in Names before stripping the leading slash.
                    name = (container.get("Names", ["unknown"])[0] or "unknown").lstrip("/")
                    assets.append(
                        {
                            "asset_type": "container",
                            "external_id": container.get("Id", "unknown-container"),
                            "name": name,
                            "hostname": name,
                            "status": container.get("State", "unknown"),
                            "ip_addresses": [],
                            "node": host,
                            "metadata": {
                                "image": container.get("Image"),
                                "ports": ports,
                                "networks": networks,
                                "collected_via": "docker-host-api",
                            },
                        }
                    )
            except Exception as exc:
                errors.append(f"{host}: {exc}")

        if errors and not assets:
            return CollectionResult(source=self.source_name, assets=[], status="error", error=" | ".join(errors))
        if errors:
            return CollectionResult(source=self.source_name, assets=assets, status="degraded", error=" | ".join(errors))
        return CollectionResult(source=self.source_name, assets=assets, status="ok")

    def _collect_via_docker_sdk(self, host: str) -> List[Dict]:
        try:
            docker_sdk = importlib.import_module("docker")
        except Exception as exc:
            raise RuntimeError(f"Docker SDK unavailable: {exc}") from exc

        assets: List[Dict] = []
        client = docker_sdk.DockerClient(base_url=host)
        try:
            for container in client.containers.list(all=True):
                ports = container.attrs.get("NetworkSettings", {}).get("Ports", {})
                networks = list((container.attrs.get("NetworkSettings", {}).get("Networks", {}) or {}).keys())
                state = container.attrs.get("State", {}).get("Status", "unknown")
                image_obj = container.image
                image_name = "unknown"
                if image_obj is not None:
                    image_tags = image_obj.tags or []
                    image_name = image_tags[0] if image_tags else image_obj.id
                assets.append(
                    {
                        "asset_type": "container",
                        "external_id": container.id,
                        "name": container.name,
                        "hostname": container.name,
                        "status": state,
                        "ip_addresses": [],
                        "node": host,
                        "metadata": {
                            "image": image_name,
                            "ports": ports,
                            "networks": networks,
                            "collected_via": "docker-sdk",
                        },
                    }
                )
        finally:
            client.close()
        return assets
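In the raw Engine API branch, the fields read above come straight from `GET /containers/json`. A standalone sketch of that flattening against a hand-written sample payload (the sample values are illustrative):

```python
sample = [  # shaped like one entry from Docker's GET /containers/json?all=1
    {
        "Id": "abc123",
        "Names": ["/web"],
        "Image": "nginx:1.27",
        "State": "running",
        "Ports": [{"PrivatePort": 80, "PublicPort": 8080, "Type": "tcp"}],
        "NetworkSettings": {"Networks": {"bridge": {}}},
    }
]

assets = []
for container in sample:
    # Network names live under NetworkSettings.Networks; names carry a leading "/".
    networks = list((container.get("NetworkSettings", {}) or {}).get("Networks", {}).keys())
    name = (container.get("Names", ["unknown"])[0] or "unknown").lstrip("/")
    assets.append(
        {
            "asset_type": "container",
            "external_id": container.get("Id", "unknown-container"),
            "name": name,
            "status": container.get("State", "unknown"),
            "metadata": {"image": container.get("Image"), "networks": networks},
        }
    )

print(assets[0]["name"], assets[0]["status"], assets[0]["metadata"]["networks"])
```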
80 collectors/nginx_from_agent.py Normal file
@@ -0,0 +1,80 @@
from typing import Dict, List

import requests

from config import AppConfig
from .base import BaseCollector, CollectionResult


class NginxFromAgentCollector(BaseCollector):
    source_name = "nginx"

    def __init__(self, config: AppConfig):
        self.config = config

    def collect(self) -> CollectionResult:
        if not self.config.docker_agent_endpoints:
            return CollectionResult(source=self.source_name, assets=[], status="skipped", error="No DOCKER_AGENT_ENDPOINTS configured")

        headers = {"Accept": "application/json"}
        if self.config.docker_agent_token:
            headers["Authorization"] = f"Bearer {self.config.docker_agent_token}"

        assets: List[Dict] = []
        errors: List[str] = []

        for endpoint in self.config.docker_agent_endpoints:
            base = endpoint.rstrip("/")
            try:
                resp = requests.get(
                    f"{base}/api/v1/nginx-configs",
                    headers=headers,
                    timeout=self.config.request_timeout_seconds,
                )
                resp.raise_for_status()
                payload = resp.json()
                configs = payload.get("configs", []) if isinstance(payload, dict) else []

                for config in configs:
                    path = config.get("path", "unknown.conf")
                    server_names = config.get("server_names", []) or []
                    listens = config.get("listens", []) or []
                    proxy_pass = config.get("proxy_pass", []) or []
                    proxy_pass_resolved = config.get("proxy_pass_resolved", []) or []
                    upstreams = config.get("upstreams", []) or []
                    upstream_servers = config.get("upstream_servers", []) or []
                    inferred_targets = config.get("inferred_targets", []) or []

                    if not server_names:
                        server_names = [path]

                    for server_name in server_names:
                        assets.append(
                            {
                                "asset_type": "nginx_site",
                                "external_id": f"{endpoint}:{path}:{server_name}",
                                "name": server_name,
                                "hostname": server_name,
                                "status": "configured",
                                "ip_addresses": [],
                                "node": endpoint,
                                "metadata": {
                                    "path": path,
                                    "listens": listens,
                                    "proxy_pass": proxy_pass,
                                    "proxy_pass_resolved": proxy_pass_resolved,
                                    "upstreams": upstreams,
                                    "upstream_servers": upstream_servers,
                                    "inferred_targets": inferred_targets,
                                    "collected_via": "docker-agent-nginx",
                                },
                            }
                        )
            except Exception as exc:
                errors.append(f"{endpoint}: {exc}")

        if errors and not assets:
            return CollectionResult(source=self.source_name, assets=[], status="error", error=" | ".join(errors))
        if errors:
            return CollectionResult(source=self.source_name, assets=assets, status="degraded", error=" | ".join(errors))
        return CollectionResult(source=self.source_name, assets=assets, status="ok")
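The collector emits one asset per (config file, server name) pair, falling back to the file path when a config declares no `server_name`. The core of that fan-out, runnable against a sample agent payload (sample paths and hostnames are illustrative):

```python
payload = {  # shaped like the agent's /api/v1/nginx-configs response
    "configs": [
        {"path": "sites/app.conf", "server_names": ["app.example", "www.app.example"]},
        {"path": "sites/default.conf", "server_names": []},  # no server_name directive
    ]
}

assets = []
for config in payload.get("configs", []):
    path = config.get("path", "unknown.conf")
    server_names = config.get("server_names", []) or [path]  # fall back to the file path
    for server_name in server_names:
        assets.append({"name": server_name, "external_id": f"agent:{path}:{server_name}"})

print([a["name"] for a in assets])
# → ['app.example', 'www.app.example', 'sites/default.conf']
```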
73 collectors/orchestrator.py Normal file
@@ -0,0 +1,73 @@
import threading
from dataclasses import dataclass
from typing import Dict, List

from config import AppConfig
from database import InventoryStore
from .base import CollectionResult
from .coolify import CoolifyCollector
from .docker_hosts import DockerHostsCollector
from .nginx_from_agent import NginxFromAgentCollector
from .proxmox import ProxmoxCollector


@dataclass
class RunReport:
    run_id: int
    status: str
    results: List[CollectionResult]


class InventoryCollectorOrchestrator:
    def __init__(self, config: AppConfig, store: InventoryStore):
        self.config = config
        self.store = store
        self._run_lock = threading.Lock()
        self.collectors = [
            ProxmoxCollector(config),
            DockerHostsCollector(config),
            CoolifyCollector(config),
            NginxFromAgentCollector(config),
        ]

        self.store.seed_sources(
            {
                "proxmox": config.proxmox_enabled,
                "docker": config.docker_enabled,
                "coolify": config.coolify_enabled,
                "nginx": bool(config.docker_agent_endpoints),
            }
        )

    def collect_once(self) -> RunReport:
        if not self._run_lock.acquire(blocking=False):
            return RunReport(run_id=-1, status="running", results=[])

        try:
            run_id = self.store.run_start()
            results: List[CollectionResult] = []
            errors: List[str] = []

            for collector in self.collectors:
                result = collector.collect()
                results.append(result)
                self.store.set_source_status(result.source, result.status, result.error)
                if result.assets:
                    self.store.upsert_assets(result.source, result.assets)
                if result.status == "error":
                    errors.append(f"{result.source}: {result.error}")

            overall_status = "error" if errors else "ok"
            self.store.run_finish(run_id=run_id, status=overall_status, error_summary=" | ".join(errors))
            return RunReport(run_id=run_id, status=overall_status, results=results)
        finally:
            self._run_lock.release()

    def current_data(self) -> Dict:
        return {
            "summary": self.store.summary(),
            "topology": self.store.topology(),
            "assets": self.store.list_assets(),
            "sources": self.store.source_health(),
            "last_run": self.store.last_run(),
        }
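`collect_once` guards against overlapping runs with a non-blocking acquire rather than making a second caller wait. The pattern in isolation:

```python
import threading

run_lock = threading.Lock()

def collect_once() -> str:
    # A second caller gets an immediate "running" answer instead of blocking.
    if not run_lock.acquire(blocking=False):
        return "running"
    try:
        return "ok"  # the real method does the collection work here
    finally:
        run_lock.release()

first = collect_once()   # lock is free: does the work
run_lock.acquire()       # simulate a run already in progress
second = collect_once()  # refused immediately, no waiting
run_lock.release()
print(first, second)
# → ok running
```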
243 collectors/proxmox.py Normal file
@@ -0,0 +1,243 @@
|
||||
import ipaddress
|
||||
import re
|
||||
from typing import Dict, List
|
||||
|
||||
import requests
|
||||
|
||||
from config import AppConfig
|
||||
from .base import BaseCollector, CollectionResult
|
||||
|
||||
|
||||
class ProxmoxCollector(BaseCollector):
|
||||
source_name = "proxmox"
|
||||
|
||||
def __init__(self, config: AppConfig):
|
||||
self.config = config
|
||||
|
||||
def collect(self) -> CollectionResult:
|
||||
if not self.config.proxmox_enabled:
|
||||
return CollectionResult(source=self.source_name, assets=[], status="disabled")
|
||||
if not self.config.proxmox_endpoints:
|
||||
return CollectionResult(source=self.source_name, assets=[], status="skipped", error="No PROXMOX_ENDPOINTS configured")
|
||||
|
||||
assets: List[Dict] = []
|
||||
errors: List[str] = []
|
||||
|
||||
for endpoint in self.config.proxmox_endpoints:
|
||||
try:
|
||||
assets.extend(self._collect_endpoint(endpoint))
|
||||
except Exception as exc:
|
||||
errors.append(f"{endpoint}: {exc}")
|
||||
|
||||
if errors and not assets:
|
||||
return CollectionResult(source=self.source_name, assets=[], status="error", error=" | ".join(errors))
|
||||
if errors:
|
||||
return CollectionResult(source=self.source_name, assets=assets, status="degraded", error=" | ".join(errors))
|
||||
return CollectionResult(source=self.source_name, assets=assets, status="ok")
|
||||
|
||||
def _collect_endpoint(self, endpoint: str) -> List[Dict]:
|
||||
endpoint = endpoint.rstrip("/")
|
||||
headers = {"Accept": "application/json"}
|
||||
cookies = None
|
||||
|
||||
if self.config.proxmox_token_id and self.config.proxmox_token_secret:
|
||||
headers["Authorization"] = (
|
||||
f"PVEAPIToken={self.config.proxmox_token_id}={self.config.proxmox_token_secret}"
|
||||
)
|
||||
elif self.config.proxmox_user and self.config.proxmox_password:
|
||||
token_resp = requests.post(
|
||||
f"{endpoint}/api2/json/access/ticket",
|
||||
data={"username": self.config.proxmox_user, "password": self.config.proxmox_password},
|
||||
timeout=self.config.request_timeout_seconds,
|
||||
verify=self.config.proxmox_verify_tls,
|
||||
)
|
||||
token_resp.raise_for_status()
|
||||
payload = token_resp.json().get("data", {})
|
||||
cookies = {"PVEAuthCookie": payload.get("ticket", "")}
|
||||
csrf = payload.get("CSRFPreventionToken")
|
||||
if csrf:
|
||||
headers["CSRFPreventionToken"] = csrf
|
||||
|
||||
nodes_resp = requests.get(
|
||||
f"{endpoint}/api2/json/nodes",
|
||||
headers=headers,
|
||||
cookies=cookies,
|
||||
timeout=self.config.request_timeout_seconds,
|
||||
verify=self.config.proxmox_verify_tls,
|
||||
)
|
||||
nodes_resp.raise_for_status()
|
||||
nodes = nodes_resp.json().get("data", [])
|
||||
|
||||
assets: List[Dict] = []
|
||||
for node in nodes:
|
||||
node_name = node.get("node", "unknown-node")
|
||||
|
||||
qemu_resp = requests.get(
|
||||
f"{endpoint}/api2/json/nodes/{node_name}/qemu",
|
||||
headers=headers,
|
||||
cookies=cookies,
|
||||
timeout=self.config.request_timeout_seconds,
|
||||
verify=self.config.proxmox_verify_tls,
|
||||
)
|
||||
qemu_resp.raise_for_status()
|
||||
|
||||
for vm in qemu_resp.json().get("data", []):
|
||||
vm_id = str(vm.get("vmid", ""))
|
||||
vm_ips = self._collect_qemu_ips(endpoint, node_name, vm_id, headers, cookies)
|
||||
assets.append(
|
||||
{
|
||||
"asset_type": "vm",
|
||||
"external_id": str(vm.get("vmid", vm.get("name", "unknown-vm"))),
|
||||
"name": vm.get("name") or f"vm-{vm.get('vmid', 'unknown')}",
|
||||
"hostname": vm.get("name"),
|
||||
"status": vm.get("status", "unknown"),
|
||||
"ip_addresses": vm_ips,
|
||||
"node": node_name,
|
||||
"cpu": vm.get("cpus"),
|
||||
"memory_mb": (vm.get("maxmem", 0) or 0) / (1024 * 1024),
|
||||
"disk_gb": (vm.get("maxdisk", 0) or 0) / (1024 * 1024 * 1024),
|
||||
"metadata": {
|
||||
"endpoint": endpoint,
|
||||
"uptime_seconds": vm.get("uptime"),
|
||||
},
|
||||
}
|
||||
)
|
||||
|
||||
lxc_resp = requests.get(
|
||||
f"{endpoint}/api2/json/nodes/{node_name}/lxc",
|
||||
headers=headers,
|
||||
cookies=cookies,
|
||||
timeout=self.config.request_timeout_seconds,
|
||||
verify=self.config.proxmox_verify_tls,
|
||||
)
|
||||
lxc_resp.raise_for_status()
|
||||
|
||||
for lxc in lxc_resp.json().get("data", []):
|
||||
lxc_id = str(lxc.get("vmid", ""))
|
||||
lxc_ips = self._collect_lxc_ips(endpoint, node_name, lxc_id, headers, cookies)
|
||||
assets.append(
|
||||
{
|
||||
"asset_type": "lxc",
|
||||
"external_id": str(lxc.get("vmid", lxc.get("name", "unknown-lxc"))),
|
||||
"name": lxc.get("name") or f"lxc-{lxc.get('vmid', 'unknown')}",
|
||||
"hostname": lxc.get("name"),
|
||||
"status": lxc.get("status", "unknown"),
|
||||
"ip_addresses": lxc_ips,
|
||||
"node": node_name,
|
||||
"cpu": lxc.get("cpus"),
|
||||
"memory_mb": (lxc.get("maxmem", 0) or 0) / (1024 * 1024),
|
||||
"disk_gb": (lxc.get("maxdisk", 0) or 0) / (1024 * 1024 * 1024),
|
||||
"metadata": {
|
||||
"endpoint": endpoint,
|
||||
"uptime_seconds": lxc.get("uptime"),
|
||||
},
|
||||
}
|
||||
)
|
||||
|
||||
return assets
|
||||
|
||||
def _collect_qemu_ips(self, endpoint: str, node_name: str, vm_id: str, headers: Dict, cookies: Dict | None) -> List[str]:
|
||||
ips: List[str] = []
|
||||
|
||||
# Guest agent provides the most accurate runtime IP list when enabled.
|
||||
try:
|
||||
agent_resp = requests.get(
|
||||
f"{endpoint}/api2/json/nodes/{node_name}/qemu/{vm_id}/agent/network-get-interfaces",
|
||||
headers=headers,
|
||||
cookies=cookies,
|
||||
timeout=self.config.request_timeout_seconds,
|
||||
verify=self.config.proxmox_verify_tls,
|
||||
)
|
||||
if agent_resp.ok:
|
||||
data = agent_resp.json().get("data", {})
|
||||
interfaces = data.get("result", []) if isinstance(data, dict) else []
|
||||
for interface in interfaces:
|
||||
for addr in interface.get("ip-addresses", []) or []:
|
||||
value = addr.get("ip-address")
|
||||
if value:
|
||||
ips.append(value)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
ips.extend(self._collect_config_ips(endpoint, node_name, "qemu", vm_id, headers, cookies))
|
||||
return self._normalize_ips(ips)
|
||||
|
||||
def _collect_lxc_ips(self, endpoint: str, node_name: str, vm_id: str, headers: Dict, cookies: Dict | None) -> List[str]:
|
||||
ips: List[str] = []
|
||||
|
||||
# Runtime interfaces capture DHCP-assigned addresses that are not present in static config.
|
||||
try:
|
||||
iface_resp = requests.get(
|
||||
f"{endpoint}/api2/json/nodes/{node_name}/lxc/{vm_id}/interfaces",
|
||||
headers=headers,
|
||||
cookies=cookies,
|
||||
timeout=self.config.request_timeout_seconds,
|
||||
verify=self.config.proxmox_verify_tls,
|
||||
)
|
||||
if iface_resp.ok:
|
||||
interfaces = iface_resp.json().get("data", [])
|
||||
if isinstance(interfaces, list):
|
||||
for interface in interfaces:
|
||||
inet_values = interface.get("inet")
|
||||
if isinstance(inet_values, list):
|
||||
ips.extend(inet_values)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
ips.extend(self._collect_config_ips(endpoint, node_name, "lxc", vm_id, headers, cookies))
|
||||
return self._normalize_ips(ips)
|
||||
|
||||
    def _collect_config_ips(
        self,
        endpoint: str,
        node_name: str,
        vm_type: str,
        vm_id: str,
        headers: Dict,
        cookies: Dict | None,
    ) -> List[str]:
        try:
            config_resp = requests.get(
                f"{endpoint}/api2/json/nodes/{node_name}/{vm_type}/{vm_id}/config",
                headers=headers,
                cookies=cookies,
                timeout=self.config.request_timeout_seconds,
                verify=self.config.proxmox_verify_tls,
            )
            if not config_resp.ok:
                return []
            config = config_resp.json().get("data", {})
        except Exception:
            return []

        values = []
        for key, value in config.items():
            if not isinstance(value, str):
                continue
            if key.startswith("net") or key in {"ipconfig0", "ipconfig1", "ipconfig2", "ipconfig3"}:
                values.append(value)

        ips: List[str] = []
        for value in values:
            ips.extend(re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}(?:/\d{1,2})?\b", value))
        return ips

    @staticmethod
    def _normalize_ips(values: List[str]) -> List[str]:
        normalized: List[str] = []
        seen = set()
        for value in values:
            candidate = value.strip()
            if "/" in candidate:
                candidate = candidate.split("/", 1)[0]
            try:
                ip_obj = ipaddress.ip_address(candidate)
            except ValueError:
                continue
            if ip_obj.is_loopback:
                continue
            text = str(ip_obj)
            if text not in seen:
                seen.add(text)
                normalized.append(text)
        return normalized
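As a standalone sketch of the normalization step above (same semantics, pulled out of the class for illustration): strip any CIDR suffix, validate each candidate, drop loopback addresses, and deduplicate while preserving order.

```python
import ipaddress

def normalize_ips(values):
    # Strip any /prefix, validate, drop loopback, dedupe preserving order.
    normalized, seen = [], set()
    for value in values:
        candidate = value.strip().split("/", 1)[0]
        try:
            ip_obj = ipaddress.ip_address(candidate)
        except ValueError:
            continue
        if ip_obj.is_loopback:
            continue
        text = str(ip_obj)
        if text not in seen:
            seen.add(text)
            normalized.append(text)
    return normalized

print(normalize_ips(["10.0.0.5/24", "127.0.0.1", "10.0.0.5", "not-an-ip"]))  # ['10.0.0.5']
```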

77  config.py  Normal file
@@ -0,0 +1,77 @@
import os
from dataclasses import dataclass
from typing import List


def _split_csv(value: str) -> List[str]:
    return [item.strip() for item in value.split(",") if item.strip()]


@dataclass(frozen=True)
class AppConfig:
    app_name: str
    database_path: str
    poll_interval_seconds: int
    scheduler_enabled: bool
    admin_token: str
    request_timeout_seconds: int

    proxmox_enabled: bool
    proxmox_endpoints: List[str]
    proxmox_token_id: str
    proxmox_token_secret: str
    proxmox_user: str
    proxmox_password: str
    proxmox_verify_tls: bool

    docker_enabled: bool
    docker_hosts: List[str]
    docker_bearer_token: str
    docker_agent_endpoints: List[str]
    docker_agent_token: str

    coolify_enabled: bool
    coolify_endpoints: List[str]
    coolify_api_token: str


def _bool_env(name: str, default: bool) -> bool:
    value = os.getenv(name)
    if value is None:
        return default
    return value.strip().lower() in {"1", "true", "yes", "on"}


def load_config() -> AppConfig:
    base_dir = os.getenv("BASE_DIR", os.getcwd())
    os.makedirs(base_dir, exist_ok=True)

    docker_hosts = _split_csv(os.getenv("DOCKER_HOSTS", ""))
    if not docker_hosts:
        single_docker_host = os.getenv("DOCKER_HOST", "").strip()
        if single_docker_host:
            docker_hosts = [single_docker_host]

    return AppConfig(
        app_name=os.getenv("APP_NAME", "Home Lab Inventory"),
        database_path=os.getenv("DATABASE_PATH", os.path.join(base_dir, "inventory.db")),
        poll_interval_seconds=int(os.getenv("POLL_INTERVAL_SECONDS", "300")),
        scheduler_enabled=_bool_env("SCHEDULER_ENABLED", True),
        admin_token=os.getenv("ADMIN_TOKEN", ""),
        request_timeout_seconds=int(os.getenv("REQUEST_TIMEOUT_SECONDS", "10")),
        proxmox_enabled=_bool_env("PROXMOX_ENABLED", True),
        proxmox_endpoints=_split_csv(os.getenv("PROXMOX_ENDPOINTS", "")),
        proxmox_token_id=os.getenv("PROXMOX_TOKEN_ID", ""),
        proxmox_token_secret=os.getenv("PROXMOX_TOKEN_SECRET", ""),
        proxmox_user=os.getenv("PROXMOX_USER", ""),
        proxmox_password=os.getenv("PROXMOX_PASSWORD", ""),
        proxmox_verify_tls=_bool_env("PROXMOX_VERIFY_TLS", False),
        docker_enabled=_bool_env("DOCKER_ENABLED", True),
        docker_hosts=docker_hosts,
        docker_bearer_token=os.getenv("DOCKER_BEARER_TOKEN", ""),
        docker_agent_endpoints=_split_csv(os.getenv("DOCKER_AGENT_ENDPOINTS", "")),
        docker_agent_token=os.getenv("DOCKER_AGENT_TOKEN", ""),
        coolify_enabled=_bool_env("COOLIFY_ENABLED", True),
        coolify_endpoints=_split_csv(os.getenv("COOLIFY_ENDPOINTS", "")),
        coolify_api_token=os.getenv("COOLIFY_API_TOKEN", ""),
    )

305  database.py  Normal file
@@ -0,0 +1,305 @@
import json
import sqlite3
from contextlib import closing
from datetime import datetime, timezone
from typing import Dict, Iterable, List, Optional


def utc_now() -> str:
    return datetime.now(timezone.utc).isoformat()


class InventoryStore:
    def __init__(self, database_path: str):
        self.database_path = database_path

    def _connect(self) -> sqlite3.Connection:
        conn = sqlite3.connect(self.database_path, check_same_thread=False)
        conn.row_factory = sqlite3.Row
        return conn

    def init(self) -> None:
        with closing(self._connect()) as conn:
            conn.executescript(
                """
                PRAGMA journal_mode=WAL;

                CREATE TABLE IF NOT EXISTS sources (
                    name TEXT PRIMARY KEY,
                    enabled INTEGER NOT NULL,
                    last_status TEXT,
                    last_error TEXT,
                    last_success TEXT,
                    updated_at TEXT NOT NULL
                );

                CREATE TABLE IF NOT EXISTS collection_runs (
                    id INTEGER PRIMARY KEY AUTOINCREMENT,
                    started_at TEXT NOT NULL,
                    finished_at TEXT,
                    status TEXT NOT NULL,
                    error_summary TEXT
                );

                CREATE TABLE IF NOT EXISTS assets (
                    id INTEGER PRIMARY KEY AUTOINCREMENT,
                    source TEXT NOT NULL,
                    asset_type TEXT NOT NULL,
                    external_id TEXT NOT NULL,
                    name TEXT NOT NULL,
                    hostname TEXT,
                    status TEXT,
                    ip_addresses TEXT,
                    subnet TEXT,
                    public_ip TEXT,
                    node TEXT,
                    parent_id TEXT,
                    cpu REAL,
                    memory_mb REAL,
                    disk_gb REAL,
                    metadata_json TEXT,
                    last_seen TEXT NOT NULL,
                    updated_at TEXT NOT NULL,
                    UNIQUE(source, asset_type, external_id)
                );

                CREATE INDEX IF NOT EXISTS idx_assets_source ON assets(source);
                CREATE INDEX IF NOT EXISTS idx_assets_status ON assets(status);
                CREATE INDEX IF NOT EXISTS idx_assets_subnet ON assets(subnet);
                """
            )
            conn.commit()

    def seed_sources(self, source_states: Dict[str, bool]) -> None:
        now = utc_now()
        with closing(self._connect()) as conn:
            valid_sources = set(source_states.keys())
            for source, enabled in source_states.items():
                conn.execute(
                    """
                    INSERT INTO sources(name, enabled, updated_at)
                    VALUES(?, ?, ?)
                    ON CONFLICT(name) DO UPDATE SET
                        enabled=excluded.enabled,
                        updated_at=excluded.updated_at
                    """,
                    (source, int(enabled), now),
                )

            if valid_sources:
                placeholders = ",".join(["?"] * len(valid_sources))
                conn.execute(
                    f"DELETE FROM sources WHERE name NOT IN ({placeholders})",
                    tuple(valid_sources),
                )

            conn.commit()

    def run_start(self) -> int:
        with closing(self._connect()) as conn:
            cursor = conn.execute(
                "INSERT INTO collection_runs(started_at, status) VALUES(?, ?)",
                (utc_now(), "running"),
            )
            conn.commit()
            row_id = cursor.lastrowid
            if row_id is None:
                raise RuntimeError("Failed to create collection run")
            return int(row_id)

    def run_finish(self, run_id: int, status: str, error_summary: str = "") -> None:
        with closing(self._connect()) as conn:
            conn.execute(
                """
                UPDATE collection_runs
                SET finished_at=?, status=?, error_summary=?
                WHERE id=?
                """,
                (utc_now(), status, error_summary.strip(), run_id),
            )
            conn.commit()

    def set_source_status(self, source: str, status: str, error: str = "") -> None:
        now = utc_now()
        success_ts = now if status == "ok" else None
        with closing(self._connect()) as conn:
            conn.execute(
                """
                UPDATE sources
                SET last_status=?,
                    last_error=?,
                    last_success=COALESCE(?, last_success),
                    updated_at=?
                WHERE name=?
                """,
                (status, error.strip(), success_ts, now, source),
            )
            conn.commit()

    def upsert_assets(self, source: str, assets: Iterable[Dict]) -> None:
        now = utc_now()
        with closing(self._connect()) as conn:
            for asset in assets:
                conn.execute(
                    """
                    INSERT INTO assets(
                        source, asset_type, external_id, name, hostname, status,
                        ip_addresses, subnet, public_ip, node, parent_id,
                        cpu, memory_mb, disk_gb, metadata_json, last_seen, updated_at
                    ) VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
                    ON CONFLICT(source, asset_type, external_id)
                    DO UPDATE SET
                        name=excluded.name,
                        hostname=excluded.hostname,
                        status=excluded.status,
                        ip_addresses=excluded.ip_addresses,
                        subnet=excluded.subnet,
                        public_ip=excluded.public_ip,
                        node=excluded.node,
                        parent_id=excluded.parent_id,
                        cpu=excluded.cpu,
                        memory_mb=excluded.memory_mb,
                        disk_gb=excluded.disk_gb,
                        metadata_json=excluded.metadata_json,
                        last_seen=excluded.last_seen,
                        updated_at=excluded.updated_at
                    """,
                    (
                        source,
                        asset.get("asset_type", "unknown"),
                        str(asset.get("external_id", asset.get("name", "unknown"))),
                        asset.get("name", "unknown"),
                        asset.get("hostname"),
                        asset.get("status"),
                        json.dumps(asset.get("ip_addresses", [])),
                        asset.get("subnet"),
                        asset.get("public_ip"),
                        asset.get("node"),
                        asset.get("parent_id"),
                        asset.get("cpu"),
                        asset.get("memory_mb"),
                        asset.get("disk_gb"),
                        json.dumps(asset.get("metadata", {})),
                        now,
                        now,
                    ),
                )
            conn.commit()

    def list_assets(self) -> List[Dict]:
        with closing(self._connect()) as conn:
            rows = conn.execute(
                """
                SELECT source, asset_type, external_id, name, hostname, status,
                       ip_addresses, subnet, public_ip, node, parent_id,
                       cpu, memory_mb, disk_gb, metadata_json, last_seen, updated_at
                FROM assets
                ORDER BY source, asset_type, name
                """
            ).fetchall()
            return [self._row_to_asset(row) for row in rows]

    def source_health(self) -> List[Dict]:
        with closing(self._connect()) as conn:
            rows = conn.execute(
                """
                SELECT name, enabled, last_status, last_error, last_success, updated_at
                FROM sources
                ORDER BY name
                """
            ).fetchall()
            return [dict(row) for row in rows]

    def last_run(self) -> Optional[Dict]:
        with closing(self._connect()) as conn:
            row = conn.execute(
                """
                SELECT id, started_at, finished_at, status, error_summary
                FROM collection_runs
                ORDER BY id DESC
                LIMIT 1
                """
            ).fetchone()
            return dict(row) if row else None

    def summary(self) -> Dict:
        with closing(self._connect()) as conn:
            totals = conn.execute(
                """
                SELECT
                    COUNT(*) AS total_assets,
                    SUM(CASE WHEN status IN ('running', 'online', 'healthy', 'up') THEN 1 ELSE 0 END) AS online_assets,
                    SUM(CASE WHEN status IN ('stopped', 'offline', 'down', 'error', 'unhealthy') THEN 1 ELSE 0 END) AS offline_assets,
                    COUNT(DISTINCT source) AS source_count,
                    COUNT(DISTINCT subnet) AS subnet_count
                FROM assets
                """
            ).fetchone()

            by_type = conn.execute(
                """
                SELECT asset_type, COUNT(*) AS count
                FROM assets
                GROUP BY asset_type
                ORDER BY count DESC
                """
            ).fetchall()

            return {
                "total_assets": int(totals["total_assets"] or 0),
                "online_assets": int(totals["online_assets"] or 0),
                "offline_assets": int(totals["offline_assets"] or 0),
                "source_count": int(totals["source_count"] or 0),
                "subnet_count": int(totals["subnet_count"] or 0),
                "asset_breakdown": [{"asset_type": row["asset_type"], "count": row["count"]} for row in by_type],
            }

    def topology(self) -> Dict:
        with closing(self._connect()) as conn:
            rows = conn.execute(
                """
                SELECT
                    COALESCE(subnet, 'unassigned') AS subnet,
                    COUNT(*) AS asset_count,
                    COUNT(DISTINCT source) AS source_count,
                    GROUP_CONCAT(DISTINCT public_ip) AS public_ips
                FROM assets
                GROUP BY COALESCE(subnet, 'unassigned')
                ORDER BY subnet
                """
            ).fetchall()

        networks = []
        for row in rows:
            ips = [value for value in (row["public_ips"] or "").split(",") if value]
            networks.append(
                {
                    "subnet": row["subnet"],
                    "asset_count": row["asset_count"],
                    "source_count": row["source_count"],
                    "public_ips": ips,
                }
            )
        return {"networks": networks}

    @staticmethod
    def _row_to_asset(row: sqlite3.Row) -> Dict:
        return {
            "source": row["source"],
            "asset_type": row["asset_type"],
            "external_id": row["external_id"],
            "name": row["name"],
            "hostname": row["hostname"],
            "status": row["status"],
            "ip_addresses": json.loads(row["ip_addresses"] or "[]"),
            "subnet": row["subnet"],
            "public_ip": row["public_ip"],
            "node": row["node"],
            "parent_id": row["parent_id"],
            "cpu": row["cpu"],
            "memory_mb": row["memory_mb"],
            "disk_gb": row["disk_gb"],
            "metadata": json.loads(row["metadata_json"] or "{}"),
            "last_seen": row["last_seen"],
            "updated_at": row["updated_at"],
        }
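The `INSERT ... ON CONFLICT ... DO UPDATE` pattern used by `seed_sources` and `upsert_assets` can be exercised in isolation against an in-memory database; a minimal sketch with a hypothetical two-column key, not the real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE assets (source TEXT, external_id TEXT, name TEXT, UNIQUE(source, external_id))"
)
sql = """
INSERT INTO assets(source, external_id, name) VALUES(?, ?, ?)
ON CONFLICT(source, external_id) DO UPDATE SET name=excluded.name
"""
conn.execute(sql, ("proxmox", "101", "vm-old"))
conn.execute(sql, ("proxmox", "101", "vm-new"))  # same key: row is updated, not duplicated
rows = conn.execute("SELECT name FROM assets").fetchall()
print(rows)  # [('vm-new',)]
```

The `excluded.` prefix refers to the row that would have been inserted, which is what keeps each collector run idempotent.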

9  docker_agent/Dockerfile  Normal file
@@ -0,0 +1,9 @@
FROM python:3.13-alpine

WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt
COPY app.py /app/app.py

EXPOSE 9090
ENTRYPOINT ["python", "/app/app.py"]

290  docker_agent/app.py  Normal file
@@ -0,0 +1,290 @@
import importlib
import os
import re
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, List, Set

from flask import Flask, Request, jsonify, request

app = Flask(__name__)


def _authorized(req: Request) -> bool:
    token = os.getenv("AGENT_TOKEN", "").strip()
    if not token:
        return True
    header = req.headers.get("Authorization", "")
    expected = f"Bearer {token}"
    return header == expected
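The agent's auth check is an exact Bearer-header match, and auth is disabled entirely when `AGENT_TOKEN` is empty. The comparison logic, sketched without the Flask request object:

```python
def authorized(header_value, token):
    # Mirror of _authorized: empty token disables auth; otherwise exact Bearer match.
    if not token:
        return True
    return header_value == f"Bearer {token}"

print(authorized("Bearer change-me", "change-me"))  # True
print(authorized("Bearer wrong", "change-me"))      # False
print(authorized("", ""))                           # True (auth disabled)
```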


def _docker_client():
    docker_sdk = importlib.import_module("docker")
    docker_host = os.getenv("DOCKER_HOST", "unix:///var/run/docker.sock")
    return docker_sdk.DockerClient(base_url=docker_host)


def _container_to_json(container: Any) -> Dict[str, Any]:
    attrs = container.attrs or {}
    network_settings = attrs.get("NetworkSettings", {})
    networks = network_settings.get("Networks", {}) or {}

    ip_addresses: List[str] = []
    for network in networks.values():
        ip = network.get("IPAddress")
        if ip:
            ip_addresses.append(ip)

    ports = network_settings.get("Ports", {}) or {}
    labels = attrs.get("Config", {}).get("Labels", {}) or {}
    state = attrs.get("State", {}).get("Status", "unknown")

    image_obj = container.image
    image_name = "unknown"
    if image_obj is not None:
        image_tags = image_obj.tags or []
        image_name = image_tags[0] if image_tags else image_obj.id

    return {
        "id": container.id,
        "name": container.name,
        "state": state,
        "image": image_name,
        "ports": ports,
        "networks": list(networks.keys()),
        "ip_addresses": ip_addresses,
        "labels": labels,
    }


def _nginx_root() -> Path:
    return Path(os.getenv("NGINX_CONFIG_DIR", "/mnt/nginx"))


def _parse_nginx_file(content: str) -> Dict[str, Any]:
    server_names = re.findall(r"server_name\s+([^;]+);", content)
    listens = re.findall(r"listen\s+([^;]+);", content)
    proxy_pass_targets = re.findall(r"proxy_pass\s+([^;]+);", content)
    upstream_blocks: List[str] = []
    includes = re.findall(r"include\s+([^;]+);", content)
    set_matches = re.findall(r"set\s+\$([A-Za-z0-9_]+)\s+([^;]+);", content)
    upstream_servers: List[str] = []

    for match in re.finditer(r"\bupstream\b\s+([^\s{]+)\s*\{(.*?)\}", content, flags=re.DOTALL):
        upstream_blocks.append(match.group(1).strip())
        block_body = match.group(2)
        for upstream_server in re.findall(r"server\s+([^;]+);", block_body):
            upstream_servers.append(upstream_server.strip())

    set_variables: Dict[str, str] = {}
    for var_name, value in set_matches:
        candidate = value.strip().strip("\"'")
        set_variables[var_name] = candidate
        if "http://" in candidate or "https://" in candidate or ":" in candidate:
            proxy_pass_targets.append(candidate)

    split_values = []
    for value in server_names:
        split_values.extend([name.strip() for name in value.split() if name.strip()])

    inferred_targets: List[str] = []
    forward_host = set_variables.get("forward_host", "").strip()
    forward_port = set_variables.get("forward_port", "").strip()
    forward_scheme = set_variables.get("forward_scheme", "http").strip() or "http"

    if forward_host:
        if forward_port:
            inferred_targets.append(f"{forward_scheme}://{forward_host}:{forward_port}")
        else:
            inferred_targets.append(f"{forward_scheme}://{forward_host}")

    server_var = set_variables.get("server", "").strip()
    port_var = set_variables.get("port", "").strip()
    if server_var and not server_var.startswith("$"):
        if port_var and not port_var.startswith("$"):
            inferred_targets.append(f"{forward_scheme}://{server_var}:{port_var}")
        else:
            inferred_targets.append(f"{forward_scheme}://{server_var}")

    return {
        "server_names": sorted(set(split_values)),
        "listens": sorted(set(value.strip() for value in listens if value.strip())),
        "proxy_pass": sorted(set(value.strip() for value in proxy_pass_targets if value.strip())),
        "upstreams": sorted(set(value.strip() for value in upstream_blocks if value.strip())),
        "upstream_servers": sorted(set(value for value in upstream_servers if value)),
        "inferred_targets": sorted(set(value for value in inferred_targets if value)),
        "includes": sorted(set(value.strip() for value in includes if value.strip())),
        "set_variables": set_variables,
    }
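The regex extraction in `_parse_nginx_file` can be illustrated on a toy config (the hostnames and addresses below are made up for the example):

```python
import re

config = """
server {
    listen 443 ssl;
    server_name app.example.com www.example.com;
    location / {
        proxy_pass http://10.0.0.5:8080;
    }
}
"""

# Same patterns as the agent: capture everything between the directive and the semicolon.
server_names = re.findall(r"server_name\s+([^;]+);", config)
proxy_pass = re.findall(r"proxy_pass\s+([^;]+);", config)
print(server_names)  # ['app.example.com www.example.com']
print(proxy_pass)    # ['http://10.0.0.5:8080']
```

Note the `server_name` capture keeps multiple names in one string; the agent splits them on whitespace afterwards, which is why `split_values` exists.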


def _expand_globbed_includes(root: Path, include_value: str) -> List[Path]:
    candidate = include_value.strip().strip("\"'")
    if not candidate:
        return []

    if candidate.startswith("/"):
        include_path = root.joinpath(candidate.lstrip("/"))
    else:
        include_path = root.joinpath(candidate)

    matches = [path for path in root.glob(str(include_path.relative_to(root))) if path.is_file()]
    return sorted(set(matches))


def _resolve_value(raw: str, variables: Dict[str, str]) -> str:
    value = raw.strip().strip("\"'")

    for _ in range(5):
        replaced = False
        for var_name, var_value in variables.items():
            token = f"${var_name}"
            if token in value:
                value = value.replace(token, var_value)
                replaced = True
        if not replaced:
            break

    return value
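`_resolve_value` does up to five substitution passes so that variables whose values themselves contain `$vars` still resolve. A standalone sketch (variable names mirror the `set $forward_host ...` convention seen in proxy configs; the values are invented):

```python
def resolve_value(raw, variables):
    # Mirror of _resolve_value: substitute $vars, up to 5 passes for chained references.
    value = raw.strip().strip("\"'")
    for _ in range(5):
        replaced = False
        for var_name, var_value in variables.items():
            token = f"${var_name}"
            if token in value:
                value = value.replace(token, var_value)
                replaced = True
        if not replaced:
            break
    return value

print(resolve_value(
    "http://$forward_host:$forward_port",
    {"forward_host": "10.0.0.9", "forward_port": "3000"},
))  # http://10.0.0.9:3000
```

One known limitation of plain string replacement: a variable whose name is a prefix of another (e.g. `$host` and `$hostname`) can substitute into the wrong token, so the pass cap also acts as a safety valve.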


def _collect_proxy_targets(
    file_path: Path,
    parsed_map: Dict[Path, Dict[str, Any]],
    root: Path,
    inherited_variables: Dict[str, str],
    visited: Set[Path],
) -> List[str]:
    if file_path in visited:
        return []
    visited.add(file_path)

    parsed = parsed_map.get(file_path)
    if not parsed:
        return []

    variables = dict(inherited_variables)
    variables.update(parsed.get("set_variables", {}))

    targets: List[str] = []
    for proxy_value in parsed.get("proxy_pass", []):
        resolved = _resolve_value(proxy_value, variables)
        if resolved:
            targets.append(resolved)

    for include_value in parsed.get("includes", []):
        for include_file in _expand_globbed_includes(root, include_value):
            targets.extend(_collect_proxy_targets(include_file, parsed_map, root, variables, visited))

    return targets


def _scan_nginx_configs() -> List[Dict[str, Any]]:
    root = _nginx_root()
    if not root.exists() or not root.is_dir():
        return []

    records: List[Dict[str, Any]] = []
    discovered_files: Set[Path] = set()

    for pattern in ("*.conf", "*.vhost", "*.inc"):
        for config_file in root.rglob(pattern):
            discovered_files.add(config_file)

    for config_file in root.rglob("*"):
        if not config_file.is_file():
            continue
        if config_file.suffix:
            continue
        discovered_files.add(config_file)

    parsed_map: Dict[Path, Dict[str, Any]] = {}

    for config_file in sorted(discovered_files):
        try:
            content = config_file.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue

        parsed = _parse_nginx_file(content)
        parsed_map[config_file] = parsed

    for config_file in sorted(parsed_map.keys()):
        parsed = parsed_map[config_file]
        resolved_targets = sorted(
            set(
                target
                for target in _collect_proxy_targets(
                    config_file,
                    parsed_map,
                    root,
                    parsed.get("set_variables", {}),
                    set(),
                )
                if target
            )
        )

        records.append(
            {
                "path": str(config_file.relative_to(root)),
                "server_names": parsed["server_names"],
                "listens": parsed["listens"],
                "proxy_pass": parsed["proxy_pass"],
                "proxy_pass_resolved": resolved_targets,
                "upstreams": parsed["upstreams"],
                "upstream_servers": parsed["upstream_servers"],
                "inferred_targets": parsed["inferred_targets"],
            }
        )

    return records


@app.route("/health", methods=["GET"])
def health() -> Any:
    return jsonify(
        {
            "ok": True,
            "service": "docker-inventory-agent",
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
    )


@app.route("/api/v1/containers", methods=["GET"])
def containers() -> Any:
    if not _authorized(request):
        return jsonify({"error": "Unauthorized"}), 401

    client = _docker_client()
    try:
        data = [_container_to_json(container) for container in client.containers.list(all=True)]
    finally:
        client.close()

    return jsonify({"containers": data})


@app.route("/api/v1/nginx-configs", methods=["GET"])
def nginx_configs() -> Any:
    if not _authorized(request):
        return jsonify({"error": "Unauthorized"}), 401

    root = _nginx_root()
    configs = _scan_nginx_configs()
    return jsonify(
        {
            "config_root": str(root),
            "exists": root.exists() and root.is_dir(),
            "count": len(configs),
            "configs": configs,
        }
    )


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.getenv("PORT", "9090")))

15  docker_agent/docker-compose.yml  Normal file
@@ -0,0 +1,15 @@
services:
  docker-inventory-agent:
    build: .
    container_name: docker-inventory-agent
    restart: unless-stopped
    environment:
      AGENT_TOKEN: change-me
      DOCKER_HOST: unix:///var/run/docker.sock
      NGINX_CONFIG_DIR: /mnt/nginx
      PORT: "9090"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /etc/nginx:/mnt/nginx:ro
    ports:
      - "9090:9090"

2  docker_agent/requirements.txt  Normal file
@@ -0,0 +1,2 @@
flask>=3.1.2
docker>=7.1.0

BIN  inventory.db  Normal file
Binary file not shown.
@@ -1,10 +1,11 @@
[project]
name = "python-webserver-template"
name = "inventory"
version = "0.1.0"
description = "Add your description here"
description = "Server inventory system"
readme = "README.md"
requires-python = ">=3.13"
dependencies = [
    "docker>=7.1.0",
    "flask>=3.1.2",
    "gunicorn>=23.0.0",
    "python-dotenv>=1.2.1",

284  server.py
@@ -1,20 +1,33 @@
from flask import (
    Flask,
    make_response,
    request,
    jsonify,
    render_template,
    send_from_directory,
    send_file,
)
import logging
import os
import requests
from urllib.parse import urlparse
from datetime import datetime

import dotenv
import requests
from flask import Flask, Request, jsonify, make_response, render_template, request, send_file, send_from_directory

from collectors import InventoryCollectorOrchestrator
from config import load_config
from database import InventoryStore
from services import CollectionScheduler, should_autostart_scheduler

dotenv.load_dotenv()

logging.basicConfig(level=os.getenv("LOG_LEVEL", "INFO"), format="%(asctime)s %(levelname)s %(name)s: %(message)s")
LOGGER = logging.getLogger(__name__)

app = Flask(__name__)
config = load_config()
store = InventoryStore(config.database_path)
store.init()
orchestrator = InventoryCollectorOrchestrator(config, store)
scheduler = CollectionScheduler(config, orchestrator)
if should_autostart_scheduler():
    scheduler.start()
if os.getenv("INITIAL_COLLECT_ON_STARTUP", "true").strip().lower() in {"1", "true", "yes", "on"}:
    LOGGER.info("Running initial inventory collection")
    orchestrator.collect_once()


def find(name, path):
@@ -71,13 +84,8 @@ def wellknown(path):
# region Main routes
@app.route("/")
def index():
    # Print the IP address of the requester
    print(f"Request from IP: {request.remote_addr}")
    # And the headers
    print(f"Request headers: {request.headers}")
    # Get current time in the format "dd MMM YYYY hh:mm AM/PM"
    current_datetime = datetime.now().strftime("%d %b %Y %I:%M %p")
    return render_template("index.html", datetime=current_datetime)
    return render_template("index.html", datetime=current_datetime, app_name=config.app_name)


@app.route("/<path:path>")
@@ -107,25 +115,239 @@ def catch_all(path: str):

# region API routes

api_requests = 0


def _authorized(req: Request) -> bool:
    if not config.admin_token:
        return True
    return req.headers.get("X-Admin-Token", "") == config.admin_token


@app.route("/api/v1/summary", methods=["GET"])
def api_summary():
    payload = {
        "app_name": config.app_name,
        "summary": store.summary(),
        "last_run": store.last_run(),
        "timestamp": datetime.now().isoformat(),
    }
    return jsonify(payload)


@app.route("/api/v1/topology", methods=["GET"])
def api_topology():
    return jsonify(store.topology())


@app.route("/api/v1/assets", methods=["GET"])
def api_assets():
    assets = store.list_assets()
    assets = _link_assets_to_proxmox(assets)
    source = request.args.get("source")
    status = request.args.get("status")
    search = (request.args.get("search") or "").strip().lower()

    filtered = []
    for asset in assets:
        if source and asset.get("source") != source:
            continue
        if status and (asset.get("status") or "") != status:
            continue
        if search:
            haystack = " ".join(
                [
                    asset.get("name") or "",
                    asset.get("hostname") or "",
                    asset.get("asset_type") or "",
                    asset.get("source") or "",
                    asset.get("subnet") or "",
                    asset.get("public_ip") or "",
                ]
            ).lower()
            if search not in haystack:
                continue
        filtered.append(asset)

    return jsonify({"count": len(filtered), "assets": filtered})


def _extract_target_hosts(asset: dict) -> list[str]:
    metadata = asset.get("metadata") or {}
    raw_values: list[str] = []

    for field in ("proxy_pass_resolved", "inferred_targets", "proxy_pass", "upstream_servers"):
        values = metadata.get(field) or []
        if isinstance(values, list):
            raw_values.extend([str(value) for value in values])

    if asset.get("hostname"):
        raw_values.append(str(asset.get("hostname")))

    hosts: list[str] = []
    for raw in raw_values:
        parts = [part.strip() for part in raw.split(",") if part.strip()]
        for part in parts:
            parsed_host = ""
            if part.startswith("http://") or part.startswith("https://"):
                parsed_host = urlparse(part).hostname or ""
            else:
                candidate = part.split("/", 1)[0]
                if ":" in candidate:
                    candidate = candidate.split(":", 1)[0]
                parsed_host = candidate.strip()
            if parsed_host:
                hosts.append(parsed_host.lower())
    return hosts
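The per-target logic inside `_extract_target_hosts` normalizes each value down to a bare lowercase host, whether it is a full URL or a `host:port` pair. A standalone sketch (`media.local` and the addresses are example inputs, not real data):

```python
from urllib.parse import urlparse

def extract_host(part):
    # URLs go through urlparse; bare host:port/path values are split manually.
    if part.startswith("http://") or part.startswith("https://"):
        return (urlparse(part).hostname or "").lower()
    candidate = part.split("/", 1)[0]
    if ":" in candidate:
        candidate = candidate.split(":", 1)[0]
    return candidate.strip().lower()

print(extract_host("https://media.local:8096/web"))  # media.local
print(extract_host("10.0.0.12:8080"))                # 10.0.0.12
```

Lowercasing matters here because these hosts are used as dictionary keys when matching against Proxmox names and IPs in `_link_assets_to_proxmox`.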
|
||||
|
||||
|
||||
def _link_assets_to_proxmox(assets: list[dict]) -> list[dict]:
|
||||
proxmox_assets = [
|
||||
asset for asset in assets if asset.get("source") == "proxmox" and asset.get("asset_type") in {"vm", "lxc"}
|
||||
]
|
||||
|
||||
by_ip: dict[str, list[dict]] = {}
|
||||
by_name: dict[str, list[dict]] = {}
|
||||
|
||||
for asset in proxmox_assets:
|
||||
for ip in asset.get("ip_addresses") or []:
|
||||
by_ip.setdefault(str(ip).lower(), []).append(asset)
|
||||
for key in (asset.get("name"), asset.get("hostname")):
|
||||
if key:
|
||||
by_name.setdefault(str(key).lower(), []).append(asset)
|
||||
|
||||
for asset in assets:
|
||||
if asset.get("source") == "proxmox":
|
||||
continue
|
||||
|
||||
hosts = _extract_target_hosts(asset)
|
||||
linked = []
|
||||
seen = set()
|
||||
|
||||
for host in hosts:
|
||||
matches = by_ip.get(host, []) + by_name.get(host, [])
|
||||
for match in matches:
|
||||
match_key = f"{match.get('source')}:{match.get('asset_type')}:{match.get('external_id')}"
|
||||
if match_key in seen:
|
||||
continue
|
||||
seen.add(match_key)
|
||||
linked.append(
|
||||
{
|
||||
"source": match.get("source"),
|
||||
"asset_type": match.get("asset_type"),
|
||||
"external_id": match.get("external_id"),
|
||||
"name": match.get("name"),
|
||||
"hostname": match.get("hostname"),
|
||||
"ip_addresses": match.get("ip_addresses") or [],
|
||||
}
|
||||
)
|
||||
|
||||
metadata = dict(asset.get("metadata") or {})
|
||||
metadata["linked_proxmox_assets"] = linked
|
||||
asset["metadata"] = metadata
|
||||
|
||||
return assets
|
||||
|
||||
|
||||
@app.route("/api/v1/sources", methods=["GET"])
def api_sources():
    return jsonify({"sources": store.source_health()})


@app.route("/api/v1/nginx/routes", methods=["GET"])
def api_nginx_routes():
    assets = store.list_assets()
    nginx_assets = [
        asset
        for asset in assets
        if asset.get("source") == "nginx" and asset.get("asset_type") == "nginx_site"
    ]

    routes = []
    for asset in nginx_assets:
        metadata = asset.get("metadata") or {}
        inferred_targets = metadata.get("inferred_targets") or []
        proxy_targets_resolved = metadata.get("proxy_pass_resolved") or []
        proxy_targets = metadata.get("proxy_pass") or []
        upstreams = metadata.get("upstreams") or []
        upstream_servers = metadata.get("upstream_servers") or []
        listens = metadata.get("listens") or []
        # Prefer resolved proxy_pass targets, then inferred targets, then raw
        # proxy_pass values, then upstream servers, then upstream names.
        route_targets = (
            proxy_targets_resolved
            or inferred_targets
            or proxy_targets
            or upstream_servers
            or upstreams
        )

        routes.append(
            {
                "server_name": asset.get("name") or asset.get("hostname") or "unknown",
                "target": ", ".join(route_targets) if route_targets else "-",
                "listen": ", ".join(listens) if listens else "-",
                "source_host": asset.get("node") or "-",
                "config_path": metadata.get("path") or "-",
                "status": asset.get("status") or "unknown",
            }
        )

    routes.sort(key=lambda item: (item["server_name"], item["source_host"]))
    return jsonify({"count": len(routes), "routes": routes})


@app.route("/api/v1/health", methods=["GET"])
def api_health():
    last_run = store.last_run()
    healthy = bool(last_run) and last_run.get("status") in {"ok", "running"}
    return jsonify(
        {
            "healthy": healthy,
            "last_run": last_run,
            "scheduler_enabled": config.scheduler_enabled,
            "poll_interval_seconds": config.poll_interval_seconds,
        }
    )


@app.route("/api/v1/collect/trigger", methods=["POST"])
def api_collect_trigger():
    if not _authorized(request):
        return jsonify({"error": "Unauthorized"}), 401

    report = orchestrator.collect_once()
    status_code = 200
    if report.status == "running":
        status_code = 409
    elif report.status == "error":
        status_code = 500

    return (
        jsonify(
            {
                "run_id": report.run_id,
                "status": report.status,
                "results": [
                    {
                        "source": result.source,
                        "status": result.status,
                        "asset_count": len(result.assets),
                        "error": result.error,
                    }
                    for result in report.results
                ],
            }
        ),
        status_code,
    )

@app.route("/api/v1/data", methods=["GET"])
def api_data():
    payload = orchestrator.current_data()
    payload["header"] = config.app_name
    payload["content"] = "Inventory snapshot"
    payload["timestamp"] = datetime.now().isoformat()
    return jsonify(payload)


# endregion

3 services/__init__.py Normal file
@@ -0,0 +1,3 @@
from .scheduler import CollectionScheduler, should_autostart_scheduler

__all__ = ["CollectionScheduler", "should_autostart_scheduler"]

57 services/scheduler.py Normal file
@@ -0,0 +1,57 @@
import fcntl
import logging
import os
import threading

from collectors import InventoryCollectorOrchestrator
from config import AppConfig

LOGGER = logging.getLogger(__name__)


class CollectionScheduler:
    def __init__(self, config: AppConfig, orchestrator: InventoryCollectorOrchestrator):
        self.config = config
        self.orchestrator = orchestrator
        self._stop_event = threading.Event()
        self._thread: threading.Thread | None = None
        self._leader_file = None

    def start(self) -> bool:
        if not self.config.scheduler_enabled:
            LOGGER.info("Scheduler disabled via config")
            return False

        # Non-blocking flock so only one worker process runs the scheduler.
        leader_file = open("/tmp/inventory-scheduler.lock", "w", encoding="utf-8")
        try:
            fcntl.flock(leader_file.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
        except OSError:
            leader_file.close()
            LOGGER.info("Another worker owns scheduler lock")
            return False

        self._leader_file = leader_file

        self._thread = threading.Thread(target=self._run_loop, name="inventory-scheduler", daemon=True)
        self._thread.start()
        LOGGER.info("Scheduler started with %s second interval", self.config.poll_interval_seconds)
        return True

    def _run_loop(self) -> None:
        while not self._stop_event.is_set():
            self.orchestrator.collect_once()
            self._stop_event.wait(self.config.poll_interval_seconds)

    def shutdown(self) -> None:
        self._stop_event.set()
        if self._thread and self._thread.is_alive():
            self._thread.join(timeout=1.0)
        if self._leader_file:
            fcntl.flock(self._leader_file.fileno(), fcntl.LOCK_UN)
            self._leader_file.close()
            self._leader_file = None


def should_autostart_scheduler() -> bool:
    # Do not auto-start on Flask's reloader child process.
    return os.getenv("WERKZEUG_RUN_MAIN", "true") != "false"

@@ -1,41 +1,344 @@
:root {
  --bg: #0e1117;
  --panel: #171b23;
  --ink: #e6edf3;
  --muted: #9aa6b2;
  --line: #2d3748;
  --accent: #2f81f7;
  --good: #2ea043;
  --bad: #f85149;
  --warn: #d29922;
}

* {
  box-sizing: border-box;
}

body {
  margin: 0;
  padding: 0;
  font-family: "IBM Plex Sans", "Segoe UI", sans-serif;
  color: var(--ink);
  background:
    radial-gradient(1200px 600px at 5% -10%, #1f2937 0%, transparent 65%),
    radial-gradient(1000px 500px at 95% 0%, #102a43 0%, transparent 55%),
    var(--bg);
}

.dashboard {
  max-width: 1240px;
  margin: 0 auto;
  padding: 2rem 1rem 3rem;
}

.hero {
  display: flex;
  justify-content: space-between;
  align-items: end;
  gap: 1rem;
  margin-bottom: 1.25rem;
}

h1 {
  margin: 0;
  font-family: "Space Grotesk", "IBM Plex Sans", sans-serif;
  font-size: clamp(1.9rem, 4.5vw, 2.9rem);
}

.subtitle,
.meta {
  margin: 0.2rem 0;
  color: var(--muted);
}

.hero-actions {
  display: flex;
  flex-direction: column;
  align-items: end;
  gap: 0.5rem;
}

button {
  border: none;
  border-radius: 999px;
  padding: 0.65rem 1rem;
  background: var(--accent);
  color: #ffffff;
  font-weight: 700;
  cursor: pointer;
}

button:disabled {
  opacity: 0.7;
  cursor: wait;
}

.stats-grid {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
  gap: 0.75rem;
  margin-bottom: 0.9rem;
}

.stat-card,
.panel,
.topology-card,
.source-card {
  background: color-mix(in srgb, var(--panel) 95%, #000000);
  border: 1px solid var(--line);
  border-radius: 14px;
}

.stat-card {
  padding: 0.85rem;
}

.stat-card span {
  color: var(--muted);
  font-size: 0.85rem;
}

.stat-card strong {
  display: block;
  font-size: 1.6rem;
}

.panel {
  padding: 0.95rem;
  margin-top: 0.9rem;
}

.panel h2 {
  margin: 0 0 0.8rem;
  font-size: 1.1rem;
}

.topology-grid {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(220px, 1fr));
  gap: 0.7rem;
}

.topology-card,
.source-card {
  padding: 0.75rem;
}

.topology-card h3,
.source-card h3 {
  margin: 0 0 0.35rem;
  font-size: 1rem;
}

.topology-card p,
.source-card p {
  margin: 0.2rem 0;
  color: var(--muted);
  font-size: 0.9rem;
}

.source-list {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(220px, 1fr));
  gap: 0.7rem;
}

.source-card {
  display: flex;
  justify-content: space-between;
  align-items: center;
  gap: 0.8rem;
}

.panel-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  gap: 0.8rem;
  flex-wrap: wrap;
}

.filters {
  display: flex;
  gap: 0.5rem;
  flex-wrap: wrap;
}

input,
select {
  border: 1px solid var(--line);
  border-radius: 8px;
  padding: 0.5rem 0.55rem;
  background: #0f141b;
  color: var(--ink);
}

input {
  min-width: 220px;
}

.table-wrap {
  overflow-x: auto;
  margin-top: 0.7rem;
}

table {
  width: 100%;
  border-collapse: collapse;
  min-width: 720px;
}

thead th {
  text-align: left;
  font-weight: 700;
  color: var(--muted);
  border-bottom: 1px solid var(--line);
  padding: 0.5rem;
}

tbody td {
  border-bottom: 1px solid #202734;
  padding: 0.56rem;
  font-size: 0.93rem;
}

.asset-row {
  cursor: pointer;
}

.asset-row:hover td {
  background: #1a2230;
}

.status-badge {
  border-radius: 999px;
  padding: 0.2rem 0.55rem;
  font-size: 0.8rem;
  font-weight: 700;
  text-transform: lowercase;
}

.status-online {
  background: color-mix(in srgb, var(--good) 20%, #ffffff);
  color: var(--good);
}

.status-offline {
  background: color-mix(in srgb, var(--bad) 18%, #ffffff);
  color: var(--bad);
}

.status-unknown {
  background: color-mix(in srgb, var(--warn) 22%, #ffffff);
  color: var(--warn);
}

.empty-state {
  color: var(--muted);
}

.hidden {
  display: none;
}

.modal {
  position: fixed;
  inset: 0;
  background: rgba(5, 10, 15, 0.72);
  display: flex;
  align-items: center;
  justify-content: center;
  z-index: 1000;
  padding: 1rem;
}

.modal.hidden {
  display: none;
}

.modal-card {
  width: min(900px, 100%);
  max-height: 88vh;
  overflow: auto;
  overflow-x: hidden;
  background: #111722;
  border: 1px solid var(--line);
  border-radius: 12px;
  padding: 1rem;
}

.modal-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  gap: 1rem;
}

.modal-header h3 {
  margin: 0;
}

.modal-body h4 {
  margin: 1rem 0 0.4rem;
}

.modal-body {
  overflow-wrap: anywhere;
}

.detail-grid {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(220px, 1fr));
  gap: 0.7rem;
}

.detail-grid div {
  background: #151d2b;
  border: 1px solid #293547;
  border-radius: 8px;
  padding: 0.6rem;
  display: flex;
  flex-direction: column;
  gap: 0.2rem;
  min-width: 0;
}

.detail-grid strong {
  color: var(--muted);
  font-size: 0.8rem;
}

.modal pre {
  background: #0b111b;
  border: 1px solid #243146;
  border-radius: 8px;
  padding: 0.65rem;
  overflow: auto;
  color: #c9d7e8;
  white-space: pre-wrap;
  word-break: break-word;
  overflow-wrap: anywhere;
}

table a {
  color: #58a6ff;
  text-decoration: none;
}

table a:hover {
  text-decoration: underline;
}

@media (max-width: 760px) {
  .hero {
    flex-direction: column;
    align-items: flex-start;
  }

  .hero-actions {
    align-items: flex-start;
  }

  input {
    min-width: 100%;
  }
}
@@ -4,55 +4,311 @@
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>{{app_name}}</title>
    <link rel="icon" href="/assets/img/favicon.png" type="image/png">
    <link rel="stylesheet" href="/assets/css/index.css">
</head>

<body>
    <main class="dashboard">
        <header class="hero">
            <div>
                <h1>{{app_name}}</h1>
                <p class="subtitle">Home lab inventory dashboard</p>
                <p class="meta">Loaded at {{datetime}}</p>
            </div>
            <div class="hero-actions">
                <button id="refresh-button" type="button">Run Collection</button>
                <span id="last-run">Last run: waiting</span>
            </div>
        </header>

        <section class="stats-grid" id="stats-grid"></section>

        <section class="panel">
            <h2>Topology</h2>
            <div id="topology-grid" class="topology-grid"></div>
        </section>

        <section class="panel">
            <h2>Sources</h2>
            <div id="source-list" class="source-list"></div>
        </section>

        <section class="panel">
            <div class="panel-header">
                <h2>Asset Inventory</h2>
                <div class="filters">
                    <input id="search-input" type="search" placeholder="Search assets, hosts, subnets">
                    <select id="source-filter">
                        <option value="">All sources</option>
                    </select>
                    <select id="status-filter">
                        <option value="">All statuses</option>
                    </select>
                </div>
            </div>

            <div class="table-wrap">
                <table>
                    <thead>
                        <tr>
                            <th>Name</th>
                            <th>Type</th>
                            <th>Source</th>
                            <th>Status</th>
                            <th>Host</th>
                            <th>Subnet</th>
                            <th>Public IP</th>
                        </tr>
                    </thead>
                    <tbody id="asset-table-body"></tbody>
                </table>
            </div>
        </section>

        <div id="asset-modal" class="modal hidden" role="dialog" aria-modal="true" aria-labelledby="asset-modal-title">
            <div class="modal-card">
                <div class="modal-header">
                    <h3 id="asset-modal-title">Service Details</h3>
                    <button id="modal-close" type="button">Close</button>
                </div>
                <div id="asset-modal-body" class="modal-body"></div>
            </div>
        </div>
    </main>

    <script>
        const statsGrid = document.getElementById('stats-grid');
        const topologyGrid = document.getElementById('topology-grid');
        const sourceList = document.getElementById('source-list');
        const sourceFilter = document.getElementById('source-filter');
        const statusFilter = document.getElementById('status-filter');
        const searchInput = document.getElementById('search-input');
        const assetTableBody = document.getElementById('asset-table-body');
        const lastRunElement = document.getElementById('last-run');
        const refreshButton = document.getElementById('refresh-button');
        const assetModal = document.getElementById('asset-modal');
        const assetModalBody = document.getElementById('asset-modal-body');
        const modalClose = document.getElementById('modal-close');

        let allAssets = [];

        function statusClass(status) {
            if (!status) return 'status-unknown';
            const value = status.toLowerCase();
            if (['running', 'online', 'healthy', 'up', 'configured', 'ok'].includes(value)) return 'status-online';
            if (['stopped', 'offline', 'error', 'down', 'unhealthy'].includes(value)) return 'status-offline';
            return 'status-unknown';
        }

        function renderStats(summary) {
            const stats = [
                { label: 'Total Assets', value: summary.total_assets || 0 },
                { label: 'Online', value: summary.online_assets || 0 },
                { label: 'Offline', value: summary.offline_assets || 0 },
                { label: 'Sources', value: summary.source_count || 0 },
                { label: 'Subnets', value: summary.subnet_count || 0 },
            ];

            statsGrid.innerHTML = stats
                .map(item => `
                    <article class="stat-card">
                        <span>${item.label}</span>
                        <strong>${item.value}</strong>
                    </article>
                `)
                .join('');
        }

        function renderTopology(topology) {
            const networks = topology.networks || [];
            if (!networks.length) {
                topologyGrid.innerHTML = '<p class="empty-state">No subnet mapping discovered yet.</p>';
                return;
            }

            topologyGrid.innerHTML = networks
                .map(network => `
                    <article class="topology-card">
                        <h3>${network.subnet}</h3>
                        <p>${network.asset_count} assets</p>
                        <p>${network.source_count} sources</p>
                        <p>${(network.public_ips || []).join(', ') || 'No public IP'}</p>
                    </article>
                `)
                .join('');
        }

        function renderSources(sources, selectedSource = '') {
            if (!sources.length) {
                sourceList.innerHTML = '<p class="empty-state">No source status available.</p>';
                return;
            }

            sourceList.innerHTML = sources
                .map(source => `
                    <article class="source-card">
                        <div>
                            <h3>${source.name}</h3>
                            <p>${source.last_error || 'No errors'}</p>
                        </div>
                        <span class="status-badge ${statusClass(source.last_status || 'unknown')}">${source.last_status || 'unknown'}</span>
                    </article>
                `)
                .join('');

            const availableSources = [...new Set(sources.map(item => item.name))];
            sourceFilter.innerHTML = '<option value="">All sources</option>' +
                availableSources.map(source => `<option value="${source}">${source}</option>`).join('');

            if (selectedSource && availableSources.includes(selectedSource)) {
                sourceFilter.value = selectedSource;
            }
        }

        function renderAssets() {
            const source = sourceFilter.value;
            const status = statusFilter.value;
            const term = searchInput.value.trim().toLowerCase();

            const filtered = allAssets.filter(asset => {
                if (source && asset.source !== source) return false;
                if (status && (asset.status || '') !== status) return false;
                if (!term) return true;
                const text = [asset.name, asset.hostname, asset.asset_type, asset.source, asset.subnet, asset.public_ip]
                    .filter(Boolean)
                    .join(' ')
                    .toLowerCase();
                return text.includes(term);
            });

            if (!filtered.length) {
                assetTableBody.innerHTML = '<tr><td colspan="7" class="empty-state">No assets match this filter.</td></tr>';
                return;
            }

            assetTableBody.innerHTML = filtered
                .map((asset, index) => `
                    <tr class="asset-row" data-index="${index}">
                        <td>${asset.name || '-'}</td>
                        <td>${asset.asset_type || '-'}</td>
                        <td>${asset.source || '-'}</td>
                        <td><span class="status-badge ${statusClass(asset.status)}">${asset.status || 'unknown'}</span></td>
                        <td>${asset.hostname || asset.node || '-'}</td>
                        <td>${asset.subnet || '-'}</td>
                        <td>${asset.public_ip || '-'}</td>
                    </tr>
                `)
                .join('');

            assetTableBody.querySelectorAll('.asset-row').forEach(row => {
                row.addEventListener('click', () => {
                    const idx = Number(row.getAttribute('data-index'));
                    const selected = filtered[idx];
                    if (selected) {
                        openAssetModal(selected);
                    }
                });
            });
        }

        function openAssetModal(asset) {
            const metadata = asset.metadata || {};
            const linked = metadata.linked_proxmox_assets || [];

            assetModalBody.innerHTML = `
                <div class="detail-grid">
                    <div><strong>Name</strong><span>${asset.name || '-'}</span></div>
                    <div><strong>Type</strong><span>${asset.asset_type || '-'}</span></div>
                    <div><strong>Source</strong><span>${asset.source || '-'}</span></div>
                    <div><strong>Status</strong><span>${asset.status || '-'}</span></div>
                    <div><strong>Hostname</strong><span>${asset.hostname || '-'}</span></div>
                    <div><strong>Node</strong><span>${asset.node || '-'}</span></div>
                    <div><strong>IPs</strong><span>${(asset.ip_addresses || []).join(', ') || '-'}</span></div>
                    <div><strong>Last Seen</strong><span>${asset.last_seen || '-'}</span></div>
                </div>
                <h4>Linked Proxmox Assets</h4>
                <pre>${linked.length ? JSON.stringify(linked, null, 2) : 'No Proxmox links found'}</pre>
                <h4>Metadata</h4>
                <pre>${JSON.stringify(metadata, null, 2)}</pre>
            `;

            assetModal.classList.remove('hidden');
        }

        function closeAssetModal() {
            assetModal.classList.add('hidden');
            assetModalBody.innerHTML = '';
        }

        async function loadStatus() {
            const selectedSource = sourceFilter.value;
            const selectedStatus = statusFilter.value;

            const [summaryResponse, topologyResponse, sourcesResponse, assetsResponse] = await Promise.all([
                fetch('/api/v1/summary'),
                fetch('/api/v1/topology'),
                fetch('/api/v1/sources'),
                fetch('/api/v1/assets')
            ]);

            const summaryData = await summaryResponse.json();
            const topologyData = await topologyResponse.json();
            const sourcesData = await sourcesResponse.json();
            const assetsData = await assetsResponse.json();

            renderStats(summaryData.summary || {});
            renderTopology(topologyData);
            renderSources(sourcesData.sources || [], selectedSource);

            allAssets = assetsData.assets || [];
            const statuses = [...new Set(allAssets.map(item => item.status).filter(Boolean))];
            statusFilter.innerHTML = '<option value="">All statuses</option>' +
                statuses.map(item => `<option value="${item}">${item}</option>`).join('');

            if (selectedStatus && statuses.includes(selectedStatus)) {
                statusFilter.value = selectedStatus;
            }

            renderAssets();
            const run = summaryData.last_run;
            lastRunElement.textContent = run ? `Last run: ${run.status} at ${run.finished_at || run.started_at}` : 'Last run: waiting';
        }

        async function manualRefresh() {
            refreshButton.disabled = true;
            refreshButton.textContent = 'Collecting...';
            try {
                const response = await fetch('/api/v1/collect/trigger', { method: 'POST' });
                const payload = await response.json();
                if (!response.ok) {
                    alert(`Collection failed: ${payload.error || payload.status || 'unknown error'}`);
                }
            } catch (error) {
                alert(`Collection request failed: ${error}`);
            } finally {
                refreshButton.disabled = false;
                refreshButton.textContent = 'Run Collection';
                await loadStatus();
            }
        }

        sourceFilter.addEventListener('change', renderAssets);
        statusFilter.addEventListener('change', renderAssets);
        searchInput.addEventListener('input', renderAssets);
        refreshButton.addEventListener('click', manualRefresh);
        modalClose.addEventListener('click', closeAssetModal);
        assetModal.addEventListener('click', (event) => {
            if (event.target === assetModal) {
                closeAssetModal();
            }
        });

        loadStatus().catch(error => console.error('Initial load failed:', error));
        setInterval(() => {
            loadStatus().catch(error => console.error('Status refresh failed:', error));
        }, 15000);
    </script>

</body>

62 uv.lock generated
@@ -156,6 +156,37 @@ wheels = [
    { url = "https://files.pythonhosted.org/packages/0e/61/66938bbb5fc52dbdf84594873d5b51fb1f7c7794e9c0f5bd885f30bc507b/idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea", size = 71008, upload-time = "2025-10-12T14:55:18.883Z" },
]

[[package]]
name = "inventory"
version = "0.1.0"
source = { virtual = "." }
dependencies = [
    { name = "flask" },
    { name = "gunicorn" },
    { name = "python-dotenv" },
    { name = "requests" },
]

[package.dev-dependencies]
dev = [
    { name = "pre-commit" },
    { name = "ruff" },
]

[package.metadata]
requires-dist = [
    { name = "flask", specifier = ">=3.1.2" },
    { name = "gunicorn", specifier = ">=23.0.0" },
    { name = "python-dotenv", specifier = ">=1.2.1" },
    { name = "requests", specifier = ">=2.32.5" },
]

[package.metadata.requires-dev]
dev = [
    { name = "pre-commit", specifier = ">=4.4.0" },
    { name = "ruff", specifier = ">=0.14.5" },
]

[[package]]
name = "itsdangerous"
version = "2.2.0"

@@ -281,37 +312,6 @@ wheels = [
    { url = "https://files.pythonhosted.org/packages/14/1b/a298b06749107c305e1fe0f814c6c74aea7b2f1e10989cb30f544a1b3253/python_dotenv-1.2.1-py3-none-any.whl", hash = "sha256:b81ee9561e9ca4004139c6cbba3a238c32b03e4894671e181b671e8cb8425d61", size = 21230, upload-time = "2025-10-26T15:12:09.109Z" },
]

[[package]]
name = "python-webserver-template"
version = "0.1.0"
source = { virtual = "." }
dependencies = [
    { name = "flask" },
    { name = "gunicorn" },
    { name = "python-dotenv" },
    { name = "requests" },
]

[package.dev-dependencies]
dev = [
    { name = "pre-commit" },
    { name = "ruff" },
]

[package.metadata]
requires-dist = [
    { name = "flask", specifier = ">=3.1.2" },
    { name = "gunicorn", specifier = ">=23.0.0" },
    { name = "python-dotenv", specifier = ">=1.2.1" },
    { name = "requests", specifier = ">=2.32.5" },
]

[package.metadata.requires-dev]
dev = [
    { name = "pre-commit", specifier = ">=4.4.0" },
    { name = "ruff", specifier = ">=0.14.5" },
]

[[package]]
name = "pyyaml"
version = "6.0.3"