Vehicle detection (YOLOv8), building footprint extraction (SAMGeo), deep-learning LiDAR classification (RandLA-Net), temporal change detection, and bring-your-own-model inference, all behind a single REST API.
Most geospatial teams have the datasets and the use cases. What they don't have is GPU clusters, MLOps engineers, or model deployment experience.
- GPU infrastructure that nobody on the team manages
- SAMGeo, YOLOv8, and RandLA-Net set up from scratch for every project
- Fragile, project-specific inference pipelines
- BYOM (bring your own model) serving handcrafted with bash scripts
Sixteen AI/ML workflows, from foundation models to bring-your-own-model inference. Highlights:
| Workflow | Name & what it automates | Time saved |
|---|---|---|
| #265 | **Building Footprint Extraction**: SAMGeo footprint extraction from aerial imagery, with regularization into clean polygons | — |
| #266 | **Vehicle Detection**: YOLOv8 vehicle detection from high-resolution orthomosaics, with vehicle classes | — |
| #270 | **Deep-Learning LiDAR Classification**: RandLA-Net semantic segmentation, going beyond rule-based workflow #07 | — |
| #272 | **Solar Panel Detection**: aerial imagery → per-roof solar panel polygons | — |
| #273 | **Land Cover Change Detection**: multi-temporal Sentinel-2 classification with a change matrix | — |
| #276 | **M3C2 LiDAR Epoch Comparison**: statistically rigorous distance computation with LOD95 | — |
| #278 | **BYOM Inference Endpoint**: bring your own ONNX or TorchScript model; we serve it on GPU | — |
| #280 | **Temporal Analytics Dashboard API**: TimescaleDB-backed REST API for any change-detection time series | — |
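Every workflow in the catalog is invoked the same way: POST a job to its endpoint with the inputs it needs. A minimal client sketch, using only the standard library. The base URL, endpoint path, and payload field names below are assumptions for illustration, not the platform's documented API:

```python
import json
import urllib.request

BASE_URL = "https://api.example.com/v1"  # hypothetical base URL


def build_job_request(workflow_id: int, inputs: dict) -> urllib.request.Request:
    """Build (but do not send) a POST request submitting a workflow job."""
    payload = {"workflow": workflow_id, "inputs": inputs}
    return urllib.request.Request(
        f"{BASE_URL}/workflows/{workflow_id}/jobs",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Example: vehicle detection (#266) on a hosted orthomosaic
req = build_job_request(
    266, {"imagery": "s3://my-bucket/ortho.tif", "classes": ["car", "truck"]}
)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) and polling for results would depend on the platform's actual job-status contract.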
Modeled on a geospatial team launching three AI-powered analytics products without hiring an MLOps engineer.
| Metric | Manual / current tooling | GeoDataConverter |
|---|---|---|
| Time-to-first-inference | Weeks | Hours |
| GPU infrastructure cost | $36K+/year | Pay-per-call |
| MLOps engineer requirement | 1 FTE | 0 |
| Model deployment cycle | Bespoke per model | Standardized BYOM |
Workflow #278 accepts your trained ONNX or TorchScript model, registers it as a workflow, and serves it on GPU through the same OGC API as every other workflow on the platform. You keep model ownership and training data; we handle the inference plumbing.
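Registration needs the model artifact plus enough metadata to serve it. A sketch of what a client-side manifest might look like; the schema (field names, accepted runtimes) is assumed for illustration and is not the documented #278 contract:

```python
from pathlib import Path

ACCEPTED_RUNTIMES = ("onnx", "torchscript")  # per workflow #278


def build_model_manifest(model_path: str, input_shape: list, runtime: str) -> dict:
    """Describe a trained model for BYOM registration (hypothetical schema)."""
    if runtime not in ACCEPTED_RUNTIMES:
        raise ValueError("workflow #278 accepts ONNX or TorchScript models")
    path = Path(model_path)
    return {
        "name": path.stem,           # e.g. "detector"
        "artifact": path.name,       # file uploaded alongside this manifest
        "runtime": runtime,
        "input_shape": input_shape,  # NCHW shape the model expects
    }


manifest = build_model_manifest("detector.onnx", [1, 3, 1024, 1024], "onnx")
```

The manifest and model file would then be uploaded in one registration call, after which the model is callable like any catalog workflow.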
Pick a foundation model from our catalog or upload your own. We'll run inference on your data and benchmark cost and latency.