TESSERA — A pixel-wise earth observation foundation model and embeddings
Source: Lobsters
TESSERA, a new foundation model for earth observation, has been released with open data, weights and pre‑computed embeddings that compress a full year of satellite imagery into dense, per‑pixel vectors at 10‑metre resolution. The model encodes each location’s spectral and temporal signature into a 128‑dimensional embedding, allowing downstream tasks—such as land‑cover classification, crop‑yield forecasting or flood detection—to be tackled by simple linear probes rather than bespoke deep‑learning pipelines.
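To make the linear-probe claim concrete, the sketch below trains a single logistic-regression layer on per-pixel embedding vectors. The data here is synthetic noise standing in for TESSERA's pre-computed embeddings, and the label scheme is invented for illustration; only the 128-dimensional, one-vector-per-pixel shape follows the article.

```python
# Hedged sketch: a linear probe over per-pixel embeddings.
# The embeddings and labels below are random stand-ins, NOT real
# TESSERA outputs; only the 128-dim-per-pixel shape matches the article.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for pre-computed embeddings: 1000 pixels x 128 dimensions.
embeddings = rng.normal(size=(1000, 128))
# Hypothetical land-cover labels (e.g. 0=water, 1=forest, 2=cropland).
labels = rng.integers(0, 3, size=1000)

# The entire downstream model is one linear layer -- no fine-tuning,
# no bespoke deep-learning pipeline.
probe = LogisticRegression(max_iter=1000).fit(embeddings, labels)
predicted = probe.predict(embeddings)
print(predicted.shape)  # one predicted class per pixel
```

Because the heavy lifting is frozen into the embeddings, the probe trains in seconds on a laptop, which is the access argument the release is making.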
The breakthrough lies in its pixel‑wise approach. Traditional remote‑sensing models are trained for a fixed set of classes; TESSERA instead learns a universal representation that can be queried for any downstream objective. Built on a hybrid Vision‑Transformer and Mamba state‑space architecture, the system outperforms conventional U‑Net baselines on regression benchmarks while requiring fewer FLOPs, according to the authors’ arXiv pre‑print. By making the embeddings publicly available, the team removes the computational barrier of processing terabytes of raw imagery, opening high‑resolution analysis to researchers, NGOs and municipal planners who lack large GPU clusters.
The release could accelerate climate‑impact studies, precision agriculture and disaster‑response workflows across the Nordic region, where detailed, timely surface data are critical for managing forest health and coastal erosion. Moreover, the open‑source nature invites community‑driven fine‑tuning and integration into existing GIS stacks, potentially spawning a new ecosystem of plug‑and‑play geospatial tools.
Watch for the upcoming Earth Observation Foundation Models workshop, where TESSERA will be benchmarked against emerging models such as the Vision‑Language hybrids highlighted in recent surveys. Follow‑up work is expected on scaling the embeddings to sub‑meter resolutions and extending the temporal horizon beyond a single year, steps that could make real‑time, planet‑wide monitoring a practical reality.