Semantic Reach Delivers SOTA on LongMemEval Benchmark
Today we are proud to announce that Semantic Reach has achieved state-of-the-art (SOTA) results on the LongMemEval benchmark, with an average score of 94% across all categories: single-session, multi-session, knowledge update, and temporal reasoning.
This result places us firmly ahead of Emergence AI, which recently announced a score of 86%. Furthermore, because we believe our solution is mathematically optimal for this class of problem, we expect to eventually saturate LongMemEval and whatever benchmark comes next.
While everyone else is working comfortably within the existing vector-database RAG paradigm, we are developing a fundamentally new backend architecture grounded in a more powerful mathematics for modeling cognitively plausible auto-associative memory, one that can be both flexible and exact when needed. Built on this principled foundation, and backed by decades of research on hyperdimensional computing, our aim is the best possible agentic memory substrate and general backend for LLM workloads, intended to supersede rather than increment on current-generation vector databases and RAG methodologies.
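For readers unfamiliar with hyperdimensional computing, the sketch below shows the classic bind/bundle/cleanup recipe behind auto-associative memory of this kind: key and value hypervectors are bound together, bound pairs are bundled into a single memory vector, and a value is recalled by unbinding with its key and cleaning up against a codebook. This is the textbook technique only; all names, dimensions, and parameters here are illustrative and do not describe our production architecture.

```python
# Minimal sketch of the general hyperdimensional-computing (HDC) idea behind
# auto-associative memory. Illustrative only; not Semantic Reach's system.
import numpy as np

D = 10_000                      # hypervector dimensionality (illustrative)
rng = np.random.default_rng(0)

def hv():
    """Random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Element-wise multiplication binds two hypervectors (self-inverse)."""
    return a * b

def bundle(*vs):
    """Element-wise majority vote superposes several hypervectors into one memory."""
    return np.sign(np.sum(vs, axis=0))

def cleanup(noisy, codebook):
    """Return the codebook entry most similar to a noisy query vector."""
    sims = {name: np.dot(noisy, vec) / D for name, vec in codebook.items()}
    return max(sims, key=sims.get)

# Codebook of atomic concepts.
codebook = {name: hv() for name in ["city", "paris", "language", "french"]}

# Store two key-value facts in a single memory hypervector.
memory = bundle(
    bind(codebook["city"], codebook["paris"]),
    bind(codebook["language"], codebook["french"]),
)

# Recall: unbind with the key, then clean up against the codebook.
noisy_value = bind(memory, codebook["city"])
print(cleanup(noisy_value, codebook))   # -> "paris"
```

The unbound result is only approximately equal to the stored value, which is why the cleanup step against a known codebook is what makes recall exact rather than merely approximate.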
Recent news has made it hard to deny that the infinite-scaling story is finally hitting a wall. We need fresh, bold ideas about how to move forward. One way to do that, we believe, is to build smarter infrastructure around the models: infrastructure that reduces costs and boosts intelligence without endless spending on data centers, power plants, GPUs, context tokens, and model training for diminishing returns.
Stay tuned: we will announce a more detailed white paper in the coming weeks.