AI Inference at the Speed of Life
Neural Network Accelerator IP Reimagined
Experience an AI accelerator that exceeds all expectations and outperforms the competition. The neural processor from Expedera is finely tuned to support increasingly complex AI networks on devices with tight power, performance, and area (PPA) constraints.
PPA Perfected for Edge Devices
A Lean Mean AI Machine
Our unique packet-based architecture delivers the most efficient NPU IP solutions in the industry. Fully scalable, it achieves ultra-efficient workload scheduling and memory management, with up to 90% processor utilization.
Customized for Optimal Performance
Nothing More, Nothing Less
Based on customer PPA and model requirements, Expedera tailors a solution that precisely fits the application. The result is a right-sized NPU that addresses current and future network support, eliminates dark silicon waste, and minimizes memory overhead.
Unleash AI Performance
Origin™ is Expedera's line of neural engine IP products. It reduces memory requirements to the bare minimum and dramatically cuts processing overhead. Its unique packet-based architecture is far more efficient than the layer-based architectures underlying other NPU implementations: it subdivides each layer into self-contained executable fragments (packets) that can be scheduled independently. This enables parallel execution across multiple layers, better resource utilization, and deterministic performance. It also eliminates the need for hardware-specific optimizations, allowing customers to run their trained neural networks unchanged, with no loss of model accuracy. The result is higher performance with lower power, area, and latency.
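To make the contrast with layer-based scheduling concrete, here is a minimal, purely illustrative sketch in Python. It is not Expedera's implementation or API: the Packet structure, the per-layer dependency pattern, and the greedy scheduler are all assumptions invented for illustration. It shows how fragments from several layers can execute in the same step once their inputs are ready, whereas a layer-based schedule must finish one layer before starting the next.

# Conceptual sketch only: hypothetical names, not Expedera's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    """A self-contained executable fragment of one layer."""
    layer: int
    index: int
    deps: frozenset  # (layer, index) pairs that must finish first

def split_layers(num_layers, packets_per_layer):
    """Subdivide each layer into independently schedulable packets.
    Assumed dependency pattern: each packet needs only the same-index
    packet of the previous layer, so adjacent layers can overlap."""
    packets = []
    for layer in range(num_layers):
        for i in range(packets_per_layer):
            deps = frozenset({(layer - 1, i)}) if layer > 0 else frozenset()
            packets.append(Packet(layer, i, deps))
    return packets

def schedule(packets, num_engines):
    """Greedy list scheduler: each step runs up to num_engines packets
    whose dependencies are already complete."""
    done, steps, pending = set(), [], list(packets)
    while pending:
        ready = [p for p in pending if p.deps <= done][:num_engines]
        steps.append(ready)
        for p in ready:
            done.add((p.layer, p.index))
            pending.remove(p)
    return steps

# With 3 engines, step 1 already mixes layer-0 and layer-1 packets,
# which a strict layer-at-a-time schedule could not do.
for step, batch in enumerate(schedule(split_layers(3, 4), 3)):
    print(step, [(p.layer, p.index) for p in batch])

A production scheduler would also account for on-chip memory capacity and data movement, which this sketch deliberately omits.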
Efficiency
Experience industry-leading 18 TOPS/W and TOPS/mm²
Scalability
Options from 3 GOPS to 128 TOPS per core, up to PetaOps
Flexibility
Native support for popular networks and data types
Predictability
Deterministic performance and latency per workload
Configurability
Right-sized for current and future networks
Reliability
Field-proven in over 10 million consumer devices
Products
Origin E1
Origin E1 neural engines are optimized for networks commonly used in always-on applications in home appliances, smartphones, and edge nodes that require about 1 TOPS performance. The E1 LittleNPU processors are further streamlined, making them ideal for the most cost- and area-sensitive applications.
Origin E2
Origin E2 NPU cores are power- and area-optimized to save system power in smartphones, edge nodes, and other consumer and industrial devices. Through careful attention to processor utilization and memory requirements, E2 NPUs deliver optimal performance with minimal latency. The E2 is highly configurable, offering 1 to 20 TOPS of performance and supporting RNN, LSTM, CNN, DNN, and other common network types.
Origin E6
Origin E6 NPU IP cores are performance-optimized for smartphones, AR/VR headsets, and other devices running image transformer, stable diffusion, and point cloud AI workloads. Through careful attention to processor utilization and external memory usage, E6 NPUs improve power efficiency and reduce latency to an absolute minimum. They offer single-core performance from 16 to 32 TOPS.
Origin E8
Designed for performance-intensive applications such as automotive/ADAS and data centers, Origin E8 NPU IP cores excel at complex AI tasks, including computer vision, LLMs, warping, point cloud, grid sample, image classification, and object detection. They offer single-core performance ranging from 32 to 128 TOPS.
TimbreAI T3
TimbreAI T3 is an ultra-low-power artificial intelligence (AI) inference engine designed for noise-reduction use cases in power-constrained devices such as headsets. TimbreAI requires no external memory access, saving system power while increasing performance and reducing chip size.