Mobile
AI in Your Pocket
Every modern smartphone uses AI to enhance the user experience. Designers use on-device AI to enable new features and reduce reliance on the cloud. Choosing the right AI processor IP for a mobile SoC or ASIC design is essential to delivering a great user experience.
Enhancing the User Experience Through AI
Smartphone makers are adding ever more AI to their products, a challenge because growing computational requirements must be balanced against tight power and area budgets. They can no longer rely on the general-purpose NPUs typically built into application processors (APs), which often underperform and waste power. Instead, system architects are moving to dedicated AI co-processors that allow AI processing to be tailored to specific smartphone use cases, delivering significant performance gains without sacrificing battery life. Seamlessly integrated on-device AI dramatically enhances the user experience and becomes a competitive differentiator.
An Always-Sensing Specialized NPU
Always-sensing cameras continually sample and analyze visual data to identify triggers relevant to the user’s behavior and environment. Like always-listening audio applications, always-sensing enables a more natural and seamless user experience. However, camera data carries quality, richness, and privacy concerns that require specialized AI processing. While application processors have built-in NPUs, those NPUs are not suited to the unique requirements of always-sensing. Expedera’s LittleNPU IP is optimized for the low-power, high-quality neural networks leading OEMs use in always-sensing applications. Drawing minimal power, often as little as 10-20 mW, the LittleNPU keeps all camera data within the always-sensing subsystem, working hand in hand with device security implementations to safeguard user data.
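To make the flow concrete, here is a minimal Python sketch of an always-sensing trigger loop, in which frames never leave the subsystem and only a trigger event wakes the application processor. Every name in it (camera_frames, tiny_detector, wake_application_processor) is a hypothetical stand-in, not part of Expedera’s SDK.

```python
# Illustrative only: hypothetical stand-ins, not Expedera APIs.
import random

def camera_frames(n=100):
    """Stand-in for a low-power camera stream of small frames."""
    for _ in range(n):
        yield [random.random() for _ in range(64)]  # toy 8x8 luma patch

def tiny_detector(frame):
    """Placeholder for a small network running on the LittleNPU.
    Pixel data is consumed here and never leaves the subsystem."""
    score = sum(frame) / len(frame)  # toy 'confidence'
    return "face_present", score

def wake_application_processor(label):
    """Only the trigger event, never pixel data, crosses this boundary."""
    print(f"AP woken by trigger: {label}")

for frame in camera_frames():
    label, confidence = tiny_detector(frame)
    if confidence >= 0.52:  # toy wake threshold
        wake_application_processor(label)
        break
```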
An Ideal Architecture for Smartphones
The Origin neural engine IP uses Expedera’s unique packet-based architecture, which is far more efficient than common layer-based architectures. The architecture enables parallel execution across multiple layers, achieving better resource utilization and deterministic performance. It also eliminates the need for hardware-specific optimizations, allowing customers to run their trained neural networks unchanged without reducing model accuracy. This innovative approach greatly increases performance while lowering power, area, and latency.
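The benefit of overlapping layers can be shown with a toy calculation. The sketch below is purely illustrative, assuming equal-cost packets and one execution unit per layer; it is not Expedera’s scheduler or packet format.

```python
# Toy model: 3 execution units, one matched to each of 3 layers.
# Layer-based: a barrier after every layer, so while layer i runs,
# units sized for the other layers sit idle.
# Packet-based: work is cut into packets that flow through the units
# like a pipeline, so every unit stays busy once the pipe fills.

packets = 4                          # packets of work per layer
stages = 3                           # layers / matching execution units

layer_based = packets * stages       # 12 steps, one layer at a time
packet_based = packets + stages - 1  # 6 steps once the pipeline fills

print(f"layer-based : {layer_based} steps")
print(f"packet-based: {packet_based} steps")
print(f"speedup     : {layer_based / packet_based:.1f}x")
```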
Purpose-Built for Your Application
Customization brings many advantages, including increased performance, lower latency, reduced power consumption, and the elimination of dark-silicon waste. Expedera works with customers during the design stage to understand their use cases, PPA goals, and deployment needs. Using this information, we configure Origin IP to create a customized solution that precisely fits the application.
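As a sketch of what such a design-time configuration might capture, the Python fragment below lists plausible knobs. The field names are illustrative assumptions, not Expedera’s actual configuration interface.

```python
# Hypothetical design-time configuration; field names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class NPUConfig:
    target_networks: tuple  # networks profiled during design-in
    peak_tops: float        # performance sized to the use case
    power_budget_mw: int    # PPA goal supplied by the customer
    sram_kb: int            # on-chip memory sized to limit DRAM traffic

# One plausible always-sensing configuration for a smartphone:
always_sensing = NPUConfig(
    target_networks=("person_detect", "gesture_wake"),
    peak_tops=0.5,
    power_budget_mw=20,
    sram_kb=512,
)
print(always_sensing)
```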
Future-Proof with In-Field Updates
With AI technologies and networks evolving rapidly, any AI deployment must support in-field updates, including the addition of entirely new networks. Origin IP is flexible enough to allow the deployment of public, private, and custom neural networks after your smartphone or other device has shipped.
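A minimal sketch of what such an update flow could look like, assuming a vendor toolchain produces a compiled network binary and the device verifies its integrity before loading; the helper names below are hypothetical.

```python
# Hypothetical in-field update flow; helper names are stand-ins for a
# vendor toolchain and the device's update stack.
import hashlib

def verify_integrity(blob: bytes, expected_sha256: str) -> bool:
    """Device-side check that the received binary matches the
    published hash before any new network is loaded."""
    return hashlib.sha256(blob).hexdigest() == expected_sha256

def load_model(blob: bytes) -> None:
    """Placeholder for handing the verified binary to the NPU runtime."""
    print(f"loaded updated network ({len(blob)} bytes)")

# Stand-in for a compiled network delivered over the air:
blob = b"demo-network-binary"
published_hash = hashlib.sha256(blob).hexdigest()

if verify_integrity(blob, published_hash):
    load_model(blob)
```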
Ultra Power-Efficient Performance
Users want feature-rich devices with all-day battery life. With the ideal balance of power and performance, Origin IP enables new and emerging AI use cases while requiring far less power than general-purpose NPUs.