Entertainment
An AI Upgrade for Consumer Electronics
AI is transforming entertainment. Products like TVs, video game consoles, and remote-controlled cars are getting an AI upgrade. On-device AI solutions improve usability, speed, and data security by eliminating the need to send data to the cloud.
AI Brings Advanced Features
AI unlocks all sorts of capabilities and features. For example, video- and image-based neural networks can support enhanced use cases such as biometrics, facial detection/recognition, augmented and virtual reality, image filters, enhanced photography, and countless others. Beyond image and video, AI use in audio applications is also growing.
Reducing On-Device AI Power Consumption
All chip designs for consumer devices must balance performance with power efficiency and area requirements. One of the most important performance metrics for handhelds and wearables is battery life, so OEMs strive to reduce power consumption to extend it. Neural Processing Units (NPUs) can consume significant power, especially when the NPU is not well matched to the application's use cases. Comparing and understanding the power efficiency of different NPUs can be complicated. Expedera makes it easy: Origin™ IP averages a market-leading 18 TOPS/W, making Origin the industry's most power-efficient NPU per our benchmarks, as confirmed by customers and analysts.
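To give a rough sense of why TOPS/W matters for battery life, the sketch below estimates average NPU power draw from a workload's compute demand and the engine's efficiency. Only the 18 TOPS/W figure comes from this page; the per-frame operation count and frame rate are hypothetical placeholders, not product specifications.

```python
# Rough power-budget sketch: average power (W) = required TOPS / (TOPS per watt).
# The 18 TOPS/W efficiency is Expedera's published Origin average; the workload
# numbers below are hypothetical placeholders, not measured figures.

def npu_power_watts(ops_per_inference: float, inferences_per_sec: float,
                    efficiency_tops_per_watt: float) -> float:
    """Estimate average NPU power from workload demand and efficiency."""
    required_tops = ops_per_inference * inferences_per_sec / 1e12
    return required_tops / efficiency_tops_per_watt

# Example: a hypothetical video-enhancement network needing 40 GOPs per frame
# at 60 fps, running on an 18 TOPS/W engine.
power = npu_power_watts(ops_per_inference=40e9,
                        inferences_per_sec=60,
                        efficiency_tops_per_watt=18.0)
print(f"Estimated average NPU power: {power * 1000:.0f} mW")  # ~133 mW
```

The same workload on an engine with half the efficiency would draw roughly twice the power, which is why efficiency per watt translates directly into battery life.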
A Customized NPU Solution
While many general-purpose NPUs are available, a one-size-fits-all solution is rarely the most efficient, especially for cost- and power-sensitive consumer devices. General-purpose AI processors are often larger and consume more power than necessary. Expedera's Origin IP cores are optimized for the unique use cases of consumer and entertainment devices. Origin IP delivers optimal PPA (power, performance, area), achieving superior performance in 50% or less of the silicon area required by other NPUs. By embedding Expedera Origin NPU IP into consumer and entertainment SoCs and ASICs, device makers can reduce latency, increase performance, decrease DRAM requirements, and extend battery life.
An Ideal Architecture for Entertainment
The Origin E2 neural engine uses Expedera’s unique packet-based architecture, which is far more efficient than common layer-based architectures. The architecture enables parallel execution across multiple layers, achieving better resource utilization and deterministic performance. It also eliminates the need for hardware-specific optimizations, allowing customers to run their trained neural networks unchanged without reducing model accuracy. This innovative approach greatly increases performance while lowering power, area, and latency.
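One highly simplified way to picture the benefit of pipelining work across layers, rather than finishing one layer before starting the next, is the toy comparison below. It is purely illustrative and does not represent Expedera's actual packet-based scheduler; the tile counts and sizes are invented for the example.

```python
# Toy illustration (not Expedera's actual scheduler): contrast a layer-at-a-time
# schedule, which materializes each full intermediate feature map, with a
# pipelined schedule that streams small slices of work across layers, keeping
# only the in-flight slices resident on chip.

TILES_PER_LAYER = 64     # hypothetical: each feature map split into 64 tiles
NUM_LAYERS = 8           # hypothetical network depth
TILE_KB = 32             # hypothetical activation size per tile

# Layer-based: each layer finishes before the next starts, so the whole
# intermediate feature map (all tiles) must be buffered between layers.
layer_based_peak_kb = TILES_PER_LAYER * TILE_KB

# Pipelined across layers: a tile flows through the layer pipeline before the
# next is admitted, so at most one tile per stage is live at any time.
pipelined_peak_kb = NUM_LAYERS * TILE_KB

print(f"layer-at-a-time peak activation buffer: {layer_based_peak_kb} KB")  # 2048 KB
print(f"cross-layer pipelined peak buffer     : {pipelined_peak_kb} KB")    # 256 KB
```

Keeping less intermediate data in flight is one intuition for how cross-layer execution can reduce buffering and off-chip DRAM traffic; the real architecture's behavior depends on the network and configuration.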
Purpose-Built for Your Application
Customization brings many advantages, including higher performance, lower latency, reduced power consumption, and the elimination of dark-silicon waste. Expedera works with customers during the design stage to understand their use case(s), PPA goals, and deployment needs. Using this information, we configure Origin IP to create a customized solution that fits the application, as sketched below.
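To make the idea of a use-case-driven configuration concrete, the fragment below sketches the kind of parameters such a co-design process might pin down. The field names and values are invented for illustration only; they are not Expedera's configuration interface or tooling.

```python
# Hypothetical configuration sketch for a use-case-tuned NPU instance.
# Field names and values are invented for illustration; they do not
# represent Expedera's actual configuration API.
from dataclasses import dataclass

@dataclass
class NPUConfig:
    target_networks: list[str]   # models the engine must run natively
    peak_tops: float             # compute sized to the workload
    on_chip_sram_mb: float       # activation/weight buffering
    max_power_mw: int            # power envelope from the PPA goals
    quantization: str = "int8"   # precision the workload tolerates

# Example: a hypothetical always-listening soundbar use case.
soundbar_cfg = NPUConfig(
    target_networks=["noise_suppression", "wake_word"],
    peak_tops=1.0,
    on_chip_sram_mb=2.0,
    max_power_mw=250,
)
print(soundbar_cfg)
```

Sizing compute, memory, and power to the actual target networks, rather than to a generic worst case, is what avoids the oversized, underutilized silicon described above.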
Area-Efficient for Cost-Effective Deployments
One of the hardest challenges in deploying AI is finding a solution that fits OEMs' tight cost and silicon budgets. Expedera's Origin NPU requires minimal silicon area, ensuring that AI can be deployed cost-effectively.
Successful Customer Deployments
Quality is key to any successful product. Expedera's Origin IP has been successfully deployed in more than 10 million consumer devices.