"Intel Boosts Performance with 500 Optimized AI Models for Core Ultra Processors"
In a significant milestone, Intel has announced that more than 500 AI models have been optimized to run on its Core Ultra processors. The development is part of the semiconductor giant's effort to position itself as the leading chip supplier for AI-powered PCs.
The optimized AI models span more than 20 categories, including large language models, diffusion, super-resolution, object detection, and computer vision. The models are accessible through industry sources such as Hugging Face, the ONNX Model Zoo, the OpenVINO Model Zoo, and PyTorch.
Notable models included in the optimization project are Microsoft's Phi-2 small language model, Meta's Llama large language model, OpenAI's Whisper speech-recognition model, Stability AI's Stable Diffusion 1.5 text-to-image model, Google's BERT natural language understanding model, and the Mistral language model.
Intel emphasized the importance of its optimization work, stating that models form the backbone of AI-enhanced software features such as object removal, image super-resolution, and text summarization. The models can run on any of the Core Ultra's three compute engines: the CPU, the GPU, and the neural processing unit (NPU).
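In practice, an application using Intel's OpenVINO toolkit (mentioned below) sees the CPU, GPU, and NPU as separately addressable execution devices and typically probes which are present before compiling a model. The fallback logic can be sketched as follows; the OpenVINO calls are shown in comments, and the device names and model path are illustrative, not taken from the article:

```python
def pick_device(available, preferred=("NPU", "GPU", "CPU")):
    """Return the first preferred execution device present on this machine."""
    for dev in preferred:
        if dev in available:
            return dev
    raise RuntimeError("no supported execution device found")

# With OpenVINO installed (pip install openvino), the probe would look like:
#   import openvino as ov
#   core = ov.Core()
#   device = pick_device(core.available_devices)
#   compiled = core.compile_model(core.read_model("model.xml"), device)

# A Core Ultra system exposes all three engines, so the NPU wins;
# an older machine without an NPU or discrete GPU falls back to the CPU.
print(pick_device(["CPU", "GPU", "NPU"]))  # NPU
print(pick_device(["CPU"]))                # CPU
```

Preferring the NPU keeps sustained AI workloads off the CPU and GPU, which is the power-efficiency argument Intel makes for the AI PC category.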
Intel is in an arms race with rivals AMD and Qualcomm to provide the best processors for AI PCs and to enable compelling software experiences that drive demand for their respective products. Intel's software-enablement work builds on its investments in client AI processing, framework optimizations, AI tools such as OpenVINO, and the broader work required to establish the AI PC as a new device category.
Robert Hallock, vice president and general manager of AI and technical marketing in Intel's Client Computing Group, stated that Intel's unmatched selection of models reflects its commitment to building the most robust toolchain for AI developers and a rock-solid foundation for AI software users.
This development marks a significant step forward in Intel's efforts to lead the AI PC market and establish itself as the go-to supplier for AI chip solutions.
Source: <https://www.crn.com/news/components-peripherals/2024/intel-500-ai-models-have-been-optimized-for-core-ultra-processors>