Io.net Integrates Apple Silicon Chips to Power AI and Machine Learning Services
Summary:
Decentralized physical infrastructure network io.net is set to incorporate Apple silicon hardware into its machine learning and AI services. The platform, built on Solana's blockchain, aggregates GPU computation from varied sources. The recent upgrade makes io.net the first cloud service to support clustering of Apple silicon chips for machine learning applications, allowing users to offer computational power from a range of Apple chips. This integration could help meet the rising demand for AI and ML computing resources while enabling millions of Apple users to contribute spare chip and computing capacity.
The newly launched decentralized physical infrastructure network (DePIN) io.net is gearing up to integrate Apple silicon hardware into its AI and machine learning offerings. Io.net has built a Solana-based decentralized network that aggregates GPU computational power from data centers, crypto miners, and decentralized storage providers, enabling robust AI and ML computing. The platform's beta version was unveiled at the Solana Breakpoint conference in Amsterdam in November 2023, alongside the announcement of a new partnership with Render Network. Io.net says the upgrade makes it the first cloud service to enable clustering of Apple silicon chips for machine learning applications, allowing engineers anywhere in the world to link Apple chips for AI and ML computations.
Earlier Cointelegraph articles have covered in detail how io.net leverages low-cost GPU computing resources for AI and ML workloads. Payments to GPU and central processing unit (CPU) providers are settled on Solana's blockchain. Tory Green, io.net's chief operating officer, credits Solana's architecture as a fit for the volume of transactions and inferences io.net will enable: the platform sources GPU computational power in clusters, handles thousands of inferences, and meters hardware utilization through related microtransactions. The upgrade now lets io.net users offer computational power from a wide range of Apple silicon chips, including the M1, M1 Pro, M1 Max, M1 Ultra, M2, M2 Pro, M2 Max, M2 Ultra, and the M3, M3 Pro, and M3 Max models.
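As a purely illustrative sketch: io.net has not published the accounting logic described above, so every name below (`InferenceJob`, `settle_cluster`, the rate constant) is hypothetical. It shows one simple way per-inference microtransactions for hardware utilization could be tallied across cluster providers:

```python
from dataclasses import dataclass

# Assumed price unit per GPU-second; io.net's real pricing is not given here.
RATE_PER_GPU_SECOND = 0.0004

@dataclass
class InferenceJob:
    provider: str       # which cluster node served the inference
    gpu_seconds: float  # measured hardware utilization for this inference

def settle_cluster(jobs):
    """Aggregate the micropayments owed to each provider in a cluster."""
    owed = {}
    for job in jobs:
        owed[job.provider] = owed.get(job.provider, 0.0) \
            + job.gpu_seconds * RATE_PER_GPU_SECOND
    return owed

payouts = settle_cluster([
    InferenceJob("m3-max-node", 2.5),
    InferenceJob("m2-ultra-node", 1.0),
    InferenceJob("m3-max-node", 0.5),
])
# payouts now maps each provider to its accumulated micropayment total
```

In a live network these totals would then be settled on-chain (on Solana, per the article) rather than held in a Python dictionary.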
Apple's M3 chip series is built on a cutting-edge three-nanometer process, which Apple touts as the future of GPU architecture. By comparison, io.net notes that the up-to-128-gigabyte unified memory architecture of Apple's M3 chips exceeds the capacity of Nvidia's flagship A100-80-gigabyte graphics cards. With a neural engine 60% faster than the M1 generation's, Apple's M3 chips are well suited to model inference tasks, such as running live data through a trained AI model to produce predictions or solutions. Ahmad Shadid, the founder of io.net, believes that Apple chip support could balance hardware supply with the surging demand for AI and ML computational capacity. He considers it a step toward democratizing access to powerful computational resources and an opportunity for millions of Apple users to earn rewards for fueling the AI revolution. By enabling Apple hardware support, io.net expects millions of Apple device owners to lend their spare chip and computing capacity to AI and ML initiatives.
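For context on how inference actually reaches Apple silicon GPUs: frameworks such as PyTorch expose these chips through the Metal Performance Shaders (MPS) backend. The helper below is our own illustration, not io.net's code; its preference order (Apple silicon first, then Nvidia CUDA, then CPU) is an assumption. In real code the two booleans would come from `torch.backends.mps.is_available()` and `torch.cuda.is_available()`:

```python
def pick_inference_device(mps_available: bool, cuda_available: bool) -> str:
    """Choose an accelerator for model inference.

    Preference order here is an assumption for illustration:
    Apple silicon GPU (MPS), then Nvidia CUDA, then plain CPU.
    """
    if mps_available:
        return "mps"   # Apple silicon GPU via Metal Performance Shaders
    if cuda_available:
        return "cuda"  # Nvidia GPU
    return "cpu"

# With PyTorch on an M3 machine, a contributor's node might run:
#   device = pick_inference_device(torch.backends.mps.is_available(),
#                                  torch.cuda.is_available())
#   model.to(device)
```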
Published At
3/13/2024 5:00:00 PM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.