Super Micro Computer's Stock Drops 15% in Two Days Despite Record Revenue Growth and Top AI Benchmark Performance

Market Brief | Friday, Apr 4, 2025 7:47 pm ET
2 min read

On April 4, 2025, Super Micro Computer (SMCI) closed at $34.24, down 7.74% and extending its losing streak to a second day for a two-day decline of 15.02%. Trading volume reached 15.91 billion, ranking 99th in the day's market activity.

Super Micro Computer reported 110% year-over-year revenue growth in FY2024, with GAAP net income surging 80% over the previous year, underscoring the company's robust growth trajectory.

Super Micro Computer has achieved industry-leading performance in the MLPerf Inference v5.0 benchmarks using NVIDIA HGX B200 8-GPU systems. The company's 4U liquid-cooled and 10U air-cooled systems demonstrated more than 3x token generation per second compared with H200 8-GPU systems on the Llama2-70B and Llama3.1-405B benchmarks. Key results include 129,000 tokens/second for Mixtral 8x7B inference, over 1,000 tokens/second for the Llama3.1-405B model, and 62,265.70 tokens/second for llama2-70b-interactive-99.

The company offers more than 100 GPU-optimized systems with both cooling options. Its new liquid-cooling technology features enhanced cold plates and a 250kW coolant distribution unit, doubling the previous generation's cooling capacity. The air-cooled 10U system accommodates eight 1,000W TDP Blackwell GPUs and delivers up to 15x inference and 3x training performance.

Supermicro's announcement marks a significant competitive advantage in the AI infrastructure market with its first-to-market NVIDIA HGX B200 systems. The 3x increase in token generation over previous H200-based systems represents a substantial leap forward for large language model inference workloads. Particularly impressive is Supermicro's dual approach: both its liquid-cooled and air-cooled solutions achieved top benchmark positions. That demonstrates exceptional thermal engineering, especially considering the 1,000W TDP of each Blackwell GPU. The company's cooling innovations, including newly developed cold plates and 250kW coolant distribution units, address the power-density challenges that have become bottlenecks in AI data centers.

The MLPerf results provide credible validation of Supermicro's performance claims, with standout figures including 129,047 tokens/second for Mixtral 8x7B and 1,521 tokens/second for the massive Llama3.1-405B model. These metrics suggest Supermicro's systems will be particularly compelling for enterprises running inference workloads at scale. The company's building-block architecture has enabled rapid time to market, a crucial advantage in the fast-moving AI hardware space, where being first with next-generation technology typically translates into premium pricing opportunities and customer mindshare. The fact that Supermicro is already delivering these systems to customers while conducting benchmarks indicates strong production readiness and supply-chain execution.

The performance leadership Supermicro has demonstrated with its B200-based systems represents more than an incremental improvement; it is a step change that could reshape AI infrastructure economics. When inference performance triples, organizations can achieve the same workload throughput with fewer systems, potentially reducing total cost of ownership despite higher upfront hardware costs.
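To make that throughput arithmetic concrete, here is a minimal back-of-envelope sketch. The B200 tokens-per-second figure is the llama2-70b-interactive result cited above; the H200 figure is simply derived from the roughly 3x claim, and the target workload and relative system costs are hypothetical placeholders, not vendor pricing.

```python
# Back-of-envelope sketch of the "3x throughput -> fewer systems" argument.
# B200 tokens/second comes from the MLPerf result cited above; the H200
# figure is derived from the ~3x claim. The target workload and relative
# system costs are hypothetical placeholders, not actual pricing.

import math

TARGET_TOKENS_PER_SEC = 500_000      # hypothetical aggregate inference demand

B200_TPS = 62_265                    # Llama2-70B Interactive result cited in the article
H200_TPS = B200_TPS / 3              # implied by the ~3x generational claim

B200_RELATIVE_COST = 3.0             # hypothetical cost units per system
H200_RELATIVE_COST = 2.0             # hypothetical: B200 assumed pricier up front

def systems_needed(per_system_tps: float) -> int:
    """Whole systems required to hit the aggregate throughput target."""
    return math.ceil(TARGET_TOKENS_PER_SEC / per_system_tps)

for name, tps, cost in [("HGX H200", H200_TPS, H200_RELATIVE_COST),
                        ("HGX B200", B200_TPS, B200_RELATIVE_COST)]:
    n = systems_needed(tps)
    print(f"{name}: {n} systems, relative fleet cost {n * cost:.1f}")
```

Under these assumed numbers, the B200 fleet is roughly a third the size and, even at a 50% per-system price premium, roughly half the total hardware outlay, which is the essence of the total-cost-of-ownership argument.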
Supermicro's engineering achievement is particularly notable in the context of the world's largest language models. Its systems achieved 1,080 tokens/second on the Llama3.1-405B (server) benchmark; models of this scale were previously impractical for real-time inference. This capability opens new possibilities for enterprise AI applications that require both massive parameter counts and responsive user experiences.

The dual cooling approach (air and liquid) shows strategic market awareness. While liquid cooling delivers maximum performance density, many enterprises still prefer air cooling for simplicity and compatibility with existing infrastructure. By optimizing both solutions to perform comparably "within operating margin," Supermicro addresses the full spectrum of deployment environments. Its rack-scale redesign, with vertical coolant distribution manifolds that no longer consume valuable rack units, demonstrates sophisticated system-level thinking: up to 96 NVIDIA Blackwell GPUs can be packed into a 52U rack, extraordinary compute density that addresses data center space constraints while maximizing AI processing capability per square foot. For organizations building or expanding AI infrastructure, this density advantage translates into tangible real estate and operational savings.
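A similar rough sketch applies to the rack-density claim. The GPU count, per-GPU TDP, and CDU capacity below come from the figures quoted above; the overhead multiplier for CPUs, networking, and power-conversion losses, and the pairing of one CDU per rack, are illustrative assumptions rather than Supermicro specifications.

```python
# Rough rack-power estimate for the 52U, 96-GPU Blackwell configuration.
# GPU count, 1,000 W TDP, and 250 kW CDU capacity are quoted in the article;
# the overhead factor and one-CDU-per-rack pairing are assumptions.

GPUS_PER_RACK = 96
GPU_TDP_W = 1_000
OVERHEAD_FACTOR = 1.4        # hypothetical allowance for CPUs, NICs, fans, PSU losses
CDU_CAPACITY_KW = 250

gpu_power_kw = GPUS_PER_RACK * GPU_TDP_W / 1_000
rack_power_kw = gpu_power_kw * OVERHEAD_FACTOR

print(f"GPU power alone:            {gpu_power_kw:.0f} kW per rack")
print(f"Estimated total rack power: {rack_power_kw:.0f} kW per rack")
print(f"CDU headroom at estimate:   {CDU_CAPACITY_KW - rack_power_kw:.0f} kW")
```

Even under these assumptions, a single rack draws on the order of 100 kW or more, which is why the upgraded cold plates and higher-capacity coolant distribution are central to the density claim.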
