AMD showcased its AI innovations, including new accelerators and collaborations with major tech companies, at the 2025 Advancing AI event.
Quiver AI Summary
AMD announced its end-to-end AI platform at the 2025 Advancing AI event, showcasing its comprehensive AI infrastructure that integrates leadership GPUs, CPUs, networking, and open software. The company introduced the AMD Instinct MI350 Series accelerators, which offer significant advances in AI compute and inferencing capabilities, achieving notable efficiency improvements. Partners like Meta, OpenAI, and Microsoft discussed their collaborations with AMD to enhance AI solutions and infrastructure. AMD's initiatives also highlighted the expansion of the ROCm open software stack and a new target for increased energy efficiency in AI training by 2030. The launch of the AMD Developer Cloud aims to lower barriers for AI development, enabling wider access to high-performance computing resources and fostering innovation through strategic partnerships.
Potential Positives
- AMD introduced its new AMD Instinct MI350 Series GPUs, achieving a significant 4x increase in AI compute performance and a 35x improvement in inferencing compared to previous generations.
- The company is advancing its open rack-scale AI infrastructure, which is already being deployed in hyperscaler environments like Oracle Cloud Infrastructure, indicating strong demand and collaboration within the industry.
- AMD showcased its new 2030 goal to achieve a 20x increase in rack-scale energy efficiency, which reflects the company's commitment to sustainability and innovation in AI technology.
- The broad availability of the AMD Developer Cloud aims to lower barriers for developers, expanding access to high-performance AI development tools and fostering wider adoption of AMD solutions in the AI ecosystem.
Potential Negatives
- AMD faces significant competition from established players like Intel and Nvidia, which are noted for their aggressive business practices, potentially impacting AMD's market position and sales.
- The reliance on third-party manufacturers for product supply may pose risks related to timely production and technology competitiveness.
- The press release highlights numerous forward-looking statements, which indicate potential risks and uncertainties that could significantly affect AMD's projected outcomes and market performance.
FAQ
What is AMD's role in the AI ecosystem?
AMD powers the full spectrum of AI through leadership in GPUs, CPUs, networking, and open software for unmatched performance.
Which companies are partnering with AMD for AI?
Partners include Meta, OpenAI, Oracle, Microsoft, Cohere, and others collaborating on AI solutions.
What is the AMD Instinct MI350 Series?
The MI350 Series accelerators offer significant performance improvements in AI compute and inferencing capabilities.
What advancements does AMD ROCm 7 bring?
ROCm 7 enhances support for standard frameworks and provides new tools to improve AI development efficiencies.
How does AMD's AI infrastructure help users?
AMD's open rack-scale infrastructure provides flexible, high-performance environments for AI projects, lowering barriers for developers.
Disclaimer: This is an AI-generated summary of a press release distributed by GlobeNewswire. The model used to summarize this release may make mistakes. See the full release here.
$AMD Congressional Stock Trading
Members of Congress have traded $AMD stock 16 times in the past 6 months. Of those trades, 12 have been purchases and 4 have been sales.
Here’s a breakdown of recent trading of $AMD stock by members of Congress over the last 6 months:
- REPRESENTATIVE ROBERT BRESNAHAN has traded it 3 times. They made 1 purchase worth up to $50,000 on 05/15 and 2 sales worth up to $65,000 on 04/08, 01/13.
- REPRESENTATIVE MARJORIE TAYLOR GREENE has traded it 9 times. They made 9 purchases worth up to $135,000 on 05/14, 05/05, 04/09, 04/08, 01/21, 01/08 and 0 sales.
- REPRESENTATIVE JEFFERSON SHREVE has traded it 3 times. They made 2 purchases worth up to $150,000 on 03/31, 03/05 and 1 sale worth up to $100,000 on 04/07.
- SENATOR MARKWAYNE MULLIN sold up to $100,000 on 02/13.
To track congressional stock trading, check out Quiver Quantitative's congressional trading dashboard.
$AMD Insider Trading Activity
$AMD insiders have traded $AMD stock on the open market 9 times in the past 6 months. Of those trades, 2 have been purchases and 7 have been sales.
Here’s a breakdown of recent trading of $AMD stock by insiders over the last 6 months:
- MARK D PAPERMASTER (Chief Technology Officer & EVP) has made 0 purchases and 7 sales selling 52,500 shares for an estimated $5,465,637.
- PHILIP GUIDO (EVP & Chief Commercial Officer) has made 2 purchases buying 13,445 shares for an estimated $1,498,944 and 0 sales.
To track insider transactions, check out Quiver Quantitative's insider trading dashboard.
$AMD Hedge Fund Activity
We have seen 1,162 institutional investors add shares of $AMD stock to their portfolio, and 1,513 decrease their positions in their most recent quarter.
Here are some of the largest recent moves:
- FISHER ASSET MANAGEMENT, LLC removed 22,735,111 shares (-94.4%) from their portfolio in Q1 2025, for an estimated $2,335,805,304
- JENNISON ASSOCIATES LLC removed 9,096,394 shares (-100.0%) from their portfolio in Q1 2025, for an estimated $934,563,519
- INFINITUM ASSET MANAGEMENT, LLC added 7,191,800 shares (a new position) to their portfolio in Q1 2025, for an estimated $738,885,532
- MARSHALL WACE, LLP added 6,944,358 shares (+327.1%) to their portfolio in Q1 2025, for an estimated $713,463,340
- BAILLIE GIFFORD & CO removed 6,350,514 shares (-100.0%) from their portfolio in Q1 2025, for an estimated $652,451,808
- JANE STREET GROUP, LLC added 6,187,535 shares (+238.4%) to their portfolio in Q1 2025, for an estimated $635,707,345
- T. ROWE PRICE INVESTMENT MANAGEMENT, INC. added 5,915,185 shares (+131.5%) to their portfolio in Q1 2025, for an estimated $607,726,106
To track hedge funds' stock portfolios, check out Quiver Quantitative's institutional holdings dashboard.
$AMD Analyst Ratings
Wall Street analysts have issued reports on $AMD in the last several months. We have seen 8 firms issue buy ratings on the stock, and 1 firm issue a sell rating.
Here are some recent analyst ratings:
- Cantor Fitzgerald issued an "Overweight" rating on 06/09/2025
- Barclays issued an "Overweight" rating on 05/16/2025
- Wedbush issued an "Outperform" rating on 05/07/2025
- UBS issued a "Buy" rating on 05/07/2025
- Bank of America Securities issued a "Buy" rating on 04/22/2025
- Piper Sandler issued an "Overweight" rating on 02/19/2025
- Wells Fargo issued an "Overweight" rating on 02/19/2025
To track analyst ratings and price targets for $AMD, check out Quiver Quantitative's $AMD forecast page.
$AMD Price Targets
Multiple analysts have issued price targets for $AMD recently. We have seen 17 analysts offer price targets for $AMD in the last 6 months, with a median target of $140.0.
Here are some recent targets:
- Tom O'Malley from Barclays set a target price of $130.0 on 05/16/2025
- Timothy Arcuri from UBS set a target price of $150.0 on 05/07/2025
- Ben Reitzes from Melius Research set a target price of $110.0 on 05/07/2025
- Ruben Roy from Stifel Nicolaus set a target price of $132.0 on 04/28/2025
- Vivek Arya from Loop Capital Markets set a target price of $160.0 on 04/16/2025
- An analyst from Argus Research set a target price of $160.0 on 02/06/2025
- An analyst from Truist Financial set a target price of $130.0 on 02/05/2025
Full Release
— Only AMD powers the full spectrum of AI, bringing together leadership GPUs, CPUs, networking and open software to deliver unmatched flexibility and performance —
— Meta, OpenAI, xAI, Oracle, Microsoft, Cohere, HUMAIN, Red Hat, Astera Labs and Marvell discussed how they are partnering with AMD for AI solutions —
SANTA CLARA, Calif., June 12, 2025 (GLOBE NEWSWIRE) -- AMD (NASDAQ: AMD) delivered its comprehensive, end-to-end integrated AI platform vision and introduced its open, scalable rack-scale AI infrastructure built on industry standards at its 2025 Advancing AI event.
AMD and its partners showcased:
- How they are building the open AI ecosystem with the new AMD Instinct™ MI350 Series accelerators
- The continued growth of the AMD ROCm™ ecosystem
- The company’s powerful, new, open rack-scale designs and roadmap that bring leadership rack-scale AI performance beyond 2027
“AMD is driving AI innovation at an unprecedented pace, highlighted by the launch of our AMD Instinct MI350 series accelerators, advances in our next generation AMD ‘Helios’ rack-scale solutions, and growing momentum for our ROCm open software stack,” said Dr. Lisa Su, AMD chair and CEO. “We are entering the next phase of AI, driven by open standards, shared innovation and AMD’s expanding leadership across a broad ecosystem of hardware and software partners who are collaborating to define the future of AI.”
AMD Delivers Leadership Solutions to Accelerate an Open AI Ecosystem
AMD announced a broad portfolio of hardware, software and solutions to power the full spectrum of AI:
- AMD unveiled the Instinct MI350 Series GPUs, setting a new benchmark for performance, efficiency and scalability in generative AI and high-performance computing. The MI350 Series, consisting of both Instinct MI350X and MI355X GPUs and platforms, delivers a 4x generation-on-generation AI compute increase^i and a 35x generational leap in inferencing^ii, paving the way for transformative AI solutions across industries. MI355X also delivers significant price-performance gains, generating up to 40% more tokens-per-dollar compared to competing solutions^iii. More details are available in this blog from Vamsi Boppana, AMD SVP, AI.
- AMD demonstrated end-to-end, open-standards rack-scale AI infrastructure—already rolling out with AMD Instinct MI350 Series accelerators, 5th Gen AMD EPYC™ processors and AMD Pensando™ Pollara NICs in hyperscaler deployments such as Oracle Cloud Infrastructure (OCI) and set for broad availability in 2H 2025.
- AMD also previewed its next-generation AI rack, “Helios.” It will be built on the next-generation AMD Instinct MI400 Series GPUs – which, compared to the previous generation, are expected to deliver up to 10x more performance running inference on Mixture of Experts models^iv – the “Zen 6”-based AMD EPYC “Venice” CPUs and AMD Pensando “Vulcano” NICs. More details are available in this blog post.
- The latest version of the AMD open-source AI software stack, ROCm 7, is engineered to meet the growing demands of generative AI and high-performance computing workloads—while dramatically improving developer experience across the board. ROCm 7 features improved support for industry-standard frameworks, expanded hardware compatibility and new development tools, drivers, APIs and libraries to accelerate AI development and deployment. More details are available in this blog post from Anush Elangovan, AMD CVP of AI Software Development .
- The Instinct MI350 Series exceeded AMD’s five-year goal to improve the energy efficiency of AI training and high-performance computing nodes by 30x, ultimately delivering a 38x improvement^v. AMD also unveiled a new 2030 goal to deliver a 20x increase in rack-scale energy efficiency from a 2024 base year^vi, enabling a typical AI model that today requires more than 275 racks to be trained in fewer than one fully utilized rack by 2030, using 95% less electricity^vii. More details are available in this blog post from Sam Naffziger, AMD SVP and Corporate Fellow.
- AMD also announced the broad availability of the AMD Developer Cloud for the global developer and open-source communities. Purpose-built for rapid, high-performance AI development, users will have access to a fully managed cloud environment with the tools and flexibility to get started with AI projects – and grow without limits. With ROCm 7 and the AMD Developer Cloud, AMD is lowering barriers and expanding access to next-gen compute. Strategic collaborations with leaders like Hugging Face, OpenAI and Grok are proving the power of co-developed, open solutions.
Broad Partner Ecosystem Showcases AI Progress Powered by AMD
Today, seven of the 10 largest model builders and AI companies are running production workloads on Instinct accelerators. Among those companies are Meta, OpenAI, Microsoft and xAI, who joined AMD and other partners at Advancing AI to discuss how they are working with AMD for AI solutions to train today’s leading AI models, power inference at scale and accelerate AI exploration and development:
- Meta detailed how Instinct MI300X is broadly deployed for Llama 3 and Llama 4 inference. Meta shared excitement for MI350 and its compute power, performance-per-TCO and next-generation memory. Meta continues to collaborate closely with AMD on AI roadmaps, including plans for the Instinct MI400 Series platform.
- OpenAI CEO Sam Altman discussed the importance of holistically optimized hardware, software and algorithms and OpenAI’s close partnership with AMD on AI infrastructure, with research and GPT models on Azure in production on MI300X, as well as deep design engagements on MI400 Series platforms.
- Oracle Cloud Infrastructure (OCI) is among the first industry leaders to adopt the AMD open rack-scale AI infrastructure with AMD Instinct MI355X GPUs. OCI leverages AMD CPUs and GPUs to deliver balanced, scalable performance for AI clusters, and announced it will offer zettascale AI clusters accelerated by the latest AMD Instinct processors with up to 131,072 MI355X GPUs, enabling customers to build, train and run inference on AI at scale.
- HUMAIN discussed its landmark agreement with AMD to build open, scalable, resilient and cost-efficient AI infrastructure leveraging the full spectrum of computing platforms only AMD can provide.
- Microsoft announced Instinct MI300X is now powering both proprietary and open-source models in production on Azure.
- Cohere shared that its high-performance, scalable Command models are deployed on Instinct MI300X, powering enterprise-grade LLM inference with high throughput, efficiency and data privacy.
- Red Hat described how its expanded collaboration with AMD enables production-ready AI environments, with AMD Instinct GPUs on Red Hat OpenShift AI delivering powerful, efficient AI processing across hybrid cloud environments.
- Astera Labs highlighted how the open UALink ecosystem accelerates innovation and delivers greater value to customers and shared plans to offer a comprehensive portfolio of UALink products to support next-generation AI infrastructure.
- Marvell joined AMD to highlight its collaboration as part of the UALink Consortium developing an open interconnect, bringing the ultimate flexibility for AI infrastructure.
Supporting Resources
- Learn more about the event here.
- Access the AAI 2025 press kit here.
- Learn more about AMD AI solutions here.
- Connect with AMD on LinkedIn
- Follow AMD on X: AMD, AMD AI
About AMD
For more than 55 years, AMD has driven innovation in high-performance computing, graphics, and visualization technologies. Hundreds of millions of consumers, Fortune 500 businesses, and leading scientific research facilities around the world rely on AMD technology to improve how they live, work, and play. AMD employees are focused on building leadership high-performance and adaptive products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit www.amd.com.
Cautionary Statement
This press release contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the features, functionality, performance, availability, timing and expected benefits of AMD products and roadmaps; AMD’s AI platform; and AMD’s partner ecosystem, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as "would," "may," "expects," "believes," "plans," "intends," "projects" and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this press release are based on current beliefs, assumptions and expectations, speak only as of the date of this press release and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD's control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. 
Material factors that could cause actual results to differ materially from current expectations include, without limitation, the following: Intel Corporation’s dominance of the microprocessor market and its aggressive business practices; Nvidia’s dominance in the graphics processing unit market and its aggressive business practices; competitive markets in which AMD’s products are sold; the cyclical nature of the semiconductor industry; market conditions of the industries in which AMD products are sold; AMD’s ability to introduce products on a timely basis with expected features and performance levels; loss of a significant customer; economic and market uncertainty; quarterly and seasonal sales patterns; AMD's ability to adequately protect its technology or other intellectual property; unfavorable currency exchange rate fluctuations; ability of third party manufacturers to manufacture AMD's products on a timely basis in sufficient quantities and using competitive technologies; availability of essential equipment, materials, substrates or manufacturing processes; ability to achieve expected manufacturing yields for AMD’s products; AMD's ability to generate revenue from its semi-custom SoC products; potential security vulnerabilities; potential security incidents including IT outages, data loss, data breaches and cyberattacks; uncertainties involving the ordering and shipment of AMD’s products; AMD’s reliance on third-party intellectual property to design and introduce new products; AMD's reliance on third-party companies for design, manufacture and supply of motherboards, software, memory and other computer platform components; AMD's reliance on Microsoft and other software vendors' support to design and develop software to run on AMD’s products; AMD’s reliance on third-party distributors and add-in-board partners; impact of modification or interruption of AMD’s internal business processes and information systems; compatibility of AMD’s products with some or all 
industry-standard software and hardware; costs related to defective products; efficiency of AMD's supply chain; AMD's ability to rely on third party supply-chain logistics functions; AMD’s ability to effectively control sales of its products on the gray market; long-term impact of climate change on AMD’s business; impact of government actions and regulations such as export regulations, tariffs and trade protection measures, and licensing requirements; AMD’s ability to realize its deferred tax assets; potential tax liabilities; current and future claims and litigation; impact of environmental laws, conflict minerals related provisions and other laws or regulations; evolving expectations from governments, investors, customers and other stakeholders regarding corporate responsibility matters; issues related to the responsible use of AI; restrictions imposed by agreements governing AMD’s notes, the guarantees of Xilinx’s notes, the revolving credit agreement and the ZT Systems credit agreement; impact of acquisitions, joint ventures and/or strategic investments on AMD’s business and AMD’s ability to integrate acquired businesses, including ZT Systems; AMD’s ability to sell the ZT Systems manufacturing business; impact of any impairment of the combined company’s assets; political, legal and economic risks and natural disasters; future impairments of technology license purchases; AMD’s ability to attract and retain qualified personnel; and AMD’s stock price volatility. Investors are urged to review in detail the risks and uncertainties in AMD’s Securities and Exchange Commission filings, including but not limited to AMD’s most recent reports on Forms 10-K and 10-Q.
AMD, the AMD Arrow logo, EPYC, AMD CDNA, AMD Instinct, Pensando, ROCm, Ryzen, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other names are for informational purposes only and may be trademarks of their respective owners.
__________________________
i MI350-004: Based on calculations by AMD Performance Labs in May 2025 to determine the peak theoretical precision performance of eight (8) AMD Instinct™ MI355X and MI350X GPUs (Platform) and eight (8) AMD Instinct MI325X, MI300X, MI250X and MI100 GPUs (Platform) using the FP16, FP8, FP6 and FP4 datatypes with Matrix. Server manufacturers may vary configurations, yielding different results. Results may vary based on use of the latest drivers and optimizations.
ii MI350-044: Based on AMD internal testing as of 6/9/2025, using an 8-GPU AMD Instinct™ MI355X platform measuring text-generation online serving inference throughput for the Llama 3.1-405B chat model (FP4) compared to an 8-GPU AMD Instinct™ MI300X platform (FP8). Testing used an input length of 32768 tokens and an output length of 1024 tokens, with concurrency set to the best available throughput achieving ~60ms on each platform: 1 for MI300X (35.3ms) and 64 for MI355X platforms (50.6ms). Server manufacturers may vary configurations, yielding different results. Performance may vary based on use of latest drivers and optimizations.
iii Based on performance testing by AMD Labs as of 6/6/2025, measuring the text generated inference throughput on the LLaMA 3.1-405B model using the FP4 datatype with various combinations of input, output token length with AMD Instinct™ MI355X 8x GPU, and published results for the NVIDIA B200 HGX 8xGPU. Performance per dollar calculated with current pricing for NVIDIA B200 available from Coreweave website and expected Instinct MI355X based cloud instance pricing. Server manufacturers may vary configurations, yielding different results. Performance may vary based on use of latest drivers and optimizations. Current customer pricing as of June 10, 2025, and subject to change. MI350-049
iv MI400-001: Performance projection as of 06/05/2025 using engineering estimates based on the design of a future AMD Instinct MI400 Series GPU compared to the Instinct MI355X, with 2K and 16K prefill with TP8, EP8 and projected inference performance, and using a GenAI training model evaluated with GEMM and Attention algorithms for the Instinct MI400 Series. Results may vary when products are released in market.
v EPYC-030a: Calculation includes 1) base case kWhr use projections in 2025 conducted with Koomey Analytics based on available research and data that includes segment-specific projected 2025 deployment volumes and data center power utilization effectiveness (PUE), including GPU HPC and machine learning (ML) installations, and 2) AMD CPU and GPU node power consumptions incorporating segment-specific utilization (active vs. idle) percentages, multiplied by PUE to determine actual total energy use for calculation of the performance per Watt. 38x is calculated using the following formula: (base case HPC node kWhr use projection in 2025 * AMD 2025 perf/Watt improvement using DGEMM and TEC + base case ML node kWhr use projection in 2025 * AMD 2025 perf/Watt improvement using ML math and TEC) / (base case projected kWhr usage in 2025). For more information, see https://www.amd.com/en/corporate/corporate-responsibility/data-center-sustainability.html.
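The 38x figure in footnote v is a usage-weighted blend of the HPC and ML node efficiency gains. A minimal sketch of that formula, using illustrative placeholder values (AMD's actual base-case energy projections and per-segment gains are not given in this release):

```python
def blended_efficiency_gain(hpc_kwh, hpc_gain, ml_kwh, ml_gain):
    """Usage-weighted perf/Watt improvement per the footnote-v formula:
    (hpc_kwh * hpc_gain + ml_kwh * ml_gain) / (hpc_kwh + ml_kwh),
    where the kWh terms are 2025 base-case energy-use projections and
    the gain terms are the per-segment perf/Watt improvement factors."""
    return (hpc_kwh * hpc_gain + ml_kwh * ml_gain) / (hpc_kwh + ml_kwh)

# Hypothetical inputs chosen only to show the mechanics of the blend:
print(blended_efficiency_gain(hpc_kwh=1.0, hpc_gain=25.0, ml_kwh=4.0, ml_gain=41.0))
```

The blend weights each segment's improvement by its projected share of energy use, so the ML term dominates when ML installations account for most of the projected kWh.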
vi AMD modeled advanced racks for AI training/inference for each year (2024 to 2030) based on AMD roadmaps, also examining historical trends to inform rack design choices and technology improvements and to align projected goals with historical trends. The 2024 rack is based on the MI300X node, which is comparable to the Nvidia H100 and reflects current common practice in AI deployments in the 2024/2025 timeframe. The 2030 rack is based on AMD system and silicon design expectations for that timeframe. In each case, AMD specified components like GPUs, CPUs, DRAM, storage, cooling, and communications, tracking component and defined rack characteristics for power and performance. Calculations do not include power used for cooling air or water supply outside the racks but do include power for fans and pumps internal to the racks.
Performance improvements are estimated based on progress in compute output (delivered, sustained, not peak FLOPS), memory (HBM) bandwidth, and network (scale-up) bandwidth, expressed as indices and weighted by the following factors for training and inference.
|           | FLOPS | HBM BW | Scale-up BW |
| Training  | 70.0% | 10.0%  | 20.0%       |
| Inference | 45.0% | 32.5%  | 22.5%       |
Performance and power use per rack together imply trends in performance per watt over time for training and inference, then indices for progress in training and inference are weighted 50:50 to get the final estimate of AMD projected progress by 2030 (20x). The performance number assumes continued AI model progress in exploiting lower precision math formats for both training and inference which results in both an increase in effective FLOPS and a reduction in required bandwidth per FLOP.
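The weighting scheme above can be sketched directly. The component weights come from footnote vi (FLOPS/HBM bandwidth/scale-up bandwidth at 70/10/20 for training and 45/32.5/22.5 for inference); the 2024-to-2030 improvement factors below are hypothetical placeholders, and the release does not state whether the blend is arithmetic (assumed here) or geometric:

```python
# Component weights stated in footnote vi.
TRAIN_W = {"flops": 0.70, "hbm_bw": 0.10, "scaleup_bw": 0.20}
INFER_W = {"flops": 0.45, "hbm_bw": 0.325, "scaleup_bw": 0.225}

def weighted_index(gains, weights):
    """Weighted blend of per-component improvement indices (arithmetic blend assumed)."""
    return sum(weights[k] * gains[k] for k in weights)

def projected_progress(gains):
    """50:50 blend of the training and inference indices, per footnote vi."""
    return 0.5 * weighted_index(gains, TRAIN_W) + 0.5 * weighted_index(gains, INFER_W)

# Hypothetical 2024->2030 improvement factors, for illustration only:
gains = {"flops": 25.0, "hbm_bw": 12.0, "scaleup_bw": 15.0}
print(projected_progress(gains))
```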
vii AMD estimated the number of racks to train a typical notable AI model based on EPOCH AI data ( https://epoch.ai ). For this calculation we assume, based on these data, that a typical model takes 10^25 floating point operations to train (based on the median of 2025 data), and that this training takes place over 1 month. FLOPS needed = 10^25 FLOPs/(seconds/month)/Model FLOPs utilization (MFU) = 10^25/(2.6298*10^6)/0.6. Racks = FLOPS needed/(FLOPS/rack in 2024 and 2030). The compute performance estimates from the AMD roadmap suggest that approximately 276 racks would be needed in 2025 to train a typical model over one month using the MI300X product (assuming 22.656 PFLOPS/rack with 60% MFU) and <1 fully utilized rack would be needed to train the same model in 2030 using a rack configuration based on an AMD roadmap projection. These calculations imply a >276-fold reduction in the number of racks to train the same model over this six-year period. Electricity use for a MI300X system to completely train a defined 2025 AI model using a 2024 rack is calculated at ~7 GWh, whereas the future 2030 AMD system could train the same model using ~350 MWh, a 95% reduction. AMD then applied carbon intensities per kWh from the International Energy Agency World Energy Outlook 2024 [https://www.iea.org/reports/world-energy-outlook-2024]. IEA’s stated policy case gives carbon intensities for 2023 and 2030. We determined the average annual change in intensity from 2023 to 2030 and applied that to the 2023 intensity to get the 2024 intensity (434 CO2 g/kWh) versus the 2030 intensity (312 CO2 g/kWh). Emissions for the 2024 baseline scenario of 7 GWh x 434 CO2 g/kWh equate to approximately 3,000 metric tCO2, versus the future 2030 scenario of 350 MWh x 312 CO2 g/kWh, which equates to around 100 metric tCO2.
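The rack-count arithmetic in footnote vii can be reproduced from the stated inputs. This is a sketch of the published formula, not AMD's internal model; with the release's inputs it yields roughly 280 racks versus the reported ~276, a small gap presumably due to rounding in the published figures:

```python
SECONDS_PER_MONTH = 2.6298e6  # value used in footnote vii

def racks_needed(train_flop=1e25, months=1.0, mfu=0.60, rack_pflops=22.656):
    """Racks required to train a model in the given time, per footnote vii:
    sustained FLOPS needed = total FLOPs / training seconds / MFU,
    then divided by the per-rack FLOPS from the AMD roadmap."""
    flops_rate_needed = train_flop / (months * SECONDS_PER_MONTH) / mfu
    return flops_rate_needed / (rack_pflops * 1e15)

print(racks_needed())  # roughly 280 with the 2024 MI300X rack inputs
```

Raising `rack_pflops` by the projected 2030 rack performance drives the result below one rack, matching the footnote's <1-rack claim.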
Contact:
Brandi Martina
AMD Communications
(512) 705-1720
[email protected]
Liz Stine
AMD Investor Relations
+1 720-652-3965
[email protected]