Senior Software Development Engineer - LLM Inference Framework
Advanced Micro Devices, Inc.
$192,000.00/Yr. - $288,000.00/Yr.
Santa Clara, California, United States
2485 Augustine Drive
Jan 17, 2026
WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover that the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:

As a senior member of the LLM inference framework team, you will be responsible for building and optimizing production-grade single-node and distributed inference runtimes for large language models on AMD GPUs. You will work at the framework and runtime layer, driving performance, scalability, and reliability, and enabling tensor parallelism, pipeline parallelism, expert parallelism (MoE), and single-node or multi-node inference at scale. Your work will directly power customer-facing deployments and benchmarking platforms (e.g., InferenceMax, MLPerf, strategic partners, and cloud providers) and will be upstreamed into open-source inference frameworks such as vLLM and SGLang to make AMD a first-class platform for LLM serving. This role sits at the intersection of inference engines, distributed systems, and GPU runtime and kernel backends.

THE PERSON:

You are a systems-minded ML engineer who thinks in terms of throughput, latency, memory movement, and scheduling, not just model code. You are comfortable reading and modifying large-scale inference frameworks, debugging performance across GPUs and nodes, and collaborating with kernel, compiler, and networking teams to close end-to-end performance gaps. You enjoy working in open source and driving architecture-level improvements in inference platforms.

KEY RESPONSIBILITIES:

Inference Framework & Runtime
Performance & Scalability
GPU & Backend Integration
Open Source & Customer Enablement
PREFERRED EXPERIENCE:

Inference Stack Knowledge
Deep Learning Integration
Kernel & Inference Frameworks
Software Engineering
High-Performance Computing
Compiler & Runtime Optimization
ACADEMIC CREDENTIALS:
#LI-JG1

Benefits offered are described: AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services.

AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

AMD may use Artificial Intelligence to help screen, assess, or select applicants for this position. AMD's "Responsible AI Policy" is available here.
This posting is for an existing vacancy.