GPU Performance Engineer - Paris, France - Adaptive ML

Adaptive ML
Verified company
Paris, France

4 weeks ago

Posted by: Sophie Dupont, beBee Recruiter

Description

About the team:


Adaptive is helping companies build singular generative AI experiences by democratizing the use of reinforcement learning. We are building the foundational technologies, tools, and products required for models to learn directly from users' interactions and for models to self-critique and self-improve from simple written guidelines. Our tightly-knit team was previously involved in the creation of state-of-the-art open-access large language models such as Falcon-180B. We have closed a $20M seed with Index & ICONIQ, and are looking forward to shipping a first version of our platform, Adaptive Engine, in early 2024.


Our Technical Staff is responsible for building the foundational technology powering Adaptive, in line with requests and requirements identified by our Product and Commercial Staff.

We strive to build excellent, robust, and efficient technology, and to conduct at-scale, honest research with high-impact for our roadmap and customers.


About the role:


As a GPU Performance Engineer in our Technical Staff, you will help ensure that our LLM stack (Adaptive Harmony) delivers state-of-the-art performance across a wide variety of settings, from latency-bound regimes where serving requests with sub-second response times is key, to throughput-bound regimes during training and offline inference.


You will help build the foundational technology powering Adaptive by delivering performance improvements directly to our clients as well as to our internal workloads.

Some examples of tasks you will encounter during your work:

  • Profile and iterate on GPU inference kernels in Triton or CUDA, identifying memory bottlenecks and optimizing latency, and decide how to adequately benchmark an inference service;
  • Systematically identify and eliminate synchronization points between the CPU and GPU, enabling asynchronous communication of results from Python workers to our Rust backend;
  • Work with quantization methods to minimize the memory footprint of our models;
  • Modify existing kernel implementations to support requested features, and efficiently implement novel operations entirely from scratch.
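To give a flavor of the quantization work mentioned above, here is a minimal, hypothetical sketch of per-tensor symmetric int8 weight quantization in NumPy. The function names and the specific scheme are illustrative assumptions, not Adaptive's actual method:

```python
import numpy as np

# Per-tensor symmetric int8 quantization: a single scale maps the
# largest-magnitude weight to 127, shrinking storage 4x vs. float32.
def quantize_int8(w: np.ndarray):
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int8(w)

# int8 storage is 4x smaller than float32.
print(w.nbytes // q.nbytes)  # → 4

# Rounding error is bounded by half a quantization step.
err = float(np.max(np.abs(w - dequantize_int8(q, scale))))
print(err <= scale / 2 + 1e-6)  # → True
```

Production schemes (per-channel scales, asymmetric zero-points, 4-bit formats) refine this idea, but the memory/accuracy trade-off shown here is the core of it.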
We are looking for self-driven, intense individuals who value technical excellence, honesty, and growth.


Your responsibilities:

Generally,

  • Contribute to our product roadmap by identifying promising trends that can improve performance;
  • Report clearly on your work to a distributed, collaborative team, with a bias for asynchronous written communication.

On the engineering side,

  • Write high-quality software in CUDA and/or Triton, with a focus on performance and robustness;
  • Profile dedicated GPU kernels in CUDA or Triton, optimizing across latency- and compute-bound regimes for complex workloads.


Your (ideal) background:

  • An M.Sc./Ph.D. in computer science, or demonstrated experience in software engineering, preferably with a focus on GPU optimization;
  • Strong programming skills, preferably with a focus on systems and general-purpose GPU programming;
  • Contributions to relevant open-source projects, such as CUTLASS, Triton, and MLIR;
  • A track record of writing high-performance kernels, preferably with demonstrated ability to reach state-of-the-art performance on well-defined tasks;
  • Passion for the future of generative AI, and eagerness to build foundational technology to help machines deliver more singular experiences.


Benefits:


  • Comprehensive medical (health, dental, and vision) insurance;
  • 401(k) plan with 4% matching (or equivalent);
  • Unlimited PTO — we strongly encourage at least 5 weeks each year;
  • Mental health, wellness, and personal development stipends;
  • Visa sponsorship if you wish to relocate to New York or Paris.
