We’ve spent almost a decade helping AI PhDs and this is the best market we've seen – AI hiring is very much “up and to the right.”
However, because AI professionals still make up a very small percentage of the overall tech industry, there's little published content about the recruiting, interviewing, and negotiation processes for these roles.
Let's go over the AI roles and interview processes at Apple, OpenAI, Bloomberg, and JPMorgan. (If you missed part 1, here’s where we reviewed AI hiring and interviews at Amazon, Google, Meta, and Netflix).
Apple is notoriously secretive about everything from its hiring process to what each team is working on. In fact, most teams at Apple don’t even have visibility into what other teams or orgs are working on. This can be particularly challenging as an employee when you need input or advice beyond your team because it’s hard to identify the right people to reach out to.
Let’s first unpack some of the mysteries of AI research titles at Apple.
One of the most complicated things about applying and interviewing at Apple is sorting through titling. Apple has a very relaxed titling process. For example, research roles can be titled Machine Learning Engineer, Machine Learning Researcher, or Machine Learning Scientist despite being the same role.
And, to make things more complicated, sometimes the same title means different things in terms of your day-to-day work. For example, an ML Engineer might be very research-focused or skew more towards writing code.
Past Rora clients have interviewed for roles titled ML Researcher, but on their official offer the title was ML Engineer, despite the role being a research role. To further complicate things, externally you can use any title you like!
The best way to identify what type of role you will be in is by understanding the day-to-day work you will do.
If you have a research background and the team you'd be joining is made up of all researchers, then "ML Engineer" really means a research role. But if you are an engineer, then the ML Engineer role is likely not research-focused, and in your interviews you should expect more Leetcode, systems design, and ML design questions.
Previous clients of Rora have provided a few recommendations on how to get interviews at Apple. The #1 piece of advice is that applying online and even having a generic referral won’t lead to interviews.
Even candidates from the country’s top research labs have applied cold to Apple and not heard back!
Instead, have an internal Apple employee reach out to the hiring manager directly on your behalf. If you aren’t sure how to do that, reach out to us and we’re happy to give you some tips!
Most teams at Apple will ask you to do a presentation on your research. Almost all teams will allow you to pick one of your papers to present. Make sure to pick a paper that you feel comfortable with and that best represents your knowledge and expertise. In very rare circumstances, we've seen interviewers select the paper themselves from your publication list. If that happens, you will be given advance notice.
Normally, your presentation will be attended by the people who will interview you in subsequent rounds, many of whom are future colleagues/teammates while others will be on tangential research teams.
The presentation will be split into a research presentation and a Q&A. Prior to the interview, ask your recruiter for a list of attendees. The questions you get are often very related to each attendee’s background. For example, if they come from a math background, expect detailed questions on the math and assumptions in your work.
For hardware research roles, it’s less common to be asked to present your research. For software research roles, sometimes it’s mandatory but sometimes it’s optional.
If the presentation is optional, we always recommend doing it. Not only does it show initiative, but your future interviewers will attend your presentation and spend a large portion of your subsequent interviews asking follow-up questions.
Since your research is your area of expertise, this dramatically increases your chances of success!
For your remaining AI research interviews, you can expect up to two coding questions. Normally, one will be Leetcode and one will be ML fundamentals (e.g. PyTorch). As a researcher, you don’t need to flawlessly solve a Leetcode question. Ask for input or guidance if you get stuck. No matter what, don’t give up. We’ve seen researchers fail Leetcode questions and still get offers!
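To make the ML-fundamentals portion concrete, here is a minimal sketch of the kind of question we mean (a hypothetical example, not a prompt we've confirmed Apple uses): implement a single training step for a small classifier in PyTorch. The model shape, optimizer, and variable names are all illustrative.

```python
# Hypothetical "ML fundamentals" interview exercise:
# write one training step for a small classifier in PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    """Run a single forward/backward pass and update the weights."""
    model.train()
    optimizer.zero_grad()          # clear gradients from the previous step
    logits = model(x)              # forward pass
    loss = loss_fn(logits, y)      # compute the loss
    loss.backward()                # backpropagate gradients
    optimizer.step()               # apply the parameter update
    return loss.item()

# Toy batch: 8 examples, 16 features, 3 classes
x = torch.randn(8, 16)
y = torch.randint(0, 3, (8,))
print(train_step(x, y))
```

In our experience, interviewers generally care more about whether you can explain each step (zeroing gradients, the backward pass, the optimizer update) than about perfectly memorized syntax.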
You will also have one interview with a senior member of your org - likely Director-level or higher. Expect to get asked about your presentation, past research, and your future research aspirations, as well as high-level questions about ML system design, how to approach choosing a research problem, etc.
For hardware researchers, you can expect a mixture of high-level problem-solving and skills-based questions (e.g. "There is a defect in the iPad screen; how would you quantify the magnitude of the problem?" or "What is your experience with X skill?").
Prior to your interviews, read the job description closely or ask your recruiter for input on what the team does day-to-day. Then, think through the skills needed and your experiences with each so that you can ace skill-based questions.
Some interviewers will ask math-based questions, but these are rarely theoretical proofs; instead, expect applied mathematics on a real-world problem (e.g. given that the sound waves emitted follow a sigmoidal shape, how would you find...).
There have been a few changes in how publishing is perceived at Apple.
Historically, Apple rarely published papers, and doing so was frowned upon. But, it’s becoming more common for Apple to support AI researchers in publishing their work (if they want to). For hardware researchers, and for any research that is mission-critical to Apple products, it’s unlikely you will have org support to publish.
Finally, a word of caution with Apple.
Apple recruiters stack rank candidates and can treat them quite poorly in the interview process. If you are their #1 choice they will try to rush you into a decision, and if you aren't #1 they will often ghost you until they've confirmed whether their #1 choice is going to join!
For more on negotiating with Apple - check out our complete guide here.
Over the past few years, we've seen Bloomberg make a push into AI, recruiting more and more AI researchers. In fact, in 2023, as things slowed down at many tech companies, Bloomberg continued to recruit ambitiously.
As a financial data company, the AI work at Bloomberg is naturally different from that at FAANG. One key difference is almost all of the work that Bloomberg does is to support its products – meaning your research will likely be tied to a specific business need.
Many of our previous clients who’ve ended up at Bloomberg have found it very satisfying to see their work lead to tangible benefits.
We see a lot of title confusion at Bloomberg. We've had clients get roles as Research Engineers, Applied Researchers, Research Scientists, and even Applied Machine Learning Scientists.
Research Scientist tends to be the most common position for a CS PhD; however, you may also do research and/or publish as an Applied Researcher or Applied ML Scientist. Unlike at FAANG companies, many of the roles are open to both Master's and PhD graduates.
It's challenging to know what the day-to-day work will be like from the title alone, but generally, teams at Bloomberg have up to three focus areas:
To figure out what type of team you are interviewing for:
Each team at Bloomberg has flexibility on how they conduct their interviews.
Almost all interviews will start with a recruiter screen and then a hiring manager screen. Most hiring managers want to assess whether you are a culture fit because Bloomberg is very focused on hiring “nice, thoughtful people.”
Expect questions about your past research, your interest in the role and team, what you want to work on, and the skills you want to develop.
After the hiring manager screen, there are two general formats we’ve seen for AI roles at Bloomberg:
Regardless of the format, you will normally be interviewed by two people from your direct team, one senior leader from your organization, and one senior leader from a sister team.
Additionally, you will have one culture-fit interview; the most important thing in it is to come across as nice, thoughtful, and intellectually curious.
Finally, Bloomberg is generally a very reasonable company. In the past, we've seen them expedite interview processes and even skip interviews for candidates that they are excited about - so don’t hesitate to ask if you’re interested in Bloomberg but have another offer with a tight deadline.
If you’re interested in the application of AI within finance, JP Morgan has also been ramping up hiring for AI Researchers. Much of their research is applied to financial problems and doesn't have a huge overlap with the problems you might work on at a FAANG, but JPM is growing in popularity.
There are two core technical roles at JPM: Engineer and Scientist. All other titles are derivatives of one of these two. For example, a Researcher is the same as a Scientist.
At JPM there are two different science/research pathways and teams.
The first, and most common, takes real-world problems that JPM is trying to solve and builds models to solve them.
Typically, these projects run on an 8-12 month cycle: you spend 1-2 months reading the latest research, 6+ months building the models and solutions, and then optionally 1+ months publishing a paper.
At the end of the cycle, the engineers on your team will help you deploy the models to production. If you love engineering, you can often choose to deploy to production yourself.
It's worth noting that this role doesn't require you to be a production-grade engineer, but you are expected to build your own models, which can include building the data pipelines that feed them.
The second is a purely research-based team that publishes papers. Often there is still a financial element to the research, but these teams are focused on making novel contributions to research.
Unfortunately, there is no good way to know which team you are interviewing for. When you apply, managers will review your resume and choose to interview you. Once you’ve been selected for an interview you can ask your recruiter for input on what the team does and the work they will cover.
If you have a strong preference and were selected for a team that isn’t the right fit, you can ask the manager or recruiter for introductions to other teams.
However, since managers typically select candidates (vs. being introduced to them), you should expect a <50% chance of being introduced to another team.
The first-round interview generally consists of questions on fundamentals: probability, statistics, and ML basics (back-propagation, etc.).
The second round is normally a coding interview. Expect a Leetcode question and potentially some stats and math to be weaved in – especially if your interviewer has a math background (research this ahead of your interview!).
Unfortunately, there is no guarantee what format the coding interview will take; it is entirely interviewer-dependent. We've seen researchers with limited coding backgrounds conduct some of these interviews, and they rarely present Leetcode questions, instead asking general coding questions (e.g. build tic-tac-toe).
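As a rough illustration of that kind of open-ended prompt (a hypothetical sketch, not a question we've confirmed JPM asks), a "build tic-tac-toe" answer might start with a small board class and a win check:

```python
# Hypothetical "general coding question": a minimal tic-tac-toe board
# with a win check, the kind of open-ended prompt seen in place of Leetcode.
class TicTacToe:
    def __init__(self):
        self.board = [[" "] * 3 for _ in range(3)]

    def move(self, row, col, player):
        """Place 'X' or 'O'; returns True if that move wins the game."""
        if self.board[row][col] != " ":
            raise ValueError("square already taken")
        self.board[row][col] = player
        return self._wins(player)

    def _wins(self, player):
        lines = (
            [[(r, c) for c in range(3)] for r in range(3)] +              # rows
            [[(r, c) for r in range(3)] for c in range(3)] +              # columns
            [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]  # diagonals
        )
        return any(all(self.board[r][c] == player for r, c in line) for line in lines)

game = TicTacToe()
game.move(0, 0, "X"); game.move(1, 1, "X")
print(game.move(2, 2, "X"))  # True: X completes the diagonal
```

With questions like this, interviewers are usually looking at how you structure the problem and handle edge cases (occupied squares, draws, input validation) rather than algorithmic tricks.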
The third round is a super day of interviews including a presentation on your research and two more machine-learning interviews (coding, stats, probabilities, fundamentals, etc.).
JP Morgan historically has some banking culture that spreads into engineering (Google this if you’re not sure what we’re referring to 🙂).
However, the research culture has been consistently strong, and past Rora clients have enjoyed their experiences working there. Several clients have gone on to stay at JPM for multiple years in research roles!
One unique thing about OpenAI is that they work hard to treat all roles equally – and not value one role (e.g. Researcher) over another (e.g. SWE).
Over the long term, we believe that this could be a major differentiator for OpenAI and lead to superior products.
OpenAI has an incredible number of applicants and is exceptionally selective. Their team has a bias toward researchers from top institutions and with numerous papers at top venues. Historically, they've also liked researchers from Google Brain / DeepMind.
That said, one of the things we've liked about OpenAI is that when an internal team member refers you, the recruiting team reviews your resume. That sounds like it should always be the default but at many large tech companies (like Microsoft) most referrals go unread.
If you can't get a referral to OpenAI, we've also had clients reach out to recruiters and hiring managers through cold emails and successfully enter the interview process that way.
One approach to joining OpenAI as a researcher is to join the Residency Program. It's a paid 6-month program and after completing the program (if it's a mutual fit) you will be given the chance to convert to a full-time employee.
For engineering roles we generally find the interviews to be very similar to FAANG where you are given medium/hard Leetcode questions and system design questions.
Additionally, you can expect statistics questions and machine learning questions, like how to approach fine-tuning a model. If you are a traditional, backend, or systems SWE, it's unlikely that you will get more than a few high-level machine learning questions.
For research roles, we've seen a large variety of questions asked. For example, some interviews will focus on machine learning fundamentals like backpropagation. Others will focus on math fundamentals in linear algebra or calculus.
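To give a feel for the fundamentals bucket, here's a minimal, hypothetical whiteboard-style exercise (not an actual OpenAI question): derive the gradients of a tiny model by hand and verify them numerically. All names and numbers below are made up for illustration.

```python
# Hypothetical fundamentals exercise: backpropagation by hand for
# y = w*x + b with a squared-error loss, checked against a numerical gradient.
import numpy as np

def loss(w, b, x, t):
    y = w * x + b
    return 0.5 * (y - t) ** 2

def grads(w, b, x, t):
    """Analytic gradients via the chain rule: dL/dw = (y - t) * x, dL/db = (y - t)."""
    y = w * x + b
    return (y - t) * x, (y - t)

# Numerical check with central differences
w, b, x, t, eps = 1.5, -0.3, 2.0, 1.0, 1e-6
dw, db = grads(w, b, x, t)
dw_num = (loss(w + eps, b, x, t) - loss(w - eps, b, x, t)) / (2 * eps)
db_num = (loss(w, b + eps, x, t) - loss(w, b - eps, x, t)) / (2 * eps)
print(np.allclose([dw, db], [dw_num, db_num]))  # True
```

Being able to walk through a chain-rule derivation like this and sanity-check it is usually what "ML fundamentals" questions are probing.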
You will likely have two interviews with questions on your past research and a discussion on the 'bleeding edge' of your field of research and how you think about progressing that edge.
Their interviews are demanding, so we'd recommend scheduling them near the end of your interview schedule, when you feel most prepared.
Every team has a different interview process, but we are excited to see a big focus on diving into research papers, with many teams also focusing on practical questions.
If you are ever unsure of what an interview will be about or what sort of compensation you might see from a role, shoot us a message with the title of the role and company and we’ll send you information (or ask a past client if we don’t have it)!
Brian is the founder and CEO of Rora. He's spent his career in education, first building Leada, a Y Combinator-backed ed-tech startup that was Codecademy for Data Science.
Brian founded Rora in 2018 with a mission to shift power to candidates and employees and has helped hundreds of people negotiate for fairer pay, better roles, and more power at work.
Brian is a graduate of UC Berkeley's Haas School of Business.
Over 1,000 individuals have used Rora to negotiate more than $10M in pay increases at companies like Amazon, Google, and Meta, at hundreds of startups, and at firms such as Vanguard, Cornerstone, BCG, Bain, and McKinsey. Their work has been featured in Forbes, ABC News, The TODAY Show, and theSkimm.
Step 1 is defining the strategy, which often starts with helping you create leverage for your negotiation (e.g. setting up conversations with FAANG recruiters).
In Step 2, we decide on anchor and target numbers, based on our internal verified data sets, with the goal of securing a top-of-band offer.
In Step 3, we create custom scripts for each of your calls, run multiple 1:1 mock negotiations, and join your recruiter calls to guide you via chat.