February 15, 2024

Will Open or Closed-Source AI Inevitably Dominate?

Whether open or closed-source AI dominates is less a moral or ethical question than a practical one. It will likely come down to three things: resources & architectural paradigms, model use cases, and enterprise privacy.

Ilya Sutskever and Sam Altman at OpenAI have argued that it wasn't possible to create an open-source, public-benefit AI company because the resources and capital required to train a great model are so high that only a for-profit company could raise them. The cost to train GPT-4 is rumored to be ~$100M. Raising enough capital for a not-for-profit to keep training successive models is challenging. It's particularly hard because models have only gotten more expensive: the biggest factor spurring their growth in capabilities is 'scale' (i.e. bigger models with more parameters trained on more data). If scaling continues to be the driving component, the cost of future models will keep climbing.
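
To make the scaling economics concrete, here's a rough back-of-envelope sketch. Every input is an illustrative assumption (GPT-3-scale parameters and tokens, A100 spec-sheet throughput, a guessed utilization rate and cloud price), not a disclosed figure; it uses the common ~6 × parameters × tokens approximation for training FLOPs.

```python
# Back-of-envelope LLM training cost. All inputs are illustrative
# assumptions, not disclosed figures from any lab.
params = 175e9            # model parameters (GPT-3 scale, assumed)
tokens = 300e9            # training tokens (assumed)
train_flops = 6 * params * tokens   # common ~6*N*D approximation

gpu_peak_flops = 312e12   # A100 peak BF16 throughput (spec sheet)
utilization = 0.4         # assumed real-world hardware utilization
gpu_hours = train_flops / (gpu_peak_flops * utilization) / 3600

price_per_gpu_hour = 2.0  # assumed cloud price, USD
cost = gpu_hours * price_per_gpu_hour
print(f"~{gpu_hours:,.0f} GPU-hours, ~${cost / 1e6:.1f}M")
# ~701,000 GPU-hours, ~$1.4M at GPT-3 scale. Multiply parameters and
# tokens by 10x each and the cost grows ~100x - which is how training
# runs reach the rumored ~$100M range.
```

The takeaway is that cost scales with the product of parameters and tokens, so each generation of "just scale it" gets roughly an order of magnitude more expensive.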

The resources argument is central to why open source might struggle. But several counterarguments are worth considering.

  1. Private companies care about open source. Yann LeCun, Meta's chief AI scientist, has championed open source: despite high LLM training costs, Meta open-sourced Llama and intends to keep funding and developing new open-source foundation models. From a resourcing perspective, Meta has enough capital to keep pace with closed-source labs for a long time. 
  2. Elad Gil, the famed investor, highlights that open source has historically been funded by private companies. A great example is IBM sponsoring Linux so that there would be an alternative to Windows - a movement that has largely been successful: almost every data center and embedded device runs on Linux. Companies like Amazon could similarly sponsor open-source AI to compete with Google and OpenAI. 
  3. Scaling might not be the long-term solution. Many of GPT-4's gains over GPT-3.5 came from scaling: more training data and more parameters. But we might be close to the limit of how much text is available to scale models on. Future gains may come from different areas, like a new architecture that replaces transformers. 
  4. Efficiencies are found over time. GPUs are expensive today, and they are likely not the optimal "end-state" AI chip. Across the software and hardware stack, there is a lot of low-hanging fruit that could drive down the cost of training AI models, making model training more approachable for the open-source community. 

One way to think about whether closed or open source will become dominant is to think about how the models will be used. General-purpose models like GPT-4 are powerful, but are they the optimal model choice? Since ChatGPT came out, Hugging Face has had over 100K models uploaded. The most frequently used models on Hugging Face are small models with 500K to 5M parameters. These models are likely popular because they are easy to fine-tune and cheap to deploy and use at scale. Large LLMs are needed when the scope of the input is extensive - for example, ChatGPT, where a user can ask practically anything. But in many enterprise use cases, a model might be doing something specific, like extracting text from a document. The model doesn't need a world view to do that, and small models can perform well on these specialized tasks. 
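
As a concrete illustration of that point, here's a minimal sketch of using a small, task-specific model from the Hugging Face Hub for entity extraction. The model choice is an assumption for illustration - dslim/bert-base-NER is a public fine-tuned checkpoint of roughly 110M parameters, larger than the tiniest models cited above but still orders of magnitude smaller and cheaper to run than a frontier LLM.

```python
# Minimal sketch: a small task-specific model instead of a general LLM.
# Requires: pip install transformers torch
from transformers import pipeline

# dslim/bert-base-NER is a public BERT checkpoint fine-tuned for
# named-entity recognition (model choice is illustrative).
ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="simple",
)

doc = "Invoice #4821 from Acme Corp, attn: Jane Doe, due in Toronto."
for entity in ner(doc):
    print(entity["entity_group"], "->", entity["word"])
# e.g. ORG -> Acme Corp, PER -> Jane Doe, LOC -> Toronto
# No general-purpose "world view" is needed for this extraction task.
```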

If small models continue to accelerate in adoption, the open-source community will gain traction. Not only do these small models need far fewer resources to train, but training hundreds or thousands of domain-specific models is challenging for a closed-source company because each often requires specialized knowledge of that domain - and aggregating specialized knowledge generally happens faster in the open-source world. A good example of this is Wikipedia, which beat out closed-source competitors because it could crowdsource information faster and better.

The other core consideration in closed vs. open source is enterprise data and security. Open-source models can be deployed on private infrastructure, allowing enterprises to feel confident that they own the data and the end-to-end process. There is a wave of well-resourced startups building infrastructure to help enterprises securely deploy models. Even if open-source models are worse than closed-source ones, it might not matter: enterprises will generally trade off performance to gain security. Our take is that closed-source providers will develop good ways to segregate data and provide enterprise-grade controls - we wouldn't bet on security being the reason that closed source loses.

Finally, Yann LeCun argues that open source has to win for political reasons. He argues that governments and organizations will ultimately not allow powerful AI to be in the hands of a few companies, and that they will want to shape LLMs to their unique cultures and views. As a result, they will back and support open source (with regulation, funding, etc.). 

Our take is that the amount of resources needed to train the next decade of AI models will heavily dictate whether open or closed source is dominant. But it's hard to predict how many resources will be needed - will scaling continue to be the biggest driver of emergent behavior, or will a new architecture or approach cause the cost to train and serve models to drop dramatically? A decade from now, resource costs likely won't matter for this debate: if open source hasn't won by then, the capability gap will be too large to overcome. 

So who should you join? For now, we advise our AI clients to interview with both closed and open-source companies. That said, the pay gap is generally pretty big, and AI compensation varies significantly. Offers at OpenAI and Google (AI orgs) are routinely above $800K. In the open-source world, Meta consistently pays competitive AI salaries, though not all of its work is open-sourced. Hugging Face has a great open-source culture, but compensation trails. Beyond compensation, Hugging Face and Cohere (a closed-source company) are remote, so depending on where you live they could be the most practical options. 

If you're thinking through which companies to interview with or which offer to take, respond to this email and I'm happy to talk through the decision with you. If needed, I can also put you in touch with past clients who have had to make similar decisions.

Sameer is a Lead Negotiator at Rora, where he helps individuals understand their market value and supports them during the negotiation process. Sameer has done over 400 negotiations and has been negotiating professionally for 2 years.

Previously, Sameer worked in venture capital in North America and at multiple start-ups in the Middle East, where he frequently used financial modeling and operational analytics to negotiate equity with investors.

As a negotiator, Sameer has helped several clients increase their offers by millions of dollars and has helped hundreds of talented candidates advocate for appropriate compensation and seniority.

Over 1000 individuals have used Rora to negotiate more than $10M in pay increases at companies like Amazon, Google, Meta, hundreds of startups, as well as consulting firms such as Vanguard, Cornerstone, BCG, Bain, and McKinsey. Their work has been featured in Forbes, ABC News, The TODAY Show, and theSkimm.

1:1 Salary Negotiation Support

Negotiation strategy

Step 1 is defining the strategy, which often starts with helping you create leverage for your negotiation (e.g. setting up conversations with FAANG recruiters).

Negotiation anchor number

Step 2 is deciding on anchor numbers and target numbers, with the goal of securing a top-of-band offer based on our internal verified data sets.

Negotiation execution plan

Step 3 is creating custom scripts for each of your calls, practicing multiple 1:1 mock negotiations, and joining your recruiter calls to guide you via chat.

