How to choose the right LLM for your use-case

December 11, 2023


Large language models (LLMs) like GPT have proven to be among the most powerful and versatile tools of the past year. They can be used to build a wide range of applications, from chatbots and content generators to coding assistants and question-answering systems, and they offer a wide variety of capabilities and customisations that can significantly optimise industry and personal workflows.

While developing with LLMs is a rapidly evolving practice with ever-changing best practices, the larger question of how to choose the appropriate language model for a use-case has more than one right answer.

Broadly, we can classify the LLMs worth considering for a use-case into two categories:

  1. Proprietary LLMs (offered via APIs)
  2. Open Source LLMs 

Proprietary LLMs

When building initial applications powered by large language models (LLMs), developers can reduce friction by leveraging proprietary pre-trained models through easy-to-use APIs. For instance, OpenAI grants access to capable models such as GPT-3.5 and the recently launched GPT-4 Turbo via simple API calls. This convenient approach circumvents the expertise needed to train or deploy custom LLMs before application development can even begin.
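As a concrete illustration of that low-friction starting point, the sketch below calls OpenAI's chat completions endpoint using only the standard library. The model name, system prompt, and question are illustrative choices, not recommendations; the request only fires if an `OPENAI_API_KEY` environment variable is set.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(question: str) -> dict:
    """Assemble the chat request: a system instruction plus the user's question."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": question},
        ],
    }

def ask(question: str) -> str:
    """Send one chat-completion request and return the model's reply text."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(question)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]

if os.environ.get("OPENAI_API_KEY"):
    print(ask("What is retrieval-augmented generation?"))
```

The point of the sketch is how little surrounding machinery is needed: the application code is a payload and an HTTP call, with no model training or deployment in sight.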

A logical starting point involves experimenting with LLM orchestration frameworks tailored for downstream use cases. Tools such as Langchain and Haystack streamline retrieval-augmented generation, allowing pre-trained LLMs to enhance responses by drawing relevant context from external knowledge sources. With production-ready models and purpose-built orchestration tools readily available, developers can focus prototyping efforts on exploring capabilities rather than wrestling with implementation details.
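The retrieval-augmented generation pattern those frameworks automate can be sketched in a few lines of plain Python. The word-overlap scoring below is a deliberately naive stand-in for the embedding-based vector search that Langchain and Haystack actually provide; the shape of the step (retrieve relevant context, then build a grounded prompt) is the same.

```python
import re

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query and keep the top k.
    (Real frameworks use embeddings and a vector store for this step.)"""
    query_words = set(re.findall(r"\w+", query.lower()))
    return sorted(
        documents,
        key=lambda d: len(query_words & set(re.findall(r"\w+", d.lower()))),
        reverse=True,
    )[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Stuff the retrieved context into a prompt for the LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "LLaMA is an open source model family from Meta AI.",
    "GPT-4 Turbo is available through OpenAI's API.",
    "Retrieval-augmented generation grounds answers in external documents.",
]
print(build_prompt("What is retrieval-augmented generation?", docs))
```

The resulting prompt would then be sent to any pre-trained LLM, which is exactly the hand-off point the orchestration tools manage for you.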

Open Source LLMs

While convenient, proprietary large language models (LLMs) can rack up high usage costs at scale, eroding budget efficiency. Consequently, many developers are transitioning to open source LLMs that grant fuller control over cost, speed, and security.

One popular open source offering is Meta AI's compact yet capable LLaMA family of models. Although it tends to require more explicit prompting, LLaMA delivers responsive performance and strong stability, and hosted access is surprisingly affordable: certain hosting providers, such as AWS with Bedrock, offer LLaMA rates as low as $1 per 1 million generated tokens.
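At the $1 per 1 million generated tokens rate cited above, the monthly bill is simple arithmetic. The request volume and response length below are illustrative assumptions, not measurements.

```python
def monthly_generation_cost(requests_per_day: int,
                            tokens_per_response: int,
                            price_per_million: float = 1.00) -> float:
    """Estimate the monthly cost of generated tokens at a per-million-token rate."""
    tokens_per_month = requests_per_day * 30 * tokens_per_response
    return tokens_per_month / 1_000_000 * price_per_million

# Illustrative workload: 10,000 requests/day at ~500 generated tokens each
# is 150M tokens/month, i.e. $150 at $1 per 1M tokens.
print(f"${monthly_generation_cost(10_000, 500):.2f}/month")
```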

However, operating open source LLMs involves underappreciated intricacies. Cost-efficiently managing LLM resources demands expertise across model optimization, hardware configuration, request batching, and autoscaling. Therefore, although counterintuitive at first, leveraging a proven provider's high-performance endpoints often proves the most practical path to scaling efficiently. The specialized resources and operational experience that third-party LLM hosting services provide must factor into total cost of ownership, in addition to raw usage rates.

Final Words

Cost Considerations

When initially exploring capabilities, relying on convenient proprietary LLMs seems prudent. However, as promising prototypes transition to production applications at scale, usage costs grow rapidly with request volume. What appears affordable during testing quickly becomes prohibitive for end-user viability. Consequently, proprietary LLMs often prove most economical for narrow, intermittent uses rather than widespread, high-frequency integration. For broad adoption across workflows, open source alternatives grant greater potential for cost-efficient scaling despite heightened deployment complexity. Evaluating long-term costs is vital when choosing the optimal language model for your use case.
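That prototype-to-production cost trade-off can be framed as a breakeven calculation: below some monthly token volume, pay-per-use API pricing wins; above it, a flat self-hosting cost wins. All figures below are hypothetical assumptions for illustration only.

```python
def api_cost(tokens_per_month: float, price_per_million: float) -> float:
    """Pay-per-use cost of a proprietary API at a per-million-token rate."""
    return tokens_per_month / 1_000_000 * price_per_million

def breakeven_tokens(hosting_cost_per_month: float,
                     price_per_million: float) -> float:
    """Monthly token volume above which flat-rate self-hosting is cheaper."""
    return hosting_cost_per_month / price_per_million * 1_000_000

# Hypothetical figures: $2 per 1M tokens via an API
# vs. a $1,500/month dedicated GPU server for self-hosting.
threshold = breakeven_tokens(1_500, 2.00)
print(f"Breakeven at {threshold / 1e6:.0f}M tokens/month")
```

Below the threshold, the API's convenience comes essentially free; above it, every additional token widens the gap in favour of self-hosting, which is the dynamic behind the advice above.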

Data Security Considerations

Constructing applications powered by large language models (LLMs) proves both complex and rewarding. The journey demands balancing exploration, optimization, and solution evolution in equal measure. Practitioners must comprehend capabilities, push boundaries, and craft offerings matching customer effectiveness and efficiency needs alike.

An overarching concern persists across all development stages: safeguarding data and intellectual property. Relying solely on public APIs poses potential privacy and customization limitations when handling sensitive data or requiring specialized model tuning.

Securing core IP represents an underappreciated yet vital component of responsible LLM adoption. Even open source models can enable extracting proprietary training data. And leaked datasets, scraped documents, or stolen code amount to far more than bits and bytes; they constitute the lifeblood enabling emerging technology breakthroughs. We all have an obligation to acknowledge and address the interconnected data and model protections vital to pioneering new innovations while preventing misconduct.
