Intro to Waystone
The unified LLM gateway that makes developing AI apps easy.
What is Waystone?
Your application starts with just a single LLM query, which is easy enough to manage. But now you’re getting users and need to control usage to limit costs. Then you add different pricing tiers - how do you track users’ token usage and limit access to the product? Oh, and what if OpenAI goes down? Now you need to set up a fallback to Anthropic. And what if people start abusing the application - how do you monitor for abuse or block their access? And then, and then, and then…
Waystone takes care of all the difficult, boring, extra stuff that you shouldn’t have to build when creating AI applications:
- Routing to different models and providers for different prompts/use-cases
- Model fallbacks (e.g. for when OpenAI is down)
- Monitoring per-user usage (cost, limits, abuse, etc.)
- Per-user model access (e.g. different pricing tiers, regions, domains)
- (COMING SOON) Monitoring and analysing prompts and responses
- (COMING SOON) A/B testing models and prompts
- (COMING SOON) Sanitising responses (blocking, censoring, structure-enforcement, etc.)
- … and so much more
Get started
Start by forwarding your LLM requests through our gateway with a two-line change to your code. Then take advantage of our powerful rules engine and observability tooling (coming soon!) to route requests, set model fallbacks and retries, and more, based on which of your users is making the request.
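To illustrate the shape of that change: with the OpenAI Python SDK it amounts to passing a different `base_url` and your Waystone key instead of a provider key. The sketch below builds the equivalent raw OpenAI-compatible request; the gateway URL, key, and helper name are illustrative, not the real endpoint - use the values from your Waystone dashboard.

```python
import json

# Hypothetical gateway endpoint - substitute the URL from your dashboard.
GATEWAY_URL = "https://gateway.waystone.example/v1/chat/completions"

def build_request(api_key: str, model: str, prompt: str) -> tuple[dict, bytes]:
    """Build headers and JSON body for an OpenAI-compatible
    chat-completion request routed through the gateway."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # your Waystone key, not a provider key
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return headers, body

headers, body = build_request("ws-demo-key", "gpt-4o-mini", "Hello!")
# POSTing (headers, body) to GATEWAY_URL looks exactly like a request to
# the OpenAI endpoint - only the base URL and the key change, which is
# why existing SDK code keeps working once it points at the gateway.
```

With the SDK itself, the same two lines would be the `base_url=` and `api_key=` arguments to the client constructor; everything downstream of the client stays untouched.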
What’s Next?
Quickstart
Set up your Waystone account, integrate with the gateway and create your first rule.
API reference
Check out our REST API reference. Our gateway API is a fully compatible superset of the OpenAI API.
Check out an example use case
See a recipe for managing how users in different pricing tiers can access different models in your application.