mips_avatar 15 hours ago [-]
So it's basically just OpenRouter with Cloudflare Argo networking? I feel like they could do so much more interesting stuff with their Replicate acquisition. Application-specific RL is getting so good, but there's no good way to deploy these models in a scalable way. Even the providers like Fireworks which claim to let you deploy LoRAs in a scalable way can't do it. For now I literally have to host my application's base load on a rack of 3090s in my garage, which seems silly but it saves me $1k a month.
ascorbic 11 minutes ago [-]
The interesting part is that you can use the same API with Workers AI models (hosted at the edge) and proxied models (OpenRouter-style).
Disclaimer: I work at Cloudflare, but not on this.
bryden_cruz 4 hours ago [-]
Running a rack of 3090s in your garage to avoid provider lock-in/costs is the most Hacker News thing. Out of curiosity, what are you doing for uptime/failover? If you are running production traffic to that garage rack, does your app just degrade gracefully if your home internet drops, or do you have a cloud fallback?
handfuloflight 3 hours ago [-]
[flagged]
jonfromsf 8 hours ago [-]
Gilfoyle? Is that you?
mips_avatar 7 hours ago [-]
I think these GPUs were actually used for Bitcoin mining before I bought them
vladgur 14 hours ago [-]
Curious which models you're able to run, and how many 3090s they require at scale?
mips_avatar 13 hours ago [-]
Four 3090s, with NVLink on each pair. Super fast inference on MoE models around 20-36B
whereistejas 21 hours ago [-]
This actually looks very useful. Cloudflare seems to be bringing together a great set of tools. Not to mention, D2 is literally the only sqlite-as-a-service solution out there whose reliability is great and whose free tier limits are generous.
brikym 4 hours ago [-]
There is always one thing that bites you because Cloudflare is different. I just built an AI game (sleuththetruth.com) and the primary reason it's so slow to prompt a new board is actually not because of AI latency. It's because CF workers have a limit of 6 connections (including spawned workers). There is no way to gulp down all the wiki images I want all at once. If I had put the backend on Railway I don't think I'd have this issue.
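One workaround for that limit is to cap in-flight requests yourself instead of firing everything at once. A minimal sketch, assuming the 6-connection constraint described above (`mapWithConcurrency` is a made-up helper, not a Workers API):

```typescript
// Run `fn` over `items` with at most `limit` operations in flight at once,
// preserving input order in the results.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Spawn `limit` loops; each pulls the next unclaimed index until none remain.
  const workers = Array.from({ length: Math.min(limit, items.length) }, async () => {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  });
  await Promise.all(workers);
  return results;
}
```

Inside a Worker this might look like `await mapWithConcurrency(imageUrls, 6, (u) => fetch(u))`, so you never exceed the connection cap but still keep it saturated.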
ncrmro 58 minutes ago [-]
Turso/libsql has been great for PoC projects so far
mikeocool 18 hours ago [-]
Agreed -- except that all of their docs and marketing pitches it for use cases like "per-user, per-tenant or per-entity databases" -- which would be SO great.
But in practice, it's basically impossible to use that way in conjunction with workers, since you have to bind every database you want to use to the worker, and binding a new database requires redeploying the worker.
AgentME 16 hours ago [-]
If you want to dynamically create SQLite databases, then moving to Durable Objects, which are each backed by a SQLite database, seems to be the way to go currently.
eis 4 hours ago [-]
And now you've put everything on the equivalent of a single NodeJS process running on a tiny VM. Next step: spread out over multiple durable objects but that means implementing a sharding logic. Complexity escalates very fast once you leave toy project territory.
eis 17 hours ago [-]
D1 reliability has been bad in our experience. We've had queries hanging on their internal network layer for several seconds, sometimes double digits over extended periods (on the order of weeks). Recently I've seen a few times plain network exceptions - again, these are internal between their worker and the D1 hosts. And many of the hung queries wouldn't even show up under traces in their observability dashboard so unless you have your own timeout detection you wouldn't even know things are not working. It was hard to get someone on their side to take a look and actually acknowledge and understand the problem.
But even without the network issues that have plagued it, I would hesitate to build anything for production on it because it can't even do transactions, and the product manager for D1 openly stated they won't implement them [0]. Your only way to ensure data consistency is to use a Durable Object, which comes with its own costs and tradeoffs.
The basic idea of D1 is great. I just don't trust the implementation.
For a hobby project it's a neat product for sure.
[0] https://github.com/cloudflare/workers-sdk/issues/2733#issuec...
ignoramous 13 hours ago [-]
> And many of the hung queries wouldn't even show up under traces in their observability dashboard
How did you work around this problem? As in, how do you monitor for hung queries and cancel them?
> D1 reliability has been bad in our experience.
What about reads? We use D1 in prod & our traffic pattern may not be similar to yours (our workload is async queue-driven & so retries last in order of weeks), nor have we really observed D1 erroring out for extended periods or frequently.
eis 5 hours ago [-]
> How did you work around this problem? As in, how do you monitor for hung queries and cancel them?
You just wrap your DB queries in your own timeout logic. You can then continue your business logic, but you can't truly cancel the query because, well, the communication layer for it is stuck and you can't kill it via a new connection. Your only choice is to abandon that query. Sometimes we could retry and it would immediately succeed, suggesting that the original query probably hit something like packet loss that wasn't handled properly by CF. That's easy when it's a read, but when you have writes it gets complicated fast and you have to ensure your writes are idempotent. And since they don't support transactions it's even more complex.
Aphyr would have a field day with D1 I'd imagine.
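The wrapper itself can be tiny. A sketch of what that timeout logic might look like (`withTimeout` and the error class are our own names; as noted above, the stuck query itself can't be cancelled, you just stop waiting for it):

```typescript
class QueryTimeoutError extends Error {}

// Race the query against a timer. On timeout we abandon the query; it keeps
// running (or hanging) on the other side of the connection regardless.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new QueryTimeoutError(`query exceeded ${ms}ms`)),
      ms,
    );
  });
  // Clear the timer in both outcomes so it doesn't linger after the race.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

In a Worker this might wrap a D1 call, e.g. `await withTimeout(env.DB.prepare("SELECT 1").all(), 2000)` (binding name illustrative), with a retry on `QueryTimeoutError` only for reads or idempotent writes.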
> What about reads? We use D1 in prod & our traffic pattern may not be similar to yours (our workload is async queue-driven & so retries last in order of weeks), nor have we really observed D1 erroring out for extended periods or frequently.
We have reads and writes which most of the time are latency sensitive (direct user feedback). A user interaction can usually involve 3-5 queries, and they might need to run in sequence. When queries take 500ms+ the system starts to feel sluggish. When they take 2-3s it's very frustrating. The high latencies happened for both reads and writes; you can do a simple "SELECT 123" and it would hang. You could even reproduce that from the Cloudflare dashboard when it's in this degraded state.
From the comments of others who had similar issues, I think it heavily depends on the CF locations or D1 hosts. Most people are probably lucky and don't get one of the faulty D1 servers. But there are a few dozen people who were not so lucky; you can find them complaining on GitHub, on the CF forum etc., but simply not heard. And you can find these complaints going back years.
This long timeframe without fixes to their network stack (networking is CF's bread and butter!), the refusal to implement transactions, the silence in their forum to cries for help, the absurdly low 10GB limit for databases... it just all adds up. We made the decision to not implement any new product on D1 and just continue using proper databases. It's a shame because workers + a close-by read replica could be absolutely great for latency. Paradoxically it was the opposite outcome.
kylehotchkiss 19 hours ago [-]
* D1, but agreed. I wish Cloudflare would offer a built-in D1-R2 backups system though! (Can be done with custom code in a worker, but wish it was first-party)
Normal_gaussian 12 hours ago [-]
yeah this really sucks.
No-downtime snapshots would be the best, but I'd be quite happy with a blocking backup on a set schedule that can be set from the GUI / from the CLI / from a config file. It's a huge PITA having to play 'trust me bro' to clients and their admins with custom workers and backups.
I currently stream a D1 dump -> worker (encrypt w/ key wrapping) -> R2 on a schedule, then have a container spin up once a day and create changesets from the dumps. An external tool pulls the dumps and changesets.
rs_rs_rs_rs_rs 17 hours ago [-]
Yeah but the 10GB limit for D1 is crazy, can you really start building on that? Other than toy projects?
jillesvangurp 3 hours ago [-]
Most website content management systems would never get close to that size. If you need a bigger database, D1 is probably the wrong solution to begin with. 10GB can be millions of records depending on your table structure. But if you are gathering some survey data, running a CMS, etc. you probably should be fine with even just a few MB of data; which is probably the sweet spot for D1.
dpark 16 hours ago [-]
Really depends on what you’re putting in the DB. Cloudflare is clear that these are supposed to be very localized DBs. Per user or tenant.
BoorishBears 17 hours ago [-]
> For those who don’t use Workers, we’ll be releasing REST API support in the coming weeks, so you can access the full model catalog from any environment.
Cloudflare seems to be building for lock-in and I don't love it. I especially don't understand how you build an OpenRouter and only have bindings for your custom runtime at launch.
switz 17 hours ago [-]
Workers runtime is open source and permissively licensed fwiw: https://github.com/cloudflare/workerd
Yes, but that is just a tiny part of the whole CF Workers ecosystem. The other services are not open source, so the lock-in is very, very real. There are no API-compatible alternatives that cover a good chunk of the services. If you build your application around Workers and make use of the integrated services and APIs, there is no way for you to switch to another provider because, well, there is none.
strimoza 57 minutes ago [-]
Interesting timing — I've been using Bunny CDN for video delivery and considering moving parts to Cloudflare. Anyone have experience comparing the two for media streaming specifically?
__jonas 46 minutes ago [-]
Wondering why you're considering this move, I also use Bunny for some embedded videos and am considering fully moving my websites away from Cloudflare's CDN to Bunny
hemangjoshi37a 2 hours ago [-]
The interesting question isn't "can CF run agent inference" — it's what the routing layer needs to look like for multi-turn workflows. Having shipped agent systems to enterprise clients over the last year, the bottleneck is never raw tokens/sec. It's (a) state checkpointing between tool calls, (b) cold-start latency on embedding/rerank models, (c) rate-limit coordination across concurrent agent loops. Does CF expose per-session state, or is it still stateless-per-request? Without that, you end up building the interesting part yourself.
Yes, you can see the same "hosted" ones on there, but when you look at the models endpoint, there are far fewer options under the "workers-ai/*" namespace. Is that intentional?
james2doyle 17 hours ago [-]
To better clarify, I don't see "workers-ai/@cf/google/gemma-4-26b-a4b-it" in the /models endpoint on gateway.ai.cloudflare.com, but it does seem to exist as a hosted model. Same with "workers-ai/@cf/nvidia/nemotron-3-120b-a12b", which I would expect to see
samjs 16 hours ago [-]
Hey James.
Thanks for the feedback, and good catch. Looks like that endpoint is pulling from a slightly out of date data source. The docs/dashboard currently are the best resources for the full catalog, but we'll update that API to match.
TheServitor 3 hours ago [-]
That's so brilliant that it's already a thing called OpenRouter!
RITESH1985 14 hours ago [-]
The inference layer question is getting solved fast. The harder problem coming next is the governance layer — what agents are authorised to do, and proving it later.
Curious if Cloudflare is thinking about this layer too.
halJordan 14 hours ago [-]
You'd think there'd be some sort of automatic system where there's zero trust and each agent would have to provide its RBAC creds to something to get authorization.
claud_ia 2 hours ago [-]
[dead]
datadrivenangel 17 hours ago [-]
Good to see their purchase of Replicate paying off!
13 hours ago [-]
bm-rf 22 hours ago [-]
Not seeing any pricing info on the models[1] page. Wonder how much of a lift this is over paying providers directly. Perhaps Cloudflare is doing this at cost? Also interesting that zero data retention is not on by default, and is not supported with all providers[2]. Finally, would be great if this could return OpenAI AND Anthropic style completions.
[1] https://developers.cloudflare.com/ai/models/
[2] https://developers.cloudflare.com/ai-gateway/features/unifie...
We'll be adding prices to the docs and the model catalog in the dashboard shortly.
In short: currently the pricing matches whatever the provider charges. You can buy unified billing credits [1], which charge a small processing fee.
> Finally, would be great if this could return OpenAI AND Anthropic style completions.
Agreed! This will be coming shortly. Currently we'll match the provider themselves, but we plan to make it possible to specify an API format when using LLMs.
[1]: https://developers.cloudflare.com/ai-gateway/features/unifie...
Thanks, I don't see pricing for foundation models however, such as GPT-5.4
20 hours ago [-]
ashleypeacock 19 hours ago [-]
It’s at-cost pricing I believe, no mark up
messh 15 hours ago [-]
So, is this similar to openrouter?
pizzly 15 hours ago [-]
Yes with less models to choose from unless you bring your own model.
mips_avatar 15 hours ago [-]
with Argo networking
pprotas 22 hours ago [-]
Can't wait for the free tier!
yoavm 22 hours ago [-]
Workers AI has had a free tier since it launched, I think? See the pricing page I linked to above.
indigodaddy 20 hours ago [-]
So looks like the AI Platform free tier will have access to the open models only perhaps? And the 10,000 neuron thing? I don't see any mention of frontier models in the url you linked in the other comment ( https://news.ycombinator.com/item?id=47792538#47793142 )
ramesh31 21 hours ago [-]
Big, could be a viable Bedrock alternative. Probably better uptime than Anthropic or AWS, too.
kinnth 12 hours ago [-]
OpenRouter works perfectly well for me, called from Cloudflare Workers. OpenRouter also has superior cascading and waterfalling if models are offline. Not sure Cloudflare has that working from v1.
I love everything about OpenRouter. So kinda a fanboy.
Jack5500 22 hours ago [-]
Sadly no mention on regions.
pjmlp 18 hours ago [-]
It will work great in Spain! /s
throwpoaster 22 hours ago [-]
Anthropic gonna acquire Cloudflare for stock. Solves their infrastructure problems in one shot.
kylehotchkiss 19 hours ago [-]
No way! Cloudflare will buy anthropic when the economy begins self-correcting. Looking forward to Workers AI getting all those H100s to run more Qwens
neya 21 hours ago [-]
I'm not ready for another rug pull, so please no :( I really enjoy Cloudflare's CDN.
6thbit 22 hours ago [-]
Don't attach to a single AI provider when you can attach to Cloudflare as your single AI gateway provider!
Rant aside, they are greatly positioned network-wise to offer this service. I wonder about their pricing and potential markup on top of token usage?
I presume they won't let you “manage all your AI spend in one place” for free.
koolba 22 hours ago [-]
> I presume they won't let you “manage all your AI spend in one place” for free.
Of course they will. In return they get to control who they're routing requests to. I wouldn't be surprised if this turns into the LLM equivalent of “paying for order flow”.
6thbit 22 hours ago [-]
I got shivers thinking about a future of AI dynamic pricing and an automatic gateway choosing the cheapest provider available
nhecker 21 hours ago [-]
Openrouter already does this, unless I've misunderstood the premise.
6thbit 14 hours ago [-]
They can route between models, but you pay the standard rate for whichever model is selected (plus a 5% fee). Afaik all current model providers have fixed prices per token which don't vary depending on, say, demand or hardware availability.
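If prices did float, the gateway's job would become a per-request auction. A toy sketch of picking the cheapest live quote (provider names and rates are invented for illustration):

```typescript
// A provider's current quote, in USD per million input/output tokens.
interface Quote {
  provider: string;
  usdPerMTokIn: number;
  usdPerMTokOut: number;
}

// Pick the provider with the lowest total cost for an expected request shape.
function cheapest(quotes: Quote[], inTok: number, outTok: number): Quote {
  if (quotes.length === 0) throw new Error("no providers available");
  const cost = (q: Quote) =>
    (q.usdPerMTokIn * inTok + q.usdPerMTokOut * outTok) / 1_000_000;
  return quotes.reduce((best, q) => (cost(q) < cost(best) ? q : best));
}
```

With fixed per-token prices this reduces to a static routing table; the interesting (and shiver-inducing) case is when the quotes change hour to hour.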
nubg 18 hours ago [-]
Shivers? As in it frightens you? I believe there is no way around tokens being priced like gasoline at the gas station - it changes every hour. Any other system means you are either over- or underspending.
wahnfrieden 22 hours ago [-]
No spending limit / no ability to set a budget, unlike Google or OpenAI. Be prepared for an eye-watering invoice if you have a bug or get hacked.
edit: Why downvote? It's correct, and it's a risk that competitors handle better, including for their CDN products (compared to Bunny CDN). Maybe you are just used to the risk and haven't felt the burn yourself yet. Or you have the mistaken notion that there is no price at which temporary downtime is worthwhile to avoid paying.
rl3 11 hours ago [-]
>Be prepared for an eye-watering invoice if you have a bug or get hacked.
Speaking of: https://news.ycombinator.com/item?id=47787042
I really hope that person gets a resolution from Cloudflare that doesn't financially ruin them.
james2doyle 17 hours ago [-]
I just added some credits to my account. You can set a daily $ spend limit as well as add credits without auto-refill
ernsheong 21 hours ago [-]
What is Cloudflare trying to be? Everything everywhere all at once?
charcircuit 21 hours ago [-]
They want to be an edge networking platform. Anything that would be useful doing on an edge node close to the end user is in scope.
21 hours ago [-]
PUSH_AX 20 hours ago [-]
A CSP.
reconnecting 14 hours ago [-]
`Unified inference layer` is a polite way to say: "proxy that knows every prompt and every response".
mbtrucks 21 hours ago [-]
Can I set a hard cost limit? Otherwise I'm not interested. Don't be like Google's mess of billing.
james2doyle 17 hours ago [-]
Seems like it. I just added some credits to my account. You can set a daily $ spend limit as well as add credits without auto-refill
mbtrucks 21 hours ago [-]
Can I set a hard cost limit per day, with no drift? Otherwise I'm not interested.
tln 17 hours ago [-]
I think you should look at OpenRouter. It has budget controls
stult 21 hours ago [-]
A few weeks ago, I ran into a bug with Cloudflare's DNS server not detecting when I updated the records with the registrar. The bug was 100% on their end, entirely unsolvable by me, yet they have made it literally impossible to contact them to file a bug report. Their standard user help workflow dead-ended by forcing me to talk to their absolutely useless AI help chatbot, which proceeded to regurgitate their FAQ (inaccurately, uselessly), then referred me to a phone number that was disconnected/not in service, then gave me an email address that auto-replied it was no longer in use, then just looped back to the FAQ. There was no way for me to even send them an email to let them know they have a major bug.
I immediately pulled all my sites off of Cloudflare and I will never use that godawful nightmare of a company for anything ever again. If they can't even host a generic help bot without screwing it up that badly, why would I ever use them for anything at all, never mind an AI platform?
allthetime 17 hours ago [-]
What was the bug? I configure DNS for both public and private networks on cloudflare semi-frequently and always see changes in minutes or less.