by a_tartaruga on 3/1/25, 5:38 PM with 106 comments
by Kapura on 3/1/25, 6:11 PM
Concentrated capital is truly a wild thing.
by Chance-Device on 3/1/25, 6:17 PM
Orion does seem to have been a failure, but I also find it a bit weird that they seemingly decided to release the full model rather than a distillation, which is the pattern we now usually see with foundation models.
So, did they simply decide that it wasn’t worth the effort and dedicate the compute to other, better things? Were they pushed by sama to release it anyway, to look like they were still making progress while developing something really next gen?
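For readers who haven't followed the pattern the parent describes: distillation means training a small "student" model to imitate a big "teacher" model's output distribution, so the cheap model is what ships while the expensive one stays internal. A minimal sketch of the standard distillation loss (Hinton et al., 2015), with toy tensors standing in for real models; this is illustrative, not OpenAI's actual pipeline:

    # Knowledge-distillation loss: the student matches the teacher's
    # softened output distribution. Toy example, not a real training setup.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Softening with a temperature > 1 exposes the teacher's relative
        # probabilities for wrong answers ("dark knowledge"); the T^2
        # factor keeps gradient magnitudes comparable across temperatures.
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        log_student = F.log_softmax(student_logits / temperature, dim=-1)
        return F.kl_div(log_student, soft_teacher,
                        reduction="batchmean") * temperature ** 2

    # Toy usage: a batch of 4 examples over a 10-token vocabulary.
    student_logits = torch.randn(4, 10, requires_grad=True)
    teacher_logits = torch.randn(4, 10)
    loss = distillation_loss(student_logits, teacher_logits)
    loss.backward()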
by RDaneel0livaw on 3/1/25, 6:03 PM
by redwood on 3/1/25, 6:04 PM
This layer itself will inevitably see its cost per unit of use come down on a long road toward commoditization. It will probably get better and more sophisticated, but again, the value will accrue primarily up the stack, not to a company like this. That's not to say they couldn't be a great company... even Google is a great company that has enabled countless other companies to bloom. The myopic way people look to these one-size-fits-all companies is just so disconnected from how our economy works.
by cs702 on 3/1/25, 6:32 PM
But here, I think he's right about business matters. The massive investment in computing capacity we've seen in recent years, by OpenAI and others, can generate positive returns only if the technology continues to improve rapidly enough to overcome its limitations and failure modes[a] in the short run.
If the rate of improvement has slowed down, even temporarily, OpenAI and others like Anthropic are likely to face financial difficulties.
---
[a] In the words of Geoff Hinton: https://www.youtube.com/watch?v=d7ltNiRrDHQ
---
Note: At the moment, the OP is flagged. To the mods: It shouldn't be, because it conforms to the HN guidelines.
by qntmfred on 3/1/25, 5:58 PM
by tananaev on 3/1/25, 6:02 PM
by Willingham on 3/1/25, 6:04 PM
by fancyfredbot on 3/1/25, 6:09 PM
The compute for training is beginning to look like a poor investment: it depreciates fast and, in this case, isn't producing value. That's a seriously big investment to make if it's not productive, but since a lot of it actually belongs to Azure, they could cut back here fast if they had to. I hope they won't, because in the hands of good researchers there's still a real possibility they'll use the compute to find some technical innovation that gives them a bigger edge.
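To make "depreciating fast" concrete, a back-of-the-envelope sketch; the cluster cost and useful life below are assumed round numbers, not OpenAI's or Azure's actual figures:

    # Straight-line depreciation of a training cluster. Illustrative
    # numbers only; both inputs are assumptions.
    cluster_cost = 10_000_000_000    # assumed: $10B of accelerators
    useful_life_years = 4            # assumed: obsolete in ~4 years
    annual_depreciation = cluster_cost / useful_life_years
    print(f"Annual depreciation: ${annual_depreciation:,.0f}")
    # -> Annual depreciation: $2,500,000,000
    # The models trained on that hardware have to generate roughly this
    # much value per year before the compute even breaks even.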
by paulpauper on 3/1/25, 6:21 PM
by armchairhacker on 3/1/25, 6:28 PM
Also, even though LLMs can generate text much faster than humans, we may be internally thinking much faster. An adult human brain has on the order of 100 billion neurons and 100 trillion synapses, and they have been working every moment, for decades.
This is what separates human reasoning from LLM reasoning, and it can’t be solved by scaling the latter to anything feasible.
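For a sense of scale, comparing the parent's figures against a frontier LLM; the parameter count below is a circulated rumor for a GPT-4-class model, used here only as an assumption:

    # Rough scale comparison; all figures approximate.
    synapses = 100e12      # ~100 trillion synapses (parent's figure)
    llm_params = 1.8e12    # assumed: rumored GPT-4-class parameter count
    print(f"Synapses per parameter: {synapses / llm_params:.0f}x")
    # -> Synapses per parameter: 56x
    # And those synapses train continuously for decades, not in one
    # pass over a fixed corpus.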
I wish AI companies would take a decent chunk of their billions and split it into 1000+ million-dollar projects, each trying a different idea to overcome these issues and others (like emotion and alignment). Many of those projects would certainly fail, but some might produce breakthroughs. Meanwhile, spending the entire billion on scaling compute has failed and will keep failing: everyone else does the same thing, so the resulting model has no practical advantage and earns less than it cost to train before it's made obsolete by other people's breakthroughs.
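The portfolio argument can be made concrete with a hedged expected-value sketch; the 1% per-project success rate below is invented purely for illustration:

    # Many small bets vs. one big bet: probability of at least one
    # breakthrough. The success probability is an assumption.
    budget = 1_000_000_000            # $1B total, per the parent comment
    bet_size = 1_000_000              # $1M per project
    n_projects = budget // bet_size   # 1000 projects
    p_success = 0.01                  # assumed: 1% chance each idea pans out

    p_at_least_one = 1 - (1 - p_success) ** n_projects
    print(f"{n_projects} projects -> P(>=1 breakthrough) = {p_at_least_one:.5f}")
    # -> 1000 projects -> P(>=1 breakthrough) = 0.99996
    # One giant training run, by contrast, is a single correlated bet
    # that every competitor is also making.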
by returnInfinity on 3/1/25, 9:59 PM
Disclosure: I am neither bearish nor a mega bull on LLMs. LLMs are useful in some cases.
by startupsfail on 3/1/25, 6:09 PM
by adamgordonbell on 3/1/25, 6:19 PM
A smart enough AI would summarize each of his posts as "I still hate the current AI boom".
Is there a term for such writers? He's certainly consistently on message.
by eliothmonroy on 3/1/25, 6:44 PM
by 1vuio0pswjnm7 on 3/2/25, 3:14 AM
D'oh!
by stuartjohnson12 on 3/1/25, 5:58 PM
by 1970-01-01 on 3/1/25, 6:36 PM
by agnishom on 3/1/25, 7:29 PM
by computerex on 3/1/25, 6:22 PM