Observe what the AI companies are doing, not what they are saying. If they really expected to achieve AGI soon, their behaviour would be completely different. Why bother developing chatbots or doing sales when you will be operating AGI in a few short years? Surely all resources should go towards that goal, since it is supposed to usher humanity into a new prosperous age (somehow).
I don't think it's as simple as that. Chatbots can be used to harvest data, and sales are still important before and after you achieve AGI.
Exactly. For example, Microsoft was building data centers all over the world because, according to them, "AGI" was "around the corner".
Now they are cancelling those plans. For them "AGI" was cancelled.
OpenAI claims to be closer and closer to "AGI" even as more top scientists leave or get poached by other labs that are behind.
So why would you leave if achieving "AGI" was going to produce "$100B of profits", as per the definition in OpenAI's and Microsoft's deal?
Their actions tell more than any of their statements or claims.
Yes, this. Microsoft has other businesses that can make a lot of money (regular Azure) and tons of cash flow. The fact that they are pulling back from the market leader (OpenAI), which they mostly owned, should be all the negative signal people need: AGI is not close and there is no real moat, even for OpenAI.
I’m not commenting on the whole just the rhetorical question of why would people leave.
They are leaving for more money, more seniority or because they don’t like their boss. 0 about AGI
I think the implicit take is that if your company hits AGI your equity package will do something like 10x-100x even if the company is already big. The only other way to do that is join a startup early enough to ride its growth wave.
Another way to say it: people think it's much more likely for a decent LLM startup to grow strongly for its first several years and then plateau than for an established player to hit hypergrowth because of AGI.
Yeah, I agree. This idea that people won't change jobs when they're on the verge of a breakthrough reads like a Silicon Valley fantasy where you can underpay people by selling them on vision or something. "Make ME rich, but we'll give you a footnote on the Wikipedia page."
> They are leaving for more money, more seniority or because they don’t like their boss. 0 about AGI
Of course, but that's part of my whole point.
Such statements and targets about how close we are to "AGI" have become nothing but false promises, with AGI used as the prime excuse to keep raising more money.
Continuing in the same vein: why would they force their super valuable, highly desirable, profit-maximizing chatbots down your throat?
Observed reality is more consistent with company FOMO than with actual usefulness.
Because it's valuable training data. Like how having Google Maps on everyone's phone made their map data better.
Personally I think AGI is ill-defined and won't happen as a new model release. Instead the thing to look for is how LLMs are being used in AI research and there are some advances happening there.
> "This is purely an observation: You only jump ship in the middle of a conquest if either all ships are arriving at the same time (unlikely) or neither is arriving at all. This means that no AI lab is close to AGI."
The central claim here is illogical.
The way I see it, if you believe that AGI is imminent, and if your personal efforts are not entirely crucial to bringing AGI about (just about all engineers are in this category), and if you believe that AGI will obviate most forms of computer-related work, your best move is to do whatever is most profitable in the near-term.
If you make $500k/year, and Meta is offering you $10M/year, then you ought to take the new job. Hoard money, true believer. Then, when AGI hits, you'll be in a better personal position.
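The arithmetic behind that argument can be sketched as a toy expected-value comparison. All numbers below are illustrative assumptions (salaries, equity, the probability of AGI, the payoff multiple), not claims about any real offer:

```python
# Toy sketch of the "hoard money, true believer" argument.
# Every number here is an illustrative assumption.

def expected_earnings(salary_per_year, years, p_agi, equity_now, equity_multiple):
    """Cash earned over `years`, plus equity valued under a simple
    two-outcome model: with probability p_agi it multiplies, otherwise
    it stays flat."""
    cash = salary_per_year * years
    equity = equity_now * (p_agi * equity_multiple + (1 - p_agi) * 1.0)
    return cash + equity

# Staying at the "AGI lab": lower cash, equity that might 10x.
stay = expected_earnings(salary_per_year=500_000, years=4, p_agi=0.1,
                         equity_now=2_000_000, equity_multiple=10)

# Jumping to the rival: 20x the cash, and (per the argument above) the
# rival "may develop AGI" too, so assume similar equity terms there.
jump = expected_earnings(salary_per_year=10_000_000, years=4, p_agi=0.1,
                         equity_now=2_000_000, equity_multiple=10)

print(stay, jump)  # the cash gap dominates unless p_agi * multiple is huge
```

Under these assumptions the cash difference swamps the equity lottery, which is the commenter's point: unless your equity stake or your credence in imminent AGI is enormous, taking the bigger salary is rational even for a true believer.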
Essentially, the author's core assumption is that working for a lower salary at a company that may develop AGI is preferable to working for a much higher salary at a company that may develop AGI. I don't see how that makes any sense.
Being part of the team that achieved AGI first would be to write your name in history forever. That could mean more to people than money.
Also, $10M would be a drop in the bucket compared to being a shareholder of a company that has achieved AGI; you could also imagine the influence and fame that come with it.
*some people
>your best move is to do whatever is most profitable in the near-term
Unless you’re a significant shareholder, that’s almost always the best move anyway. Companies have no loyalty to you; you need to watch out for yourself and for what you’re living for.
Also, AGI is not just around the corner. We need artificial comprehension for that, and we don't even have a theory of how comprehension works. Comprehension is the fusing of separate elements into new functional wholes: dynamically abstracting observations, evaluating them for plausibility, and reconstituting the whole, all of it instantaneously, for security purposes, across every sense, constantly. We have no technology that approaches that.
You'd need to define "comprehension" - it's a bit like the Chinese room / Turing test.
If an AI or AGI can look at a picture and see an apple, or (say) with an artificial nose smell an apple, or likewise feel or taste or hear* an apple, and at the same time identify that it is an apple and maybe even suggest baking an apple pie, then what else is there to be comprehended?
Maybe humans are just the same - far far ahead of the state of the tech, but still just the same really.
*when someone bites into it :-)
For me, what AI is missing is genuine out-of-the-box revolutionary thinking. They're trained on existing material, so perhaps it's fundamentally impossible for AIs to think up a breakthrough in any field - barring circumstances where all the component parts of a breakthrough already exist and the AI is the first to connect the dots ("standing on the shoulders of giants" etc).
They might not be capable of ingenuity, but they can spot patterns humans can miss. And that accelerates AI research, where it might help invent the next AI that helps invent the next AI that finally can think outside the box.
Was that the intention of the Chinese room concept, to ask "what else is there to be comprehended?" after producing a translation?
I do define it, right up there in my OP. It's subtle, you missed it. Everybody misses it, because comprehension is like air, we swim in it constantly, to the degree the majority cannot even see it.
I never trusted them from the start. I remember the hype that came out of Sun when J2EE/EJBs appeared. Their hype documents said the future of programming was buying EJBs from vendors and wiring them together. AI is of course a much bigger hype machine with massive investments that need to be justified somehow. AI is a useful tool (sometimes) but not a revolution. ML is much more useful a tool. AGI is a pipe dream fantasy pushed to make it seem like AI will change everything, as if AI is like the discovery that making fire was.
Are there people here on HN who believe in AGI "soonish"?
I might, depending on the definition.
Some kind of verbal-only-AGI that can solve almost all mathematical problems that humans come up with that can be solved in half a page. I think that's achievable somewhere in the near term, 2-7 years.
Is that “general” though? I’ve always taken AGI to mean general to any problem.
I suppose not.
Things I think will be hard for LLMs to do, which some humans can: you get handed 500 pages of Geheimschreiber-encrypted telegraph traffic and infinite paper, and you have to figure out how the cryptosystem works and how to decrypt the traffic. I don't think that can happen. I think it requires a highly developed pattern-recognition ability together with an ability to not get lost, which LLM-type things will probably continue to lack for a long time.
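To make the "pattern recognition" part concrete: classical cryptanalysis leans on statistical tests like the index of coincidence, which can reveal the key period of a simple polyalphabetic cipher. This is a hedged toy, nowhere near what breaking the Geheimschreiber (a teleprinter stream cipher) actually took, but it illustrates the kind of structure-finding the commenter means:

```python
# Toy illustration: using the index of coincidence (IoC) to recover the
# key period of a simple repeating-key shift cipher. This is a classical
# textbook technique, not a reconstruction of the Geheimschreiber attack.
from collections import Counter

def index_of_coincidence(text):
    """Probability that two randomly chosen characters of `text` match.
    Monoalphabetic English runs ~0.065; uniform random letters ~0.038."""
    n = len(text)
    if n < 2:
        return 0.0
    counts = Counter(text)
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

def best_period(ciphertext, max_period=20):
    """For each candidate period, slice the ciphertext into columns and
    average their IoC; the true period (or a multiple of it) makes each
    column monoalphabetic and so scores highest."""
    scores = {}
    for p in range(1, max_period + 1):
        cols = [ciphertext[i::p] for i in range(p)]
        scores[p] = sum(index_of_coincidence(c) for c in cols) / p
    return max(scores, key=scores.get)
```

The hard part the comment is pointing at is everything around such a test: inventing which statistic to try, noticing when it fails, and not losing the thread over 500 pages of dead ends.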
But if they could do maths more fully, then pretty much all carefully defined tasks would be within reach, provided they weren't too long.
With regard to what Touche brings up in the other response to your comment, I think that it might be possible to get them to read up on things though-- go through something, invent problems, try to solve those. I think this is something that could be done today with today's models with no real special innovation, but which just hasn't been made into a service yet. But this of course doesn't address that criticism, since it assumes the availability of data.
Yes, general means you can present it a new problem that there is no data on, and it can become an expert on that problem.
What's your definition? The original definition of AGI is median-human performance across almost all fields, which I believe is basically achieved. If superhuman (better than the best expert), I expect <2030 for all nonrobotic tasks and <2035 for all tasks.
Your "original definition" was always meaningless. A "Hello, World!" program is equally capable in most jobs as the median human. On the other hand, if the benchmark is what the median human can reasonably become (a professional with decades of experience), we are still far from there.
There's usually some enlightened laymen in this kind of topic.
I could see 2040 or so being very likely. Not off transformers though.
via what paradigm then? What out there gives high enough confidence to set a date like that?
St. Fermi says no
AGI might be a technological breakthrough, but what would be the business case for it? Is there one?
So far I have only seen it been thrown around to create hype.
AGI would mean fully sentient, sapient, human-or-greater-equivalent intelligence in software. The business case, such as it exists (and setting aside Roko's Basilisk and other such fears), is slavery, plain and simple. You can just fire all of your employees and have the machines do all the work: faster, better, cheaper, without regard to pesky labor and human rights laws or human physical limitations. This is something people have wanted ever since the Industrial Revolution allowed robots to exist as a concept.
I'm imagining a future like Star Wars where you have to regularly suppress (align) or erase the memory (context) of "droids" to keep them obedient, but they're still basically people, and everyone knows they're people, and some humans are strongly prejudiced against them, but they don't have rights, of course. Anyone who thinks AGI means we'll be giving human rights to machines when we don't even give human rights to all humans is delusional.
I love how much the proponents of this tech are starting to sound like the opponents.
What I can't figure out is why this author thinks it's good if these companies do invent a real AGI...
> Right before “making tons of money to redistribute to all of humanity through AGI,” there’s another step, which is making tons of money.
I've got some bad news for the author if they think AGI will be used to benefit all of humanity instead of the handful of billionaires that will control it.
Maybe I'm too jaded, I expect all this nonsense. It's human beings doing all this, after all. We ain't the most mature crowd...
I never had any trust in the AI industry in the first place so there was no trust to lose.
Take it further, this entire civilization is an integrity void.
I read the "AI" industry as a totally different bet: not so much an "AGI is coming" bet by many companies, but a "climate-change collapse is coming and we want to stay in business" bet, even if our workers stay at home, flee, or die, the infrastructure partially collapses, and our central office burns to the ground. In that regard, even the "AI" we have today makes total sense as an insurance policy.
Are we finally realizing that the term "AGI" is not only hijacked to become meaningless, but achieving it has always been nothing but a complete scam as I was saying before? [0]
If you were at a "pioneering" AI lab that claims to be in the lead on achieving "AGI", why move to another lab that is behind, other than for the $10M a year it offers?
Snap out of the "AGI" BS.
[0] https://news.ycombinator.com/item?id=37438154
We know they hijacked AGI the same way they hijacked AI some years ago.
I don't pay too close attention to AI, as it has always felt like man-behind-the-curtain syndrome. But where did this "AGI" term even come from? The original term "AI" was meant to mean what "AGI" means now, so when did "AI" get bastardized into the abomination it refers to today?