Unless there is a significant increase in the effective context window of LLMs, pursuing the goal of having agents work on complex goals is not going to go well. All the tricks and hacks that try to work around this problem are not going to fundamentally change that.
LLM agents will lose track of what they are trying to do after a couple of attempts. That's one thing that differentiates a human PhD: while not fast or always creative, they have a much longer attention span and working memory.
https://github.com/MiniMax-AI/MiniMax-01 is an open model that claims a 4-million-token context. Note, however, that a longer context makes every call more expensive, since you pay for every token. Still, it's true that OpenAI seriously needs a better solution here.
I think it's time to partition the context into L1, L2, and L3 tiers, much like CPU caches. L1 is the current attention context, with a quadratic memory requirement. L2 uses sub-quadratic to linear mechanisms, such as those in Gemini and MiniMax-01. L3 is based on document and chunk embeddings, with roughly logarithmic lookup cost. LLMs don't use this approach today, but I think it might make sense.
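To make the idea concrete, here is a toy Python sketch of how such a tiered context could behave. Everything in it is a hypothetical stand-in (the every-4th-token "summarizer", the bag-of-words "embedding", the eviction sizes), not how any real model works:

    # Toy sketch of the L1/L2/L3 tiering above. All names and numbers
    # here are hypothetical stand-ins, not any real model's machinery.
    from collections import Counter
    from dataclasses import dataclass, field
    import math

    def summarize(tokens):
        # Stand-in L2 compressor: keep every 4th token. A real system
        # would use a linear-attention state or a learned summary.
        return tokens[::4]

    def embed(tokens):
        # Stand-in L3 embedding: a bag-of-words count vector.
        return Counter(tokens)

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    @dataclass
    class TieredContext:
        l1_capacity: int = 64   # quadratic-attention window (tiny, for the demo)
        l2_capacity: int = 2    # number of compressed L2 spans kept hot
        l1: list = field(default_factory=list)  # raw recent tokens
        l2: list = field(default_factory=list)  # summarized spans
        l3: list = field(default_factory=list)  # (embedding, span) pairs

        def append(self, tokens):
            """Add tokens; demote overflow down the hierarchy."""
            self.l1.extend(tokens)
            while len(self.l1) > self.l1_capacity:
                evicted, self.l1 = self.l1[:32], self.l1[32:]
                self.l2.append(summarize(evicted))      # L1 -> L2
            while len(self.l2) > self.l2_capacity:
                span = self.l2.pop(0)
                self.l3.append((embed(span), span))     # L2 -> L3

        def gather(self, query, k=2):
            """Working context: top-k L3 spans by similarity, then L2, then L1."""
            q = embed(query.split())
            hits = sorted(self.l3, key=lambda e: cosine(q, e[0]), reverse=True)[:k]
            out = [tok for _, span in hits for tok in span]
            out += [tok for span in self.l2 for tok in span]
            return out + self.l1

    ctx = TieredContext()
    for i in range(40):
        ctx.append(f"step {i} result {i * i}".split())
    print(" ".join(ctx.gather("result")))

The point of the cache analogy is that most tokens are cold most of the time, so only L1 needs the expensive quadratic treatment; everything older gets progressively cheaper, lossier representations.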
I'm hoping this forces universities to change their archaic PhD system and adopt something similar to the European one. Then people who have a real passion for a field, and want a normal job, could do their research without working for labs as cheap labor.
No, but you'll get PhD-level ReactJS Hello World templates while people are replaced by those agents. My vision of the future is bleak, but I have yet to be proven wrong.
Especially the way companies make LLMs appear more human. It will be the same with "PhD level": utterly meaningless in terms of evolution or cognitive performance. Fast, efficient scripts that can take any request and data and give a precise result are all these super-agents are good at.
And that's a lot. And we are happy.
The rest is just pretty standard PR & content media for another 10 years.
I'm easily in the 95th-99th percentile of people I know, and probably of most people in general, in most respects: net worth, IQ, height, athletic achievements...
A good 10-20 years ago I anticipated these AI moments, even some of the specifics, like our use of ANNs (back when everyone thought they were a failure), and yet I failed to capitalize on it.
It's hard to describe just how defeated and irrelevant these news articles make me feel.
I don't say this to brag or anything; I keep this account pseudo-anonymous. Some follow-up points and questions:
- What hope do the normies have?
- Does anyone expect our representatives to do anything?
- How does this not end badly?
The thing will lose context before it has finished reading...
We are just as disappointed in your performance as you are. Nice height, though. And the size of your hands... brrr.
And you did not mention any actual achievement, IQy, which was enlightening enough.