For the last year I’ve been working full time on an integration layer between an end-user service and a few realtime LLM models that are part of that service.
The amount of code needed to achieve stability/predictability and handle all kinds of edge cases is huge, and I have yet to see a single use case where we can rely on an LLM’s answer 100% of the time when it concerns any fixed state machine implementation or the like.
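To make that concrete, here is a minimal sketch of the kind of guard layer I mean (hypothetical state names, not our actual code): the LLM may *propose* a transition, but a deterministic transition table decides whether it is legal.

```python
# Illustrative only: the model suggests a transition; the explicit
# transition table decides whether it is actually applied.

ALLOWED_TRANSITIONS = {
    "idle": {"listening"},
    "listening": {"thinking", "idle"},
    "thinking": {"speaking", "idle"},
    "speaking": {"idle"},
}

def apply_llm_transition(current_state: str, proposed_state: str) -> str:
    """Accept the LLM's proposed state only if the state machine allows it."""
    if proposed_state in ALLOWED_TRANSITIONS.get(current_state, set()):
        return proposed_state
    # Edge case: the model proposed an illegal or unknown state.
    # Fall back deterministically instead of trusting the answer.
    return current_state
```

In practice, almost all of the stability code ends up in paths like that fallback branch.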
Yes, these models are really good (just amazing!) at what classical CS approaches can’t do around media and text processing, but they have such a hard time playing by specific, strict rules…
So, the focus of CS will change, but it’s not going away… it’s more like we will end up with a better abstraction layer - in the 50s-60s it was all pure machine code, then assembly, then C and friends, then OOP - and here we will probably figure out an even more elegant way to express an unambiguous algorithm in a very succinct, readable, and maintainable form, and let LLM-based compilers convert it deterministically into some C++ code… (and those compilers may still end up containing tons of classical code for speed/reliability/etc.)
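A rough sketch of what that split could look like (purely speculative; `llm_generate` is a hypothetical stand-in for a model call): the model proposes code, and the “tons of classical code” is the deterministic harness that decides whether to accept it.

```python
import subprocess
import tempfile

def llm_generate(spec: str) -> str:
    """Stub standing in for a model call -- hypothetical, not a real API."""
    raise NotImplementedError("plug in a real model client here")

def llm_compile(spec: str, test_cmd: list[str], max_attempts: int = 5) -> str:
    """Accept model-written C++ only if a deterministic test suite passes;
    the reliability lives in this classical harness, not in the model."""
    for _ in range(max_attempts):
        source = llm_generate(spec)  # the only non-deterministic step
        with tempfile.NamedTemporaryFile(suffix=".cpp", delete=False) as f:
            f.write(source.encode())
            candidate = f.name
        result = subprocess.run(test_cmd + [candidate], capture_output=True)
        if result.returncode == 0:   # deterministic acceptance gate
            return source
    raise RuntimeError("no candidate passed the test suite")
```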
Reminds me a bit of Isaac Asimov's novel "I, Robot" where they rely on positronic brains to do things. In the story, mathematics seems to have caught up and developed a framework to analyse the behavior of an AI system. I wonder if something similar will happen if CS becomes an empirical science, i.e., will we try to infer laws from empirical AI behavior measurements so that we can reason about it more effectively? This would then turn CS into Physics somewhat, but based on an artificial system. Very strange times.
> these AI systems will be flying our airplanes, running our power grids, and possibly even governing entire countries.
I guess we should figure out how to include the three laws of robotics in connectionist models asap…
I'm pretty skeptical based on my experiences so far, but I still believe we'll get there eventually. AI seems to work fine for folks who hate programming and prefer describing their problem in imprecise English in an iterative fashion, as long as their problem can ultimately be implemented with high-level libraries written by competent programmers.
At some point AI will have some conceptual model of software and that's when I think things start to change. How we get there is anyone's guess. I think we're heading in the right direction by using the AST and not simply tokenizing source code. I'm not an AI engineer though. I just help those sorts of things run faster.
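As a toy illustration of that difference, using Python's standard ast and tokenize modules on an arbitrary example: a tokenizer sees a flat sequence with no notion of nesting, while an AST hands the model actual program structure.

```python
import ast
import io
import tokenize

source = "def area(r):\n    return 3.14159 * r * r\n"

# Token view: a flat stream of (type, string) pairs, structure implicit.
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    print(tok.type, repr(tok.string))

# AST view: explicit structure -- a FunctionDef containing a Return
# whose value is a nested tree of BinOp nodes.
print(ast.dump(ast.parse(source), indent=2))
```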
It's a bit like the efficient market hypothesis and the rise of passive funds. The EMH says that if there is any inefficiency in the market, a well-resourced arbitrageur can close it and make a lot of money, so all such inefficiencies are closed before they even arise - so actually there are no inefficiencies. But if there are truly no inefficiencies, then there are no arbitrageurs, as they cannot support themselves! And thus there is no one to keep the markets efficient.
Passive investment management works really well, but it also sort of depends on someone actually reading annual reports and firing incompetent management. Without that, if everyone just invests passively and thinks not one bit about what they are doing, management will pay themselves stupid money and run their businesses into the ground.
So... Sure, LLMs learned a lot from humans, and will eat a lot of programming jobs - maybe 90%+ - which in itself is a little scary. But I'm not sure what a 100% LLM software world looks like. I can imagine, rather, a world where a lot of mundane stuff that now requires real skill is shifted to LLMs - like, dunno, a neighbourhood making its own parking app from a prompt. But is the field of software going to stay frozen in its current shape?
TFA makes the point that most SEs these days have no idea how CPUs actually work. There was a time when this was all crucial knowledge, and you could have argued that high-level languages like Java would make SEs redundant. Well, they didn't, and employment in software has only gone up in the long run.
> most software, as we know it, will be replaced by AI systems that are trained rather than programmed
The problem with this is all the edge cases. There are more ways for unforeseen circumstances to arise than you can train for. That's why you should do a lot of input checking in production.
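A sketch of what that checking looks like in practice (hypothetical field names and action set): never let a trained component's output flow downstream without validating it against hard constraints first, and reject loudly rather than guess.

```python
def check_model_output(raw: dict) -> dict:
    """Validate a model's structured answer before anything downstream
    acts on it; reject rather than guess when a check fails."""
    errors = []
    confidence = raw.get("confidence")
    if not isinstance(confidence, (int, float)) or not 0.0 <= confidence <= 1.0:
        errors.append(f"confidence out of range: {confidence!r}")
    action = raw.get("action")
    if action not in {"approve", "reject", "escalate"}:  # closed set only
        errors.append(f"unknown action: {action!r}")
    if errors:
        raise ValueError("; ".join(errors))  # fail loudly in production
    return raw
```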
I'm not sold on the demise of software engineering. But if it's truly going to die, I'll still be programming, just for hobby purposes.
So what is a software engineer? An SRE?