15 comments

  • xnx 5 hours ago
    The Megahertz Wars were an exciting time. Going from 75 MHz to 200 MHz meant that everything (CPU limited) ran 2x as fast (or better with architectural improvements).

    Nothing since has packed nearly the impact with the exception of going from spinning disks to SSDs.

    • dlcarrier 3 hours ago
      In my experience, SSDs had a bigger impact. Thanks to Wirth's Law (https://en.wikipedia.org/wiki/Wirth%27s_law), the steady across-the-board increase in processing power didn't equate to programs running much faster, e.g. Discord running on a modern computer isn't any more responsive, if not less responsive, than an ICQ client running on a computer 25 years ago.

      SSDs provided a huge bump in performance to each individual computer, but trickled their way into market saturation over a generation or two of computers, so you'd be effectively running the same software but in a much more responsive environment.

      • majormajor 3 hours ago
        Anytime you upgraded from a four-year-old computer to a new one back then - from 16 MHz to 90 MHz, or 75 MHz to 333 MHz, or 333 MHz to 1 GHz, or whatever - it was immediate, it was visceral.

        SSDs booted faster and launched programs faster and were a very nice change, but they weren't that same sort of night-and-day 80s/90s era change.

        The software, in those days, was similarly making much bigger leaps every few years. 256 colors to millions, resolution, capabilities (real time spellcheck! a miracle at the time.) A chat app isn't a great comparison. Games are the most extreme example - Sim City to Sim City 2000; Doom to Quake; Unreal Tournament to Battlefield 1942 - but consider also a 1995 web browser vs a 1999 one.

        • y1n0 18 minutes ago
          For me, at 52, I recall the SSD transformation to be near miraculous. I never once felt that way about a CPU upgrade until getting an M1. I went from a Cyrix 5x86 133 (which was effectively a fast 486) to a Pentium II 266 and it just wasn't that impressive.

          The drag down of swapping became almost a non-issue with the SSD changeover.

          I suppose going from a //e to a IIgs was that kind of leap but that was more about the whole computer than a cpu.

          Now I have to say, swapping to an SSD on my Windows machines at work was far less impressive than going to an SSD with my Macs. I sort of wrote that off as all the antivirus crap that was running. It was very disappointing compared to the transformation on the Mac. On my Macs it was like I suddenly heard the hallelujah chorus when I powered on.

        • dlcarrier 2 hours ago
          That's my point, the software was getting bloated at least as fast as the CPUs were getting faster, so you had to upgrade to a new CPU every few years to run the latest software. With SSDs, there was a huge overlap of CPU generations that might or might not have an SSD, so upgrading to one meant a huge performance boost within the same set of runnable software.

          Also, going from SimCity to SimCity 2000 was pre-bloat. Over the course of five years, the new version was significantly better than the original, but they both targeted the same 486 processor generation, which was brand new when the original SimCity was released, but rather old by the time SimCity 2000 was released. Another five years later, SimCity 3000 added minimal functionality, but required not just a Pentium processor, but a fast one.

          I guess what I'm getting at is that a faster CPU means programs released after it will run better, but faster storage means that all programs, old and new, will run better.

          • steve1977 2 hours ago
            > That's my point, the software was getting bloated at least as fast as the CPUs were getting faster

            I think there's a difference between bloat and actually useful features or performance.

            For example, I started making music with computers in the early 90s. They were only powerful enough to control external equipment like synthesizers.

            Nowadays, I can do everything I could do with all that equipment on an iPad! I would not call that bloat.

            On the other hand, comparing MS Teams to say ICQ, yeah, a lot of that is bloat.

            • myself248 2 hours ago
              > in the early 90s. They were only powerful enough to control external equipment like synthesizers.

              Tell that to ScreamTracker!

              • prmoustache 59 minutes ago
                Screamtracker was sampling. Great for the time, and much more accessible for the teenager I was than buying and controlling synths, but it was not exactly the same thing. More of a competitor to the early Akai MPCs.

                And we were mostly ripping those samples from records on cassettes and CDs, or other mods.

              • matheusmoreira 1 hour ago
                In case anyone's wondering:

                https://youtu.be/roBkg-iPrbw

            • jstanley 1 hour ago
              There is definitely bloat. A few months ago I was messing about with making a QWERTY piano in a web page, and it was utterly unplayable due to the bloat-induced latency in between the fingers and the ears.
        • nucleardog 2 hours ago
          > SSDs booted faster and launched programs faster and were a very nice change, but they weren't that same sort of night-and-day 80s/90s era change.

          For me they were.

          I still remember the first PC I put together for someone with a SSD.

          I had a quite beefy machine at the time and it would take 30 seconds or more to boot Windows, and around 45s to fully load Photoshop.

          Built this machine for someone with entirely low-end (think "i3" rather than "Celeron") components, but it was more than enough for what they wanted it for. It would hit the desktop in around 10 seconds, and Photoshop was ready to go in about 2 seconds.

          (Or thereabouts--I did time it, but I'm remembering numbers from like a decade and a half ago.)

          For a _lot_ of operations, the SSD made an order of magnitude difference. Blew my mind at the time.

          • phil21 43 minutes ago
            SSDs came out after CPUs started to slow down on doubling (single threaded) performance every 12-18mo or so.

            So it was the only way to get the kind of visceral improvement in user experience that CPU and platform upgrades delivered from the mid 90's to the very early 00's.

            Just slapping a new SSD into a three-year-old machine gave a different generation of computer nerds a similar experience.

            Nothing could really match the night and day difference of an entire machine being double to triple the performance in a single upgrade though. Not even the upgrade from spinning disks to SSD. You'd go from a game being unplayable on your old PC to it being smooth as butter overnight. Not these 20% incremental improvements. Sure, load times didn't get too much better - but those started to matter more when the CPU upgrades were no longer a defining experience.

          • majormajor 2 hours ago
            Sure, but what about once Photoshop was open? Aka where you spend most of your day after you start up your stuff?

            Would you take the SSD and a 500 MHz processor, or a 2 GHz dual-core with a 7,200 or 10,000 RPM hard drive? "Some operations are faster" versus the "every single thing is wildly faster" you got from the every-few-years quadrupling (or more) of CPU perf, memory amounts, disk space, etc.

            (45sec to load Photoshop also isn't tracking with my memory, though 30s-1min boot certainly is, but I'm not invested enough to go try to dig up my G4 PowerBook and test it out... :) )

      • gavinsyancey 3 hours ago
        > Discord running on a modern computer isn't any more responsive, if not less responsive, than an ICQ client running on a computer 25 years ago.

        The only thing more impressive than hardware engineers delivering continuous massive performance improvements for the past several decades is software engineers' ability to completely erase that with more and more bloated programs that do essentially the same thing.

        • dlcarrier 2 hours ago
          You joke, but it really is more work. I've developed software in languages from assembly to JavaScript, and for any given functionality it's been easier to write it in RISC assembly running directly on the hardware than to get something working reliably in JavaScript running on a framework in an interpreter in a VM in a web browser, where it's impossible to reliably know what a call is going to do, because everything is undocumented and untested.

          One of the co-signers of the Agile Manifesto had previously stated that "The best way to get the right answer on the Internet is not to ask a question; it's to post the wrong answer." (https://en.wikipedia.org/w/index.php?title=Ward_Cunningham#L...) I'm convinced that the Agile Manifesto was an attempt to make an internet post of the most-wrong way to manage a software project, in hopes someone would correct it with the right answer, but instead it was adopted as-is.

          • iknowstuff 1 hour ago
            What’s the most complex thing you wrote in RISC assembly?
      • vachina 3 hours ago
        > Discord running on a modern computer isn't any more responsive, if not less responsive, than an ICQ client running on a computer 25 years ago.

        I feel this. Humanity has peaked.

        • accrual 1 hour ago
          Every time Discord updates (which is often) I'm like "cool, slightly more code to run on the same hardware..."
      • lich_king 25 minutes ago
        Eh. In the 1980s and 1990s, the capabilities of the software you could run on your new computer were changing dramatically every two years or so. Completely new types of computer games and productivity software, vastly improved audio and video, more and more real-time functionality.

        Nowadays, you really don't get these magical moments when you upgrade, not on the device itself. The upgrade from Windows 10 to Windows 11 was basically just more ads. Games released today look about as good as games released 5-10 years ago. The music-making or photo-editing program you installed back then is still good. Your email works the same as before. In fact, I'm not sure I have a single program on my desktop that feels more capable or more responsive than it did in 2016.

        There's some magic with AI, but that's all in the cloud.

      • beastman82 2 hours ago
        Agree 100%. The compute was always bottlenecked by insanely high I/O latency. SSDs opened up fast computers like no processor ever did.
      • steve1977 2 hours ago
        I mean, HDD were much faster than floppy disks. Which were in turn much faster than tape cassettes. And so on...
      • idiotsecant 3 hours ago
        This is silly. That's like saying that machines haven't gotten any better because a helicopter doesn't eat any less hay than a horse did.
        • dlcarrier 2 hours ago
          I don't follow your analogy. Can you elaborate?
    • embedding-shape 4 hours ago
      > Nothing since has packed nearly the impact with the exception of going from spinning disks to SSDs.

      "Bananas" core-counts gave me the same experience. Some year ago I moved to Ryzen Threadripper and experienced similar "Wow, compiling this project is now 4x faster" or "processing this TBs of data is now 8x faster", but of course it's very specific to specific workloads where concurrency and parallism is thought of from the ground up, not a general 2x speed up in everything.

    • st_goliath 4 hours ago
      > The Megahertz Wars were an exciting time.

      About a week ago, completely out of the blue, YouTube recommended this old gem to me: https://www.youtube.com/watch?v=z0jQZxH7NgM

      A Pentium 4, overclocked to 5GHz with liquid nitrogen cooling.

      Watching this was such an amazing throwback. I remember clearly the last time I saw it, which was when an excited friend showed it to me on a PC at our school's library. A year or so before YouTube even existed.

      By 2005, my Pentium 4 Prescott at home ran at some 3.6 GHz without overclocking, 4 GHz models for the consumer market had already been announced (but were plagued by delays), and surely 10 GHz was "just a few more years away".

      • accrual 1 hour ago
        IIRC, part of the GHz chase was that very long pipelines like the Pentium 4's tend to show increasing benefits at higher clocks. If you can keep the pipeline full, the system reaps the benefits. Sort of like a drag racer: very fast in a straight line but terrible in the corners.

        But with longer pipelines come larger penalties when the pipeline needs to be flushed, so the P4 eventually hit a wall and Intel returned to the late Pentium III Tualatin core, refining it into the Pentium M, which later evolved into the first Core CPUs.
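        A back-of-the-envelope sketch of that trade-off (the branch mix, misprediction rate, and stage counts below are all made-up assumptions, not measured figures): the deeper pipeline only pays off if its clock advantage isn't eaten by the extra flush cycles.

        ```c
        /* Toy model: effect of pipeline depth on effective CPI and throughput.
           All numbers are illustrative assumptions. */
        #include <stdio.h>

        static double effective_cpi(double branch_frac, double mispredict_rate,
                                    int flush_penalty_cycles) {
            /* Ideal 1 instruction/cycle, plus a full pipeline flush on every
               mispredicted branch. */
            return 1.0 + branch_frac * mispredict_rate * flush_penalty_cycles;
        }

        int main(void) {
            double short_pipe = effective_cpi(0.20, 0.05, 12); /* P6-ish depth       */
            double long_pipe  = effective_cpi(0.20, 0.05, 30); /* NetBurst-ish depth */

            /* Throughput ~ clock / CPI: the longer pipe needs its higher clock
               just to stay ahead of the added stall cycles. */
            printf("short pipeline @ 1.0 GHz: %.2f G instructions/s\n", 1.0 / short_pipe);
            printf("long  pipeline @ 1.5 GHz: %.2f G instructions/s\n", 1.5 / long_pipe);
            return 0;
        }
        ```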

      • fnord77 4 hours ago
        Only just last year did someone goose a PC CPU to 9.13 GHz:

        https://www.tomshardware.com/pc-components/cpus/core-i9-1490...

    • rr808 3 hours ago
      I still remember my first CPU with a heatsink. It seemed like a temporary dumb hack.
      • iknowstuff 1 hour ago
        Well, it kinda was! Seeing how power-efficient iPhone chips are despite hovering near the top of single-core benchmarks.
      • oso2k 2 hours ago
        I had the same inclination back in the 90s when I upgraded my Cyrix 486 SLC2 50MHz without a heat sink (which seems like a no-no in retrospect) to a Cyrix MediaGX 133MHz. The stock fan was immediately noticeable. I thought I had done something wrong.
        • myself248 1 hour ago
          Upgrading and Repairing PCs 4th edition even says directly that some shady resellers will put a heatsink on a chip that they're running beyond spec, but that Intel designs all their processors to run at rated speed without one.
          • SoftTalker 27 minutes ago
            I've never seen a Xeon without a heat sink, I don't believe they are designed to run without one.
    • HPsquared 4 hours ago
      SSDs were such a revolution though, and a really rewarding upgrade. I'd fit SSDs to friends' and family members' computers as an upgrade.
      • micv 4 hours ago
        Getting my first SSD was absolutely the best computer upgrade I've ever bought. I didn't even realise how annoying load times were because I was so used to them and coming from C64s and Amigas even spinning rust seemed fairly quick.

        It took a long time before I felt a need to improve my PC's performance again after that.

        • coffeebeqn 4 hours ago
          There were quite a few mind blowing upgrades back in the day. The first sound card instead of PC beeper was one of my most memorable moments.

          I remember loading up Doom, plugging my shitty earplugs that had a barely long enough cable and hearing the “real” shotgun sound for the first time. Oo-wee

      • patwolf 37 minutes ago
        I owe much of my career to an SSD. I had a work laptop that I upgraded myself with an 80GB Intel SSD, which was pretty exotic at the time. It was so fast at grepping through code that I could answer colleagues’ questions about the code in nearly real time. It was like having a superpower.
      • sigmoid10 4 hours ago
        I once had a decade old Thinkpad that suddenly became my new work laptop once more thanks to an SSD. It's a true shame they simply don't make them like this anymore.
      • dcminter 4 hours ago
        Just before I installed an SSD was the last time I owned a computer that felt slow.
    • nunez 2 hours ago
      Agreed. That was the next big boost! I installed my first SSD in this HP workstation-grade laptop that we got "for free" from college. It was like getting a brand new computer! In fact, I ended up giving that computer to my sister who ran it into the ground.

      I didn't feel any huge speed boosts like that until the M1 MacBook in 2020.

    • kwanbix 45 minutes ago
      My first Pentium was clocked at 60 MHz.
    • pdpi 1 hour ago
      I think the single biggest jump I ever experienced was my first dedicated GPU — a GeForce 2 MX if I'm not mistaken.
    • iwontberude 24 minutes ago
      I remember our school getting new computers to replace the 233 MHz G3 iMac computer lab during the Megahertz Wars and the vice principal announcing the purchase of new "screaming fast" 600 MHz Dell OptiPlex GX100s. The nice thing is that the G3 iMacs then got pushed out to the classrooms, but it was sad to see Apple lose the spot in the lab. I miss the wonder of playing Pangea Software games for the first time like Bugdom and Nanosaur.
    • geon 4 hours ago
      GPUs for 3d graphics were a game changer.

      I can see why you wouldn’t consider it as impactful if you weren’t into gaming at the time.

    • jmyeet 3 hours ago
      That wasn't how it worked.

      Up until the 486, the clock speed and bus speed were basically the same and topped out at about 33MHz (IIRC). The 486 started the thing of making the CPU speed a multiple of the bus speed, e.g. the 486DX2/66 (66MHz CPU, 33MHz bus) and 486DX4/100 (100MHz CPU, 33MHz bus). And that's continued to this day (kind of).

      But the point is the CPU became a lot faster than the IO speed, including memory. So these "overdrive" CPUs were faster but not 2-4x faster.

      Also, in terms of impact, yeah there was a massive increase in performance through the 1990s but let's not forget the first consumer GPUs, namely the 3dfx Voodoo and later NVidia and ATI. Oh, Matrox Millennium anyone?

      It's actually kind of wild that NVidia is now a trillion dollar company. It listed in 1998 for $12/share and adjusted for splits, Google is telling me it's ~3700x now.

    • varispeed 3 hours ago
      I don't know. I felt this way when switching from Intel laptop to Apple M1. I am still using it today and I prefer it over desktop PC.
      • embedding-shape 2 hours ago
        Have you ever used proper desktop computers? I suppose such a move would feel significant if you've mostly been using laptops.
        • philistine 2 hours ago
          But that's the thing; a laptop is fundamentally different. Of course if there's the equivalent of a thermopump under my desk I'm going to get crazy performance. The magic was that Apple brought the uncompromised experience to a laptop.
  • nunez 2 hours ago
    What a time to be a kid then.

    We had a hand-me-down DEC x86 desktop at home with a Pentium II running at 233 MHz until I want to say 2002? This was around the time I learned how to build a PC since doing that was cheaper than buying one and no-one in my family had the money for that!

    I saved whatever money I could to buy a 128MB stick of RAM from Staples (maybe it was 256MB?), a few other things from TigerDirect/Newegg and _this processor_. With some help from my uncle and a guide I printed from somewhere whose website started with '3D' (it was quite popular back then; I don't think it exists anymore), I got it done.

    Going from 233 MHz to this was like going from walking to flying in a jet! Everything was SO MUCH F**ING FASTER. Windows XP _flew_. (The DEC barely made the minimum requirements for it, and boy did I feel it.) Trying to install Longhorn on it a year or two later brought me back into walking again, though. :D

    • lovehashbrowns 48 minutes ago
      My first pc I built was with an AMD athlon 64 4000+ and a GeForce 6600GT. Going to that from an e-machines piece of junk was INSANE. It’s so hard to come up with a similar experience shift nowadays. Even websites seemed to load instantly with the same DSL connection. Everything felt soooooo good.
    • general_reveal 57 minutes ago
      GeForce 3.
  • Sharlin 5 hours ago
    The i486DX 33MHz was introduced in May 1990. A 30x increase, or about five doublings, in clock speeds over ten years. That's of course not the whole truth; the Athlon could do much more in one cycle than the 486. In any case, in 2010 we clearly did not have 30GHz processors – by then, the era of exponentially rising clock speeds was very decidedly over. I bought an original quadcore i7 in 2009 and used it for the next fifteen years. In that time, roughly one doubling in the number of cores and one doubling in clock speeds occurred.
    • adrian_b 4 hours ago
      "The era of exponentially rising clock speeds" was already over in 2003, when the 130-nm Pentium 4 reached 3.2GHz.

      All the later CMOS fabrication processes, starting with the 90-nm process (in 2004), have provided only very small improvements in clock frequency, so that now, 23 years after 2003, desktop CPUs still have not doubled that clock frequency.

      In the history of computers, the decade with the highest rate of clock frequency increase has been 1993 to 2003, during which the clock frequency has increased from 67 MHz in 1993 in the first Pentium, up to 3.2 GHz in the last Northwood Pentium 4. So the clock frequency had increased almost 50 times during that decade.

      For comparison, in the previous decade, 1983 to 1993, the clock frequency in mass-produced CPUs had increased only around 5 times, i.e. at a rate about 10 times slower than in the next decade.

      • hedora 3 hours ago
        Sort of: The Pentium 4 was a strange chip. It had way too many pipeline steps, and was basically just chasing high clock speed marketing numbers instead of performance. In other words, it hit "3.2GHz" by cheating.

        I'd argue you'd need to look at AMD's Athlon XP or 64-bit processors, or Intel's Pentium III / Core 2 Duo line, to figure out when clock speeds stopped increasing.

    • layer8 4 hours ago
      On the plus side, the 486DX-33 didn’t require active cooling. The second half of the 1990s was when home computing started to become noisy, and the art of trying to build silent PCs began.
      • johnflan 1 hour ago
        The CPU didn’t but the chonker of a fan in the PSU sure made up for it
    • bee_rider 4 hours ago
      It is true that we haven’t seen single core clock speeds increasing as fast, for a long while now. And I think everyone agrees that some nebulously defined “rate of computing progress” has slowed down.

      But, we can be slightly less pessimistic if we're more specific. Already by the early 90's, a lot of the performance increase came from strategies like pipelining, superscalar execution, and branch prediction. Instruction-level parallelism. Then in 200X we started using additional parallelism strategies like multicore and SMT.

      It isn’t a meaningless distinction. There’s a real difference between parallelism that the compiler and hardware can usually figure out, and parallelism that the programmer usually has to expose.

      But there’s some artificiality to it. We’re talking about the ability of parallel hardware to provide the illusion of sequential execution. And we know that if we want full “single threaded” performance, we have to think about the instruction level parallelism. It’s just implicit rather than explicit like thread-level parallelism. And the explicit parallelism is right there in any modern compiler.

      If the syntax of C was slightly different, to the point where it could automatically add OpenMP pragmas to all its for loops, we'd have 30GHz processors by now, haha.
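      As a minimal sketch of that last point (not literally a compiler feature, just what the explicit version looks like today; assumes a toolchain like `gcc -O2 -fopenmp`): the programmer spells the parallelism out with one pragma, and the compiler plus runtime map it onto however many cores exist.

      ```c
      /* sum.c - explicit thread-level parallelism via OpenMP (assumed build:
         gcc -O2 -fopenmp sum.c). */
      #include <stdio.h>
      #include <stdlib.h>
      #include <omp.h>

      int main(void) {
          const long n = 50 * 1000 * 1000;
          double *a = malloc(n * sizeof *a);
          if (!a) return 1;
          for (long i = 0; i < n; i++)
              a[i] = 1.0;

          double sum = 0.0;
          /* One pragma exposes the loop's independence; scheduling across
             cores is handled for us, much like ILP is handled implicitly. */
          #pragma omp parallel for reduction(+:sum)
          for (long i = 0; i < n; i++)
              sum += a[i];

          printf("sum = %.0f (up to %d threads)\n", sum, omp_get_max_threads());
          free(a);
          return 0;
      }
      ```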

    • hedora 3 hours ago
      Clock speed increases definitely slowed down, but now that software can use parallelism better, we're seeing big wins again. Current desktop/laptop packages are doing 100 trillion operations per second. The article's processor could do one floating point op per cycle, or 1B ops. So, we've seen a 100,000x speedup in the last 25 years. That's a doubling every ~ 1.5 years since 2000.

      It's not quite apples-to-apples, of course, due to floating point precision decreasing since then, vectorization, etc, but it's not like progress stopped in 2000!

      • lysace 3 hours ago
        Web browsing is still largely single/few-threaded in practice, afaik. (Right?)
  • hedora 4 hours ago
    The Athlon XP was the bigger milestone, as I remember it.

    They were both "seventh generation" according to their marketing, but you could get an entire GHz+ Athlon XP machine for much less than half the $990 tray price from the article.

    I distinctly remember the day work bought a 5 or 6 node cluster for $2000. (A local computer shop gave us a bulk discount and assembled it for them, so sadly, I didn't poke around inside the boxes much.)

    We had a Solaris workstation that retailed for $10K in the same office. Its per-core speed was comparable to one Athlon machine, so the cluster ran circles around it for our workload.

    Intel was completely missing in action at that point, despite being the market leader. They were about to release the Pentium 4, and didn't put anything decent out from then to the Core 2 Duo. (The Pentium 4 had high clock rates, but low instructions per cycle, so it didn't really matter. Then AMD beat Intel to market with 64 bit support.)

    I suspect history is in the process of repeating itself. My $550 AMD box happily runs Qwen 3.5 (32B parameters). An nvidia board that can run that costs > 4x as much.

    • ahartmetz 1 hour ago
      The article links to a list of "The five greatest AMD CPUs". I've owned two and a half of these! Athlon XP 1800+, Ryzen 7 1700 (I had the 1800X which was just a higher bin of the same chip), and Ryzen 9 3950X.

      That same article also says that extending x86 to 64 bits "wasn't hard", which I'm not so sure about. There are plenty of mistakes AMD could have made and cleanups they could have missed, but they handled it all quite well AFAICS.

      • iknowstuff 1 hour ago
        Next up AI 395+. I love this thing. Sips power!
        • ahartmetz 1 hour ago
          I got one of these in a laptop. For a laptop it doesn't really sip power, especially at idle (6-7 watts for the whole laptop). It's quite good anyway.
  • random3 2 hours ago
    Fun times. Coolers, paste, fans, supply watts, DIP switches and jumpers. Quake, 3dfx Voodoo vs NVidia GeForce. This is where it all started, kids.

    I was in high school and was running a "computer games club" (~ Internet cafe for games and kids) since 1998 when we got a place, renovated it ourselves, got custom built furniture (cheap narrow desks) and initially 6 computers - AMDs at 300Mhz. By 2000 we broke a wall in the adjacent space and had ~15, cable + satellite internet for downloads and whatever video cards we could buy or scrap. It was wild.

    • nikanj 2 hours ago
      Finding high school kids with a similar "tech" background today seems really hard. Tech users, sure, chronic phone / game addicts are everywhere, but that tweaker spirit is rare
  • ehnto 2 hours ago
    I bought a whole bunch of parts with my first Athlon. I think I bought a Soundblaster, and a Radeon GFX card if I am remembering the timeline right. The soundblaster came with a demo of a Lara Croft game that used the then incredible spatial audio processing to great effect. The industry promptly forgot about that technology, and to this day game audio rarely matches the potential of real time spatial dynamics that we once reached 20 years ago.
    • phil21 26 minutes ago
      Spatial audio is pretty good these days on some titles. It's just most folks don't have a sound system that can really do much with it. Headphones can only do so much in this arena.

      Couple a modern AAA title (like Battlefield 6, etc.) with a proper Atmos sound system and you will likely be pretty amazed. Even a simple 5.1 setup is pretty decent for hearing footsteps behind you/etc. which actually does help with gameplay.

      I haven't kept up on it as my computer gaming area doesn't lend itself towards a proper speaker setup these days, but playing with headphones on lately has made me start to look into this again. I need to find some high quality tiny cube speakers or something to be able to put in weird spots on the ceilings/walls.

  • dd_xplore 5 hours ago
    I remember back in 2006 I used to browse overclocking forums to overclock my Pentium 4. I had tons of fun consuming lots of instructions; I learned the BIOS, changed PLL clocks, memory clocks, etc.
    • rckclmbr 4 hours ago
      I bought a car radiator and dremeled out my case, visited Home Depot for all the tubes and connectors. It’s too easy nowadays to add watercooling
  • fleventynine 2 hours ago
    I upgraded to this exact CPU from a 200 MHz Pentium in the fall of 2000. Easily the largest jump in performance of any upgrade I've ever done.
  • mtucker502 5 hours ago
    What progress is being made in overcoming the current thermal limits blocking us from high clock rates (10Ghz+)?
    • sparkie 3 hours ago
      That's not going to happen, but there's alternative research such as [1] where we get rid of the clock and use self-timed circuits.

      [1]:https://arc.cecs.pdx.edu/

    • amelius 2 hours ago
      There have been overclockers who reached 9GHz using liquid helium.

      It's simply not possible at room temperature; it takes extreme cooling.

      Also, you will run into interconnect speed issues, since 10 GHz corresponds to 0.1 nanoseconds per cycle, which corresponds to 3 centimeters (assuming light speed; real signals propagate slower).

      So sadly, we'll be stuck in this "clock-speed winter" for a little longer.
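      The same arithmetic, spelled out (the 0.5c on-wire propagation factor is an assumed ballpark, not a measured value):

      ```c
      /* How far a signal travels in one clock period at various frequencies.
         The 0.5c propagation factor is an assumed ballpark. */
      #include <stdio.h>

      int main(void) {
          const double c = 3.0e8;                      /* speed of light, m/s */
          const double freqs[] = { 1.0e9, 3.0e9, 10.0e9 };

          for (int i = 0; i < 3; i++) {
              double period  = 1.0 / freqs[i];         /* seconds per cycle  */
              double at_c    = c * period * 100.0;     /* cm at light speed  */
              double on_wire = 0.5 * at_c;             /* cm at assumed 0.5c */
              printf("%4.0f GHz: %.1f cm at c, ~%.1f cm on a real interconnect\n",
                     freqs[i] / 1e9, at_c, on_wire);
          }
          return 0;
      }
      ```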

    • vessenes 4 hours ago
      Like any doubling rule, the buck has to stop somewhere. Higher energy usage + smaller geometry means much more exotic analog physics to worry about in chips. I’m not a silicon engineer by any means but I’d expect 10Ghz cycles will be optical or very exotically cooled or not coming at us at all.
      • adrian_b 4 hours ago
        Reaching 10 GHz for a CPU will never be done in silicon.

        It could be done if either silicon is replaced with another semiconductor, or semiconductors are replaced with something else for making logic gates, e.g. organic molecules, making it possible to design a logic gate atom by atom.

        For the first variant, i.e. replacing silicon with another semiconductor, research is fairly advanced, but it would increase fabrication costs, so it will be done only when all methods for further improvement of silicon integrated circuits become ineffective or too expensive, which is unlikely to happen earlier than a decade from now.

        • Hikikomori 3 hours ago
          Overclockers are pretty close.
          • adrian_b 52 minutes ago
            What can be done by raising the power consumption per core to hundreds of watts, while cooling the package with liquid nitrogen, is completely irrelevant to what can be done with a CPU that must operate reliably for years at an acceptable energy cost.

            For the latter case, 6 GHz has barely been reached, in CPUs that cannot be produced in large quantities and whose reliability is dubious.

      • FpUser 4 hours ago
        Having RAM read / write faster will be of way more benefit
    • brennanpeterson 4 hours ago
      None for normal compute, since energy density is still fundamental. But the interesting option is cryogenic computing, which can have zero switching energy and clock rates in the tens of GHz.

      Some neat startups to watch for in this space.

    • magic_man 5 hours ago
      The dynamic power consumed is C·V²·f. It makes no sense to keep increasing frequency when it makes power so much worse.
      • dlcarrier 3 hours ago
        At lower frequencies, leakage current plays a larger role than gate capacitance, so for any given process node there's a sweet spot. For medium to low loads, it takes less power to rapidly alternate between cutting power to a core and running it at a higher frequency than needed, than to run it continuously at a lower frequency.

        Newer process nodes decrease the per-gate capacitance, increasing the optimal operating frequency.
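        A toy calculation of both effects (the capacitance, voltages, and leakage below are invented purely for illustration): dynamic power scales as C·V²·f, but once leakage is in the picture, racing to idle at a higher clock can still cost less energy for the same work.

        ```c
        /* Toy energy comparison: run slow vs. race-to-idle.
           C, V, f and leakage values are invented for illustration only. */
        #include <stdio.h>

        static double dynamic_power(double c_farads, double volts, double hertz) {
            return c_farads * volts * volts * hertz;   /* P_dyn ~ C * V^2 * f */
        }

        int main(void) {
            const double c    = 1e-9; /* effective switched capacitance, F (assumed)     */
            const double leak = 2.0;  /* leakage power while the core is on, W (assumed) */

            /* Option A: 1 GHz at 0.9 V, busy for a full second. */
            double e_slow = (dynamic_power(c, 0.9, 1.0e9) + leak) * 1.0;

            /* Option B: 3 GHz at 1.1 V finishes the same work in a third of the
               time, then the core is power-gated so leakage stops too. */
            double e_fast = (dynamic_power(c, 1.1, 3.0e9) + leak) * (1.0 / 3.0);

            printf("run slow:     %.2f J\n", e_slow);  /* ~2.81 J */
            printf("race to idle: %.2f J\n", e_fast);  /* ~1.88 J */
            return 0;
        }
        ```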

      • vlovich123 5 hours ago
        So heat. There are efforts to switch to optics, which don't have that heat problem so much, but they have the problem that it's really hard to build an optical transistor. Plus, anywhere you're interfacing with the electrical world, you're back to the heat problem.

        Maybe reversible computing will help unlock several more orders of magnitude of growth.

    • HarHarVeryFunny 4 hours ago
      What would be the benefit? You don't need a 10GHz processor to browse the web, or edit a spreadsheet, and in any case things like that are already multi-threaded.

      The current direction of adding more cores makes more sense, since this is really what CPU intensive programs generally need - more parallelism.

      • michaelt 2 hours ago
        Because someone decided to write all the software in javascript and python, which don't benefit from the added cores.
      • vaylian 3 hours ago
        You technically don't even need a 300MHz processor for the use cases that you name. But Intel and others kept developing faster CPUs anyway.
      • nurettin 3 hours ago
        Single core speed is absolutely a thing that is needed and preferred to multicore. That's why we have avx, amx, etc.
        • KeplerBoy 3 hours ago
          Meh, avx is also just parallelism. That won't get you around Amdahl's law.
          • nurettin 1 hour ago
            Not sure what you mean. It lets you do 64 operations with one instruction. Where's the diminishing returns?
            • adrian_b 24 minutes ago
              Vector or matrix instructions do not improve single-thread speed in the correct meaning of this term, because they cannot improve the speed of a program that executes a sequence of dependent operations.

              Their purpose is to provide parallel execution at a lower cost in die area and at a better energy efficiency than by multiplying the number of cores. For instance, having 16 cores with 8-wide vector execution units provides the same throughput as 128 cores, but at a much lower power consumption and at a much smaller die area. However, both structures need groups of 128 independent operations every clock cycle, to keep busy all execution units.

              The terms "single-thread" performance vs. "multi-threaded" performance are not really correct.

              What matters is the 2 performance values that characterize a CPU when executing a set of independent operations vs. executing a set of operations that are functionally-dependent, i.e. the result of each operation is an operand for the next operation.

              When executing a chain of dependent operations, the performance is determined by the sum of the latencies of the operations, and it is very difficult to improve the performance otherwise than by raising the clock frequency.

              On the other hand, when the operations are independent, they can be executed concurrently and with enough execution units the performance may be limited only by the operation with the longest duration, no matter how many other operations are executed in parallel.

              For parallel execution, there are many implementation methods that are used together, because for most of them there are limits for the maximum multiplication factor, caused by constraints like the lengths of the interconnection traces on the silicon die.

              So some of the concurrently executed operations are executed in different stages of an execution pipeline, others are executed in different execution pipelines (superscalar execution), others are executed in different SIMD lanes of a vector execution pipeline, others are executed in different CPU cores of the same CPU complex, others are executed in different CPU cores that are located on separate dies in the same package, others are executed in CPU cores located in a different socket in the same motherboard, others in CPU cores located in other cases in the same rack, and so on.

              Instead of the terms "single-thread performance" and "multi-threaded performance" it would have been better to talk about performance for dependent operations and performance for independent operations.

              There is little if anything that can be done by a programmer to improve the performance for the execution of a chain of dependent instructions. This is determined by the design and the fabrication of the CPU.

              On the other hand, either the compiler or the programmer must ensure that the possibility of executing operations in parallel is exploited to the maximum extent possible, by various means: creating multiple threads, which will be scheduled on different CPU cores; using the available SIMD instructions; and interleaving any chains of dependent instructions, so that adjacent instructions are independent and will be executed either in different pipeline stages or in different execution pipelines. Most modern CPUs use out-of-order execution, so the exact order of interleaved dependent instructions is not critical, because they will be reordered by the CPU, but some interleaving done by the compiler or by the programmer is still necessary, because the hardware uses a limited instruction window within which reordering is possible.
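              A small sketch of the dependent-vs-independent distinction above (the 4-way split is an arbitrary illustrative choice, not a tuning recommendation): the same additions, once as a single latency chain and once as independent partial sums that the hardware can overlap and a compiler can vectorize.

              ```c
              /* Dependent chain vs. independent partial sums over the same data.
                 The 4-way split is an arbitrary illustrative choice. */
              #include <stdio.h>

              #define N (1 << 22)            /* 4M floats */
              static float a[N];

              int main(void) {
                  for (int i = 0; i < N; i++)
                      a[i] = 1.0f;

                  /* Dependent: each add needs the previous result, so throughput is
                     bounded by the add latency, no matter how many adders the core has. */
                  float chain = 0.0f;
                  for (int i = 0; i < N; i++)
                      chain += a[i];

                  /* Independent: four accumulators can be in flight at once, and a
                     vectorizing compiler can map them onto SIMD lanes. */
                  float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
                  for (int i = 0; i < N; i += 4) {
                      s0 += a[i];
                      s1 += a[i + 1];
                      s2 += a[i + 2];
                      s3 += a[i + 3];
                  }
                  float split = s0 + s1 + s2 + s3;

                  printf("chain = %.0f, split = %.0f\n", chain, split);
                  return 0;
              }
              ```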

      • moffkalast 2 hours ago
        For parallelism we already have SIMD units like AVX and well... GPUs. CPUs need higher single thread speeds for tasks that simply cannot make effective use of it.
      • hulitu 49 minutes ago
        > You don't need a 10GHz processor to browse the web, or edit a spreadsheet,

        To browse the web is debatable. But for svchost.exe, Teams, Office 365 and Notepad, you definitely need one. /s

        Programming is a lost art.

  • herodoturtle 3 hours ago
    I remember upgrading my 486 DX2 66 MHz to a DX4 100 MHz and all of a sudden being able to run Winamp and Quake. That felt pretty epic at the time.
  • paulryanrogers 3 hours ago
    My first 1GHz was an AMD, also my first non-Intel, and its required fan was so loud that I was glad to get rid of it.

    The speed was nice, and some competition helped lower prices.

  • davidee 4 hours ago
    I have very fond memories of my first dual-cpu Athlon machine.

    It was the workstation on which I learned Logic Audio before, you know, Apple bought Emagic. I took that machine, running very low-latency Reason, to live gigs with my band.

    Carting around a full-tower computer (not to mention the large CRT monitor we needed) next to a bunch of tube Fender & Ampeg amps was wild at the time. Finding a good drummer was hard; we turned that challenge into a lot of fun programming rhythm sections we could jam to, and control in real-time, live.

  • nikanj 2 hours ago
    The craziest thing is, I don't actually know how many gigahertz either my PC or my MacBook runs at. The megahertz race used to be fierce!
    • myself248 1 hour ago
      It's essentially random at any given moment. If I peek, mine will say it's running anywhere between 700MHz and 3.4GHz. Sometimes I think it goes even faster, but only if it's weirdly cold at the time.
  • jmyeet 3 hours ago
    I have a hard time remembering what computers I had in the 1990s now. I had an 8086 in the 1980s. I think the next one I had was a 486/33 in the early 90s and I had this for years. I remember having a Cyrix 586 at some point later. I think the next jump was in the early 2000s and I honestly don't remember what that CPU was, so I can't say when I got my first 1GHz+ CPU. Probably that 2002 PC. No idea what it was now. But it did survive in some form for another 12 years.

    Fun fact #1: many today may not know that the only reason Intel switched to the Pentium name was because a court ruled that they couldn't trademark a number, and Intel had cross-licensed the microarchitecture and instruction set to AMD and Cyrix.

    It was with the Pentium 4 that clock speeds went insane and became a huge marketing point, even though Pentium chips had lower IPC than Athlons (at that time). There was a belief that CPUs would keep going to 10GHz+. Instead they hit a ceiling at about 3GHz that has barely increased to this day (ignoring burst modes).

    Intel originally intended to move workstations and servers to the EPIC architecture (e.g. Merced was an early chip in this series). This began in the 1990s but was years delayed and required writing software in a very particular way. It never delivered on its promise.

    And AMD, thanks to the earlier cross-licensing agreement, just ate Intel's lunch with the Athlon 64 starting in 2003 by adding the x86_64 instructions, which we still use today.

    Fun Fact #2: it was the Pentium 3 that saved Intel's hide long after it was discontinued in favor of the Pentium 4.

    The early 2000s were the nascent era of multi-core CPUs. The Pentium 3 had survived in mobile chips and become the Pentium-M and then the Core Duo (and Core 2 Duo later). This was the Centrino platform and included wireless (IIRC 802.11b/g). The Pentium 4 hit the gigahertz ceiling and EPIC wasn't going to happen, so Intel went back to the drawing board, revived the mobile Pentium-3 platform, added AMD's 64-bit instructions, and released it as their new desktop CPUs. Even modern Intel CPUs are in many ways a derivation of the Pentium-3 [1].

    [1]: https://en.wikipedia.org/wiki/List_of_Intel_Core_processors

  • 1970-01-01 4 hours ago
    Argh. The headline. The opener. Awful. Where are editors in 2026? There's no way an LLM would write this.

    The GHz barrier wasn't special. What was much more important was the fact that AMD was giving Intel a hard time and there was finally hard competition.

    • adrian_b 4 hours ago
      In terms of marketing, the "GHz" barrier was special, because surpassing it has indeed created a lot of recognition in the general public for the fact that the AMD Athlon CPUs were better than the Intel Pentium III CPUs.

      In reality, of course, what you say is true, and the fact that the Athlon could provide a few hundred extra MHz of clock frequency was not decisive.

      Athlon had many improvements in microarchitecture in comparison with Pentium III, which ensured a much better performance even at equal clock frequency. For instance, Athlon was the first x86 CPU that was able to do both a floating-point multiplication and a floating-point addition in a single clock cycle. Pentium III, like all previous Intel Pentium CPUs required 2 clock cycles for this pair of operations.

      This much better floating-point performance of Athlon vs. Intel contrasted with the previous generation, where AMD K6 had competitive integer performance with Intel, but its floating-point performance was well below that of the various Intel Pentium models (which had hurt its performance in some games).

    • dlcarrier 3 hours ago
      AMD being competitive at the time is what mattered, but there was still technological advancement needed for them to be competitive. In this case, it was AMD's use of copper interconnects that allowed them not only to hit 1 GHz, but to quickly clock up from there: https://en.wikipedia.org/wiki/Athlon#Original_release
    • HarHarVeryFunny 4 hours ago
      There was a time where increased clock speeds, or more generally increased processor throughput, was important. I can remember when computers were slow, even for things like browsing the web (and not just because internet connection speeds were slow), and paying more for a new faster computer made sense. I think this time period may well have lasted roughly until the "GHz era" or thereabouts, after which even the cheapest, slowest computers were all that anybody really needed, except for gamers, where the solution was a faster graphics card (which eventually led to GPU computing and the current AI revolution!)
      • 1970-01-01 4 hours ago
        You're conflating a few things here. The Vista era was the biggest requirement hit. That was the time where people really needed a faster PC to continue browsing. Before that, you could get away with XP running on a sub-GHz processor.
        • tosti 3 hours ago
          That's not how I remember recent history because Linux was already pretty good before microslop XP came out. I've been daily driving cheap junk ever since, no regrets.