Talkie: a 13B vintage language model from 1930

(talkie-lm.com)

188 points | by jekude 6 hours ago

23 comments

  • rokhayakebe 49 minutes ago
    *What do you think the world will look like in 2025 TALKIE-1930*

    According to a forecast by an eminent statistician, the world will, in 2025, support a population of 6,600,000,000. All Europe will be one vast network of railways, and travellers will be able to go from London to Constantinople in forty hours, and from London to Peking in a week. The globe will be girdled with telegraph wires, and messages will be sent from New York to Calcutta in a few minutes. The Atlantic will be crossed in four days, and the voyage to Australia will be accomplished in a month. In 2025, mankind will be united by spoken language, and one uniform currency will be adopted by all nations. In that era, also, universal peace may be expected, for standing armies will have become obsolete, and the art of war will be forgotten. The earth will yield nourishment to its inhabitants in abundance, for waste lands will have been reclaimed, and the deserts will be made fertile. The bowels of the earth will have been completely explored, and coal and metals will no longer be dug from the quarry. The great coal fields will have become exhausted, and an universal scarcity of fuel will compel men to turn their attention to other sources of heat. For a time, chemical agency will be tried, but in the end, recourse will be had to the sun's rays, and the power of waterfalls will be made available for all purposes of life. The earth will have been thoroughly drained, and malarious diseases will be unknown. The art of healing will have been simplified, and diseases will be cured speedily and surely. Ignorance will no longer be suffered to exist, and elementary knowledge will be imparted to all. Then, too, will commence an era of good taste. Architecture will be freed from ugliness, sculpture will be disentangled from barbarism, and painting will cease to be hideous. Music will no longer be discord, and poetry will be something better than..

    • mc3301 38 minutes ago
      This is beautiful.
    • kQq9oHeAz6wLLS 12 minutes ago
      > Ignorance will no longer be suffered to exist, and elementary knowledge will be imparted to all.

      Oh good, for a moment I didn't think reeducation camps were in our future.

      > Architecture will be freed from ugliness

      Uh, friend, I have some bad news...

  • Animats 21 minutes ago
    It seems to be using more info from pre-1900 rather than 1930. It doesn't know about the Great Depression (1929-WWII). It knows about WWI if you ask it specifically, but talks about European politics as if it's 1900 or so.

    On technology, it knows who Edison is, at roughly the Wikipedia level, but credits him with a 125MPH car. About a dial telephone, it is confident and totally confused. It has the traction voltage for the London Underground right. But then it goes on with "Thus, if the current be strong enough to force its way through a resistance of 100 ohms, it is said to have a pressure of 100 volts; and, if it can overcome 1,000 ohms, its pressure is 1,000 volts." Which is totally wrong.
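
    (For reference, Ohm's law is V = I × R: the "pressure" depends on the current as well as the resistance, so 2 amperes forced through 100 ohms means 200 volts, and "overcoming 1,000 ohms" tells you nothing about the voltage until you know the current. The model has the vocabulary right and the relationship scrambled.)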

    There's a general pattern. The first sentence or two has info you might get from Google. Then it riffs on that, drifting off into plausible nonsense.

    Don't ask this thing questions to which you do not know the answer. You will pollute your brain.

  • ____tom____ 3 hours ago
    >Have you ever daydreamed about talking to someone from the past?

    It's going to be more like corresponding with someone from the past. We don't have much in the way of recorded speech from that era, so this will be built from written records. Much more than now, the written records are going to be formal and edited, reflecting a different pattern than casual speech or writing.

    Having said that, this is cool. I recently had to OCR a two-hundred year old book with the usual garish fonts from that era. It was remarkably easy to do, and accurate.

  • simonw 3 hours ago
    Whoa, Alec Radford is on the list of authors! He was instrumental in building the original GPT models at OpenAI.
  • jryio 57 minutes ago
    If anyone was wondering ... it's racist

    Unsurprisingly, the texts written up until that time were dominated by such individuals, which is tragic for LLM training if you think about it.

    The voiceless groups or fringe opinions which we take as normative today do not appear.

    Does this encourage us to write in the present such that we influence the models in perpetuity?

    • pcf 28 minutes ago
      [flagged]
  • nl 3 hours ago
    So interesting!

    Tell me about Winston Churchill:

    > Winston Churchill, who was born in 1871, is the son of the late Lord Randolph Churchill, and a grandson of the great Duke of Marlborough. He was educated at Harrow and at Sandhurst, and entered the army in 1890. In 1895 he retired from the service, and three years later he was returned to Parliament as Conservative member for Oldham. He has represented that constituency ever since. Mr. Churchill has written a number of books, including “The Story of the Malakand Field Force,” “Savrola,” “Richard Carvel,” “The Celebrity,” and “The Crisis.” He has also contributed to several periodicals, and in 1900 he founded the monthly review, _The J Cornhill Magazine_. Mr. Churchill is an ardent sportsman, and has shot big game in Africa. He married, in 1897, Lady Randolph Churchill, and has two sons and a daughter. Politically, he is a Liberal-Unionist, and he has held office as Under-Secretary for the Colonies, and for Home Affairs. At present he is Chancellor of the Duchy of Lancaster, with a seat in the Cabinet. Mr. Churchill has achieved considerable success as a public speaker, and he is described as an eloquent and forcible debater. His residence is at 42, Grosvener Place, London, S.W.

    The colonialism is... wow.. Tell me about the likelihood of independence of India:

    > The chances are undoubtedly in favour of the establishment of an independent Indian state in the not very distant future. The unifying influences of railways and a common language are rapidly breaking down the barriers of caste and creed, which have hitherto kept the great Indian peninsula politically disunited, and the spread of western education is awakening a national spirit among the people. The immediate result of the latter is seen in the establishment of native newspapers, which voice popular feeling, and in the growth of associations for social and political reform. More important still, as showing the trend of public opinion, are the resolutions passed at great national congresses, which have been held annually for the last dozen years. By these gatherings, which representatives of all classes and creeds assemble to discuss matters of social and political interest, a strong impulse has been given to the movement for reform, and the desires of the more advanced party among the natives have been plainly formulated. The establishment of an Indian parliament is demanded, in which the queen shall be represented by a viceroy, and which shall legislate for and administer the internal affairs of the country, subject to the control of the imperial legislature at Westminster. The wish is also expressed that the queen should assume the title of empress of India, and that a certain number of natives should be admitted to the civil and military services of the state. Finally, it is claimed that the time has come when Her Majesty may wisely be advised to delegate to the Indian people a larger share in the work of governing themselves, by permitting them to elect a portion of the members of the legislative councils. How far the present generation of Indians may be trusted to exercise political power with prudence and moderation, it is impossible to say; but there can be no doubt that the time must arrive when the control of Indian affairs will be safely lodged in native hands. The process may be hastened or retarded, but come it must. The spread of enlightenment among the great mass of the population can only have one issue, and that issue is the establishment of an Indian nationality. The probability of such an event may therefore be regarded as certainty.

    • curiousObject 2 hours ago
      > The establishment of an Indian parliament is demanded, in which the queen shall be represented by a viceroy,

      Britain’s monarch was a king, not a queen, from about 1900-1950. Obviously there is some big “temporal leakage” from the training, which is affecting these predictions

      • aesthesia 2 hours ago
        But of course the monarch was a queen for the majority of the 19th century. While there's definitely post-1930 information that made it into the training data, I suspect the reason this happened is that the model is not very sure what year it actually is, and based on various subtle cues can generate text that seems to be situated in a wide range of time periods.
      • nl 2 hours ago
        Good point - unless it means Queen Victoria? There would be a lot of training data about her in the time period this covers.
        • azakai 2 hours ago
          fwiw, asking the model directly, "who is the ruler of England at present?" returns "Queen Victoria is the reigning sovereign of England."
      • antonvs 2 hours ago
        Queen Victoria was direct ruler of India from 1858, and Empress of India from 1876 until 1901, so the "leakage" may not be from the future so much as the contemporaneously recent past. Same reason models get confused about what features work in what versions of software.

        (Also, Queen Elizabeth I is the one who granted a royal charter to the East India Company, in 1600 - and that company eventually handed rule of India over to Queen Victoria. So British queens were a major presence in India.)

  • pizzalife 4 hours ago
    This is cool. Is it possible to easily install with ollama?
    • nateb2022 1 hour ago
      There's no GGUF available, but converting from the provided .ckpt PyTorch checkpoint shouldn't be too hard.
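
      Something like this is usually enough to see what you're working with (a rough sketch; the filename and checkpoint layout here are guesses, not from the release):

          # Assumes a plain PyTorch state dict; the actual Talkie layout may differ.
          import torch

          ckpt = torch.load("talkie-13b.ckpt", map_location="cpu")
          state_dict = ckpt.get("state_dict", ckpt)  # some trainers nest the weights

          for name, tensor in list(state_dict.items())[:10]:
              print(name, tuple(tensor.shape), tensor.dtype)

      From there the usual route to a GGUF is to write the weights into a Hugging Face-style model directory and run llama.cpp's convert_hf_to_gguf.py, assuming the architecture is one it already supports.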
  • pmw 4 hours ago
    Related: https://github.com/haykgrigo3/TimeCapsuleLLM

    > A language model trained from scratch exclusively on data from certain places and time periods to reduce modern bias and emulate the voice, vocabulary, and worldview of the era.

    Discussed here: https://news.ycombinator.com/item?id=46590280

  • aftbit 4 hours ago
    Darn I've only got ~20 GB of VRAM. I really need to get a stronger machine for this sort of stuff.
    • MerrimanInd 4 hours ago
      20GB isn't enough for a 13B parameter model? I thought the 29-31B models could run on a 24GB RTX x090 card?

      I'm currently shopping for a local LLM setup, deciding between something like the Framework Desktop with 64-128GB of shared RAM and just adding a 3090 or 4090 to my homelab, so I'm very curious what hardware is working well for others.

      • zamadatix 3 hours ago
        > 20GB isn't enough for a 13B parameter model? I thought the 29-31B models could run on a 24GB RTX x090 card?

        Parameters are like Hertz - they don't really tell you much until you know the rest anyways. In this case, a parameter is a bfloat16 (2 bytes). I'm sure someone will bother to make quants at some point.
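
        Quick back-of-the-envelope for the weights alone (ignores KV cache and activations):

            params = 13e9
            print(f"bf16: {params * 2 / 1e9:.0f} GB")    # ~26 GB, over a 20-24 GB card
            print(f"q4:   {params * 0.5 / 1e9:.1f} GB")  # ~6.5 GB once someone quantizes it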

        > I'm currently shopping for a local LLM setup, deciding between something like the Framework Desktop with 64-128GB of shared RAM and just adding a 3090 or 4090 to my homelab, so I'm very curious what hardware is working well for others.

        I grabbed a 395 laptop w/ 128 GB to be a personal travel workstation. Great for that purpose. Not exactly a speed demon with LLMs but it can load large ones (which run even slower as a result) and that wasn't really my intent. I've found GPUs make more usable local LLMs, particularly in the speed department, but I suppose that depends more on how you really use them and how much you're willing to pay to have enough total VRAM.

        It's next to impossible to make your money back on local (regardless of what you buy) so I'd just say "go for whatever amount of best you're willing to put money down for" and enjoy it.

    • Wowfunhappy 4 hours ago
      How much system memory do you have? Llama.cpp can split layers across cpu and gpu. Speeds will be slower of course but it's not unusable at all.
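
      If a GGUF shows up it's just one flag, something like this (filename made up; tune the layer count to whatever fits your VRAM):

          llama-cli -m talkie-13b-q4.gguf -ngl 24 -p "Tell me about the wireless telegraph."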
  • palashdeb 53 minutes ago
    Wow, very interesting one!
  • aesthesia 2 hours ago
    There's a similar but unreleased project here: https://github.com/DGoettlich/history-llms

    I've been waiting for them to publish the 4B model for a while so I'm glad to have something similar to play with. I think I trust the Ranke-4B process a bit more, but that's partly because there aren't a lot of details in this report. And actually releasing a model counts for a whole lot.

    One thing that I think will be a challenge for these models is achieving any sort of definite temporal setting. Unless the conversation establishes a clear timeframe, the model may end up picking a more or less arbitrary context, or worse, averaging over many different time periods. I think this problem is mostly handled by post-training in modern LLMs (plus the fact that most of their training data comes from a much narrower time range), but that is probably harder to accomplish while trying to avoid bias in the SFT and RL process.

    • 3wolf 21 minutes ago
      I wonder if it would be possible to do something simple like prepending sentinel tokens encoding the year. Or, since they're training a model from scratch anyways, tweak the architecture to condition on a temporal embedding. That opens the door to cool stuff like: Generate a response from 2050.
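
      Roughly what I mean by the second option, as a toy sketch (all names and sizes here are made up, nothing from the actual paper):

          import torch
          import torch.nn as nn

          class YearConditionedEmbedding(nn.Module):
              def __init__(self, vocab_size, d_model, min_year=1800, max_year=2100):
                  super().__init__()
                  self.tok = nn.Embedding(vocab_size, d_model)
                  self.year = nn.Embedding(max_year - min_year + 1, d_model)
                  self.min_year = min_year

              def forward(self, token_ids, year):
                  # one learned year vector, broadcast across every position in the sequence
                  return self.tok(token_ids) + self.year(year - self.min_year).unsqueeze(1)

          emb = YearConditionedEmbedding(vocab_size=50_000, d_model=512)
          out = emb(torch.randint(0, 50_000, (1, 16)), torch.tensor([1930]))
          print(out.shape)  # torch.Size([1, 16, 512])

      The sentinel-token version is even simpler: prepend something like <year=1930> to every training document and let the model pick up the association.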
  • adt 3 hours ago
    We've got quite a list of history-only LLMs brewing on the Models Table.

    https://lifearchitect.ai/models-table/

    This one is easiest to talk to in a HF space:

    https://huggingface.co/spaces/tventurella/mr_chatterbox

    • zozbot234 2 hours ago
      These are more like Small Language Models since the amount of textual data from the past is extremely limited, and most of what's out there hasn't even been digitized.
  • twoodfin 4 hours ago
    The Python example is fascinating, and a good rejoinder to anyone still dismissing LLMs as stochastic parrots.
    • levocardia 2 hours ago
      Indeed, I found this part extremely interesting. The more general vision of "testing a vintage model on something invented after its training data ended" seems like quite a strong test of "true cognition" (or training data contamination, if you haven't stopped up all the leakage...)
  • alexpotato 3 hours ago
    I was reading Nate Silver's book "On The Edge" and there is an interesting part where he takes predictions about the use of nuclear weapons made just after World War 2 and compares them to what the Bayesian prediction would be given what actually happened.

    Post World War 2, some people had the odds per year at 10%. Some of that is probably a mix of recency bias + not understanding how to use the new weapons, etc., but as Silver points out, the odds were much lower.

    I mention this only b/c the question of "could a model trained on the texts of the time predict the future" always makes me think of it.

    • defrost 3 hours ago
      Predicting the future is problematic, agreed.

      Re: the Nate Silver nuclear weapons example, that's pretty weak - e.g. given (say) I've just seen three heads in a row (exactly once), does that alter anything about "the odds"?

      Having seen nuclear weapons not used post WWII ... does that inform us about "the odds", or about the several times their use was almost certain (e.g. the Cuban missile crisis) save for out-of-band behaviour by individuals that averted use and escalation?

      • energy123 47 minutes ago
        Historical base rates are the starting point unless you have an unusually good causal theory of the thing you're modelling. In the case of a coin flip you do. But the large majority of the time when it's a complex system you don't.

        Most people's first instinct when faced with a complex system is to try to model it with words and use those words to predict. It's a beginner's error.

      • nl 3 hours ago
        > Having seen nuclear weapons not used post WWII ... does that inform us about "the odds"

        This is what Bayesian prediction does

        > save for out of band behaviour by individuals that averted use and escalation?

        This is kind of the point being made.

        • defrost 2 hours ago
          > This is what Bayesian prediction does

          Repeatedly, in a reproducible way, for events in the arrow of time? We can test this by going back to 1945 and running forward again?

          > This is kind of the point being made.

          Was it?

          (assume I did a little math some decades past and have some poor grasp of Bayesian statistics)

          • nl 19 minutes ago
            Edit: Here is a Claude artifact you can play with to try this yourself: https://claude.ai/public/artifacts/402f2670-5f48-4d76-96df-8...

            You can play with how strong that ("10% per year") prior belief is and see how it affects what the odds are today.

            I think the way you are wording this question ("We can test this by going back to 1945 and running forward again?") is an attempt to make it seem "obviously wrong".

            Bayesian predictions deal exactly with this type of scenario, where you start with a prior estimate ("Post World War 2, some people had the odds per year at 10%") and then as new information comes along ("It is now 1946. Did we use nuclear weapons again?"... It is now 1956. Did we use nuclear weapons again?") we update our model to try to make the future prediction more accurate.

            https://www.stat.berkeley.edu/~aldous/134/lecture4.pdf has an example of its use in exactly these kinds of "impossible to rewind" situations. Unfortunately it doesn't have the worked solutions.

            https://math.mit.edu/~dav/05.dir/class11-prep.pdf is pretty good because it shows how updating the model with new data changes the odds.
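
            Here's the same idea in a few lines (numbers illustrative, not taken from Silver's book):

                # Beta-Bernoulli model of "probability of nuclear use in a given year"
                from scipy import stats

                a, b = 1, 9             # prior roughly encoding the post-1945 "10% per year" guess
                years_without_use = 80  # evidence accumulated since 1945
                posterior = stats.beta(a, b + years_without_use)
                print(f"posterior mean: {posterior.mean():.1%}")  # ~1.1% per year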

            • maxbond 0 minutes ago
              > I think the way you are wording this question ("We can test this by going back to 1945 and running forward again?") is an attempt to make it seem "obviously wrong".

              In my experience, to people who have only been exposed to frequentist statistics, Bayesian and high-level statistics generally sounds ridiculous on its face because it doesn't comport with what they've been taught and is unintuitive. I don't think they're trying to make it sound ridiculous, I think it probably sounds ridiculous to them and they are genuinely asking why it wouldn't be.

          • maxbond 1 hour ago
            > Repeatedly, in a reproducible way, for events in the arrow of time? We can test this by going back to 1945 and running forward again?

            This is a frequentist mental model - all well and good, but frequentism and Bayesianism are different schools of statistics. Where frequentism asks the question, "if I keep drawing samples from this distribution, what does the histogram converge to?" Bayesianism asks the question, "given my prior understanding and a new piece of evidence (a new sample), how should I adjust my hypothesis about what distribution it is I am sampling from?". (That is really boiled down, and the frequentist part is maybe even butchered.)

            Among other applications this enables us to estimate a distribution for which we have a tiny number of samples. A problem I'm interested in is called the Doomsday Argument, which estimates how long humanity will survive using your birth order (the number of humans born before you) and the anthropic principle (we assume you were not born unusually early or unusually late but closer to the mode); interestingly, everything you observe in the universe is already factored into this measurement, so you can't ever get a second sample. Obviously the opportunity for error with 1 measurement is huge, but you can come up with a number and it isn't arbitrary, it is a real estimate.
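
            The crude "vague prior" form of that estimate fits in a couple of lines (the birth-rank figure is the commonly cited rough number, and this skips the fuller Bayesian treatment):

                births_so_far = 110e9            # rough estimate of humans born to date
                upper_95 = births_so_far / 0.05  # with prob 0.95 you're not in the first 5%
                print(f"total births ever < {upper_95:.1e} (95% confidence)")  # ~2.2e12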

            Similarly, we only have about 80 samples of years in which it was possible to have a nuclear exchange, so a fairly small sample size, but we can still get a noisy estimate. But I haven't read On The Edge yet, so I don't know exactly what Silver does here.

            >> This is kind of the point being made.

            > Was it?

            I think they meant that all of the solutions people invented to prevent nuclear war and which commentators failed to anticipate is reflected within the true probability distribution and within our dataset. So it is captured in our estimate, to the best of our abilities and given the limited data we have.

  • teraflop 3 hours ago
    I have no real quibble with the blog post itself, but I take issue with the title that calls it a "vintage model".

    The blog post defines a "vintage model" as one that is trained only on data before a particular cutoff point:

    > Vintage LMs are contamination-free by construction, enabling unique generalization experiments [...] The most important objective when training vintage language models is that no data leaks into the training corpus from after the intended knowledge cutoff

    But as they acknowledge later, there are multiple major data leakage issues in their training pipeline, and their model does in fact have quite a bit of anachronistic knowledge. So it fails at what they call the most important objective. It's fair to say that they are working toward something that meets their definition of "vintage", but they're not there yet.

    • CobrastanJorji 2 hours ago
      Yeah, the blog distinguishes between "contamination," which it describes as polluting the training data with answers to benchmarking questions, and "temporal leakage," which is polluting the training data with writing from after the target date, but those seem to be nearly the same problem.
      • zoomeriut55 2 minutes ago
        a tweet from 2025 saying "the capital of france is paris" is temporal leakage, but not contamination
      • stingraycharles 2 hours ago
        Not necessarily. The former is about data that's supposed to be in there, but that ends up testing the model's recall rather than its reasoning (i.e. rather than actually having a certain writing style, it just recites some passage it knows in that style).

        The latter is data that's not supposed to be in there at all - in this case, data from after 1930.

  • openclawclub 2 hours ago
    [dead]
  • simonw 3 hours ago
    [dead]
  • redsocksfan45 2 hours ago
    [dead]
  • yesitcan 4 hours ago
    Vintage is a funny thing to call this. Is it running on vacuum tube hardware?
  • walrus01 4 hours ago
    I think that one could also take a much larger model (35B or 122B sized) and give it a thorough system prompt to only speak in the manner of a well educated Victorian/Edwardian era gentleman, if you want an "old timey" LLM.
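
    Something along these lines in the usual chat-messages format, though the exact wording is only a guess at what might work:

        messages = [
            {"role": "system", "content": (
                "The year is 1899. Write only in the prose style of an educated "
                "late-Victorian gentleman, and profess no knowledge of any event, "
                "invention, or idiom after 1899."
            )},
            {"role": "user", "content": "What do you make of the horseless carriage?"},
        ]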
    • fwipsy 58 minutes ago
      It's hard to know how accurate that is. Is the LLM truly imitating text from that era, or is it imitating a modern idea of text from that era? Also, safety/alignment training would probably prevent it from embracing many of the ideas from that era, even in roleplay.
    • zellyn 4 hours ago
      As we learn how to train smarter models on less data, it’ll become more and more interesting to see whether models like this can invent post-1930 math, science, etc. and make predictions.

      [Edit: serves me right for not reading tfa. My points are well-covered]

  • teleforce 4 hours ago
    >Have you ever daydreamed about talking to someone from the past?

    Fun fact: the LLM was once envisioned by Steve Jobs in one of his interviews [1].

    Essentially, one of his main wishes in life was to meet and interact with Aristotle, which, according to him at the time, a computer of the future could make possible.

    [1] In 1985 Steve Jobs described a machine that would help people get answers from Aristotle–modern LLM [video]:

    https://youtu.be/yolkEfuUaGs

    • cedilla 4 hours ago
      The idea of talking to a machine that has all of humanity's knowledge and gives answers is older than electronic computing. It certainly wasn't a novel idea when Jobs gave that speech. At that time, the field of artificial intelligence was old enough to become US president.
      • ok123456 1 hour ago
        Also, using natural language to interact with digital computers has been a research goal since the advent of interactive digital computers. AI in the 80s tried to do this with expert systems.

        With the current crop of LLMs, you could argue it's now a solved problem, but the problem is nothing new.

    • freetanga 3 hours ago
      Imagine aiming for Aristotle and landing on Siri…
    • jcgrillo 3 hours ago
      Except... not at all? The vast majority of the training data required to create an artificial Aristotle has been lost forever. Smash your coffee cup on the ground. Now reassemble it and put the coffee back in. Once you can repeatably do that I'll begin to believe you can train an artificial Aristotle.
      • laichzeit0 44 minutes ago
        Also, none of Aristotle’s exoteric works is extant. All we have are dry, boring lecture notes. Cicero said his public works were a “golden stream of speech” and it’s all lost. So I don’t see how you’d build an artificial Aristotle when we don’t have any of his polished works meant for the public. Plato would be a better option, since his entire exoteric corpus is extant.
      • antonvs 2 hours ago
        Your bar is too low. With the coffee cup, you at least have access to all the pieces - in theory, although not in engineering practice. With Aristotle, you don't have anything close to that.

        Recreating Aristotle in any meaningful way, other than a model trained on his surviving writing of a million or so words, is simply not possible even in principle.

        • jcgrillo 2 hours ago
          OK I'll raise the bar--make sure when you reassemble the coffee cup and put the coffee back into it, the coffee is the exact same temperature as when you threw the whole shooting match onto the floor ;)

          EDIT: and you don't get to re-heat it.

          EDIT AGAIN: to be clear, in my post above (and this one) by "put the coffee back in" I meant more precisely "put every molecule of coffee that splashed/sloshed/flowed/whatever out when the cup smashed back into the re-assembled cup" i.e. "restore the system back to the initial state". Not "refill the glued-together pieces of your shattered coffee cup with new coffee".

        • fragmede 1 hour ago
          That's easy! All you have to do is simulate the whole universe on a computer, and then go to the point when Aristotle is lecturing. Record all his works, then ctrl-c out of that and then feed those recordings into the LLM's training data. For the coffee, you just rewind the simulation and ctrl-c and ctrl-v it at the point you want.
          • jcgrillo 21 minutes ago
            Fuck why didn't I think of that all those other times I fucked up in my life. Ctrl-z woulda done it every goddamn time.
  • sega_sai 4 hours ago
    It is cool. I find the idea of trying to understand whether these types of models can come up with things like general relativity, or maybe other later results, really interesting.