Short version of the situation: I have an old site I frequent for user-written stories. The site is ancient (think early 2000s) and has terrible tools for sorting and searching the stories. Half of the time, stories disappear from author profiles. There are thousands of stories, and you can only sort by top, new, and 30-day top.

I’m in the process of programming a scraper tool so I can archive the stories and give myself a library that makes it easier to find forgotten stories on the site. I’ll be storing tags, dates, authors, etc., as well as the full body of the text.

Concerning the data: there are a few thousand stories, ASCII only, with various data points for each story, and the bodies of many stories run several pages long.

Currently, I’m using Python to compile the data and would like to know what storage solution is ideal for my situation. I have a little familiarity with SQL, JSON, and YAML, but not enough to know which might be best. I’m also open to any other solutions that work well with Python.

  • amenji@programming.dev · 2 months ago

    A lot of people have already suggested databases or plain-text formats like JSON.

    But to be honest, if the dataset isn’t too big and doesn’t grow (since it’s historical anyway), why not just use Markdown with Hugo (a static site generator)? You could also use one of its supported search tools to search the text of the stories.

    As a bonus, since it’s a static website, you can host it and share it with the world!
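
    A rough sketch of the idea in Python, assuming the usual Hugo layout of one Markdown file per page with YAML front matter (the field names and output path are placeholders for whatever your scraper collects):

    ```python
    from pathlib import Path
    import yaml  # PyYAML, used here for the front matter

    def write_hugo_page(story: dict, content_dir: str = "content/stories") -> None:
        """Write one scraped story as a Markdown page with YAML front matter."""
        front_matter = {
            "title": story["title"],
            "date": story["date"],        # e.g. "2004-07-12"
            "author": story["author"],
            "tags": story.get("tags", []),
        }
        page = "---\n" + yaml.safe_dump(front_matter, sort_keys=False) + "---\n\n" + story["body"]
        out = Path(content_dir) / f"{story['id']}.md"
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_text(page, encoding="utf-8")
    ```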

    • Bubs@lemm.eeOP · 2 months ago

      I’ll give it a look. I’m still in the early stages of the project, so it’ll be a bit before I get to the point where I work on the database side of things.

  • HelloRoot@lemy.lol · 3 months ago

    Put them into an OpenSearch database. It’s the open-source fork of Elasticsearch. It has an SQL plugin, so you can retrieve the raw data the usual way, and there’s probably also an integration/library for it if you use any major framework/language in the backend.

    But on top of that you get very performant full-text search. This might come in handy, for example, when you remember a sentence from a story, or if you want to find all stories with a specific character name or word for whatever reason.
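
    Indexing and searching with the opensearch-py client could look roughly like this, assuming a local instance on the default port (index and field names are just placeholders):

    ```python
    from opensearchpy import OpenSearch

    client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

    # Index one scraped story (for thousands of stories you'd use the bulk helpers).
    client.index(
        index="stories",
        id="story-123",
        body={"title": "A Title", "author": "An Author", "tags": ["fantasy"], "body": "full story text"},
    )

    # Full-text search for that half-remembered sentence.
    results = client.search(
        index="stories",
        body={"query": {"match": {"body": "half remembered sentence"}}},
    )
    for hit in results["hits"]["hits"]:
        print(hit["_id"], hit["_score"])
    ```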

      • Bubs@lemm.eeOP · 3 months ago

      I do like the sound of that.

      I’m not too worried about performance, since, once everything is running, most of the operations will only be run every few weeks or so. I definitely don’t want it slowing to a crawl, though.

      The text search looks promising. I’ve had the idea of automating “likely tags” that look for keywords (sword = fantasy, while spaceship = sci-fi). It’s not perfect, but it could be useful for roughly categorizing all the stories that are missing tags.
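
      As a first pass, that could just be a keyword lookup in Python, something like this (the keyword-to-tag mapping is obviously made up):

      ```python
      # Rough heuristic: map keywords to the tag they hint at.
      KEYWORD_TAGS = {
          "sword": "fantasy",
          "dragon": "fantasy",
          "spaceship": "sci-fi",
          "laser": "sci-fi",
      }

      def likely_tags(body: str) -> set[str]:
          words = set(body.lower().split())
          return {tag for keyword, tag in KEYWORD_TAGS.items() if keyword in words}
      ```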

      • TehPers@beehaw.org · 3 months ago

        An alternative could be to use something like postgres with the pgvector extension to do semantic searches instead of just text-based searches. You can generate embeddings for the text content of the story, then do the same for “sci-fi” or something, and see if searching that way gets you most of the way there.

        Generating embeddings locally might take some time though if you don’t have hardware suitable for it.
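
        Very rough sketch of what that could look like, assuming Postgres with the pgvector extension, the pgvector Python adapter, psycopg, and sentence-transformers for local embeddings (model choice and schema are just examples):

        ```python
        import psycopg
        from pgvector.psycopg import register_vector
        from sentence_transformers import SentenceTransformer

        model = SentenceTransformer("all-MiniLM-L6-v2")  # small local model, 384-dim embeddings

        conn = psycopg.connect("dbname=stories")
        conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
        register_vector(conn)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS stories (id serial PRIMARY KEY, body text, embedding vector(384))"
        )

        # Store a story together with its embedding.
        body = "full story text..."
        conn.execute("INSERT INTO stories (body, embedding) VALUES (%s, %s)", (body, model.encode(body)))
        conn.commit()

        # Semantic search: stories closest to the query embedding.
        query = model.encode("sci-fi")
        rows = conn.execute("SELECT id FROM stories ORDER BY embedding <-> %s LIMIT 10", (query,)).fetchall()
        ```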

  • TehPers@beehaw.org · 3 months ago

    SQL is designed for querying (it’s a query language lol). If the stories are huge, you can save them to individual files and store the file path in the database, but otherwise a text column can hold a fair amount of data if needed.

    You can probably get away with using SQLite. A more traditional database would be Postgres, but it sounds like you just need the database available locally.
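
    A minimal sketch with Python’s built-in sqlite3 module (the table and column names are just an example):

    ```python
    import sqlite3

    conn = sqlite3.connect("stories.db")  # a single local file, no server needed
    conn.execute(
        """CREATE TABLE IF NOT EXISTS stories (
               id     INTEGER PRIMARY KEY,
               title  TEXT,
               author TEXT,
               date   TEXT,
               tags   TEXT,   -- e.g. comma-separated, or a separate tags table
               body   TEXT    -- or a file path if you keep the text on disk
           )"""
    )
    conn.execute(
        "INSERT INTO stories (title, author, date, tags, body) VALUES (?, ?, ?, ?, ?)",
        ("A Title", "An Author", "2004-07-12", "fantasy", "full story text"),
    )
    conn.commit()

    # Querying is then plain SQL, e.g. everything by one author:
    rows = conn.execute("SELECT title, date FROM stories WHERE author = ?", ("An Author",)).fetchall()
    ```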

  • FizzyOrange@programming.dev · 3 months ago

    Definitely SQLite. Easily accessible from Python, very fast, universally supported, no complicated setup, and everything is stored in a single file.

    It even has a number of good GUI frontends. There’s really no reason to look any further for a project like this.

    • Bubs@lemm.eeOP · 3 months ago

      One concern I’m seeing from other comments is that I may have more data than SQLite is ideal for. I have thousands of stories (my estimate is between 10 and 40 thousand), and many of the stories can be several pages long.

      • FizzyOrange@programming.dev · 3 months ago

        Ha no. SQLite can easily handle tens of GB of data. It’s not even going to notice a few thousand text files.

        The initial import process can be sped up using transactions, but since it’s a one-time thing and you have such a small dataset, it probably doesn’t matter.
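
        If you do want it, the transaction version is only a few lines; a sketch, assuming a simple stories table and rows coming from the scraper:

        ```python
        import sqlite3

        conn = sqlite3.connect("stories.db")
        conn.execute("CREATE TABLE IF NOT EXISTS stories (title TEXT, author TEXT, body TEXT)")

        rows = [  # in reality, one tuple per scraped story
            ("Title A", "Author A", "full text..."),
            ("Title B", "Author B", "full text..."),
        ]
        with conn:  # one transaction: commits on success, rolls back on error
            conn.executemany("INSERT INTO stories (title, author, body) VALUES (?, ?, ?)", rows)
        ```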

  • liliumstar@lemmy.dbzer0.com · 3 months ago

    I would scrape them into individual JSON files with more info than you think you need, just for the sake of simplicity. Once you have them all, you can work out an ideal storage solution, probably some kind of SQL DB. Once that’s done, you could turn the JSON files into a .tar.zst archive, or just delete them if you’re confident in the processed representation.

    Source: I completed a similar but much larger story site archive and found this to be the easiest way.
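
    The per-story dump can be as simple as something like this (field names are whatever your scraper collects; the output directory is just a placeholder):

    ```python
    import json
    from pathlib import Path

    def save_raw(story: dict, out_dir: str = "raw") -> None:
        """Dump one scraped story to its own JSON file, keeping everything grabbed."""
        Path(out_dir).mkdir(parents=True, exist_ok=True)
        path = Path(out_dir) / f"{story['id']}.json"
        path.write_text(json.dumps(story, ensure_ascii=False, indent=2), encoding="utf-8")
    ```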

    • Bubs@lemm.eeOP · 3 months ago

      That’s a good idea! Would YAML be alright for this too? I like the readability and the Python-style syntax compared to JSON.

        • Bubs@lemm.eeOP · 3 months ago

          What’s your reasoning for that?

          At this point, I think I’ll only use YAML as the scraper output and then create a database tool to convert that into whatever data format I end up using.
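
          Something like this with PyYAML is what I have in mind (file layout and field names still up in the air):

          ```python
          import os
          import yaml  # PyYAML

          story = {"id": 123, "title": "A Title", "author": "An Author", "body": "full story text"}

          # Scraper output: one YAML file per story.
          os.makedirs("raw", exist_ok=True)
          with open(f"raw/{story['id']}.yaml", "w", encoding="utf-8") as f:
              yaml.safe_dump(story, f, sort_keys=False, allow_unicode=True)

          # The later conversion tool would just safe_load() each file and
          # write the fields into whatever database I settle on.
          with open(f"raw/{story['id']}.yaml", encoding="utf-8") as f:
              record = yaml.safe_load(f)
          ```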