Context: my father is a lawyer and therefore has a bajillion PDF files that were digitised and stored on a server. I’ve got an idea of how to run OCR on all of them.

But after that, how can I make them easily searchable? (Keep in mind that, unfortunately, the directory structure is important information for classifying the files, aka you may have a path like clientABC/caseAV1/d.pdf.)

  • VoxAliorum@lemmy.ml · 2 hours ago

    Search them for words? Try pdfgrep with its recursive option - very easy to set up and try. If you feel like that’s taking too long, you probably need to accept some simplifications/helper structures.
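
    Something like this is roughly what I mean, if you want to drive it from a script (untested sketch; the archive root and the search term are placeholders, not from the post):

        # Hedged sketch: run pdfgrep recursively over the whole archive.
        # "/srv/scans" and "settlement" are placeholder values.
        import subprocess

        proc = subprocess.run(
            ["pdfgrep", "-r", "-i", "settlement", "/srv/scans"],
            capture_output=True,
            text=True,
        )
        # In recursive mode pdfgrep prefixes each match with the file path,
        # so the clientABC/caseAV1/... part of the hierarchy shows up in every hit.
        print(proc.stdout)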

  • Father_Redbeard@lemmy.ml · 1 hour ago

    Would Papra work for you? I like it better than Paperless-ngx personally, which others have mentioned. But I’ll admit I’m not sure it’ll fit your use case, since I’m feeding it newly scanned documents rather than an existing file/folder hierarchy.

  • lsjw96kxs@sh.itjust.works · 10 hours ago

    Maybe take a look at Paperless-ngx; it will take care of the OCR for you and make the documents searchable. Just not sure if it will show the path correctly.

  • __hetz@sh.itjust.works · 1 day ago

    I’m a fucking dolt that dabbles and picks up the gist of things pretty quick, but I’m no authority on anything, so “grain of salt”:

    You’re already familiar with OCR so my naive approach (assuming consistent fields on the documents where you can nab name, case no., form type, blah blah) would be to populate a simple SQLite db with that data and the full paths to the files. But I can only write very basic SQL queries, so for your pops you might then need to cobble together some sort of search form. Something for people who didn’t learn SELECT filepath FROM casedata WHERE name LIKE '%Luigi%'; because they had to manually repair their Jellyfin DB one time when a plugin made a bunch of erroneous entries >:|
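
    Rough sketch of that shape, assuming the OCR text ends up in sidecar .txt files next to each PDF (untested; the paths, table and column names are made up, and SQLite’s built-in FTS5 module gives you real full-text search so you’re not hand-writing LIKE patterns):

        # Hedged sketch: index full paths plus OCR'd text into a SQLite FTS5 table.
        # The archive root and the sidecar-.txt assumption are illustrative only.
        import sqlite3
        from pathlib import Path

        ARCHIVE = Path("/srv/scans")          # hypothetical archive root
        db = sqlite3.connect("casedata.db")
        db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS docs USING fts5(filepath, body)")

        for txt in ARCHIVE.rglob("*.txt"):    # one OCR text dump per PDF
            # store the path relative to the root so clientABC/caseAV1/... stays visible
            db.execute(
                "INSERT INTO docs (filepath, body) VALUES (?, ?)",
                (str(txt.relative_to(ARCHIVE)), txt.read_text(errors="ignore")),
            )
        db.commit()

        # Query: FTS5 MATCH does tokenised full-text search; the filepath column
        # carries the client/case classification the directory structure encodes.
        for (path,) in db.execute(
            "SELECT filepath FROM docs WHERE docs MATCH ? ORDER BY rank", ("luigi",)
        ):
            print(path)

    The “search form” is then just whatever thin UI you put in front of that last query; the important part is that the path comes back with every hit.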

  • solrize@lemmy.ml · 1 day ago

    What’s a bajillion? If the OCR output is less than a few GB, which is a heck of a lot of text (like a million pages), just grepping the files is not too bad. Maybe a second or two. Otherwise you need search software. solr.apache.org is what I’m used to but there are tons of options.
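
    For reference, the brute-force version is just something like this (untested; assumes one .txt of OCR output per PDF, under a made-up root):

        # Hedged sketch: no index at all, just scan every OCR text dump.
        # "/srv/ocr" and the search term are placeholders.
        from pathlib import Path

        needle = "statute of limitations"
        for txt in Path("/srv/ocr").rglob("*.txt"):
            if needle.lower() in txt.read_text(errors="ignore").lower():
                print(txt)   # the path itself carries the client/case info

    Once that starts taking noticeably longer than a second or two, that’s the point where something like Solr (or any other indexer) earns its keep.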

    • First_Thunder@lemmy.zip (OP) · 1 day ago

      My problem with Paperless is that it doesn’t preserve the directory structure, losing essential info.

      • paaviloinen@sopuli.xyz · 1 day ago

        If tag/classification-based, automated sorting is not something the end user can live with, then Paperless-ngx isn’t the solution. But if you have Nextcloud and add both the to-be-preserved directory structure and Paperless-ngx’s consume directory as external storage, you can have both with a little manual labour.

  • purplemonkeymad@programming.dev · 1 day ago

    What server are they on?

    If they are just on a Windows server, then the indexing service is actually good for fast results on a network share. If it’s a Windows 10/11 PC, I think you need to enable classic search for it to provide results to clients over the network.

    Alternatively, I believe Everything (the program) supports indexing network locations.