There is a webcomic called Strong Female Protagonist that I want to preserve (in case the website is ever lost), but I'm not sure how.

The image you see above is not a webpage of the site but rather a drop-down-like menu. There is a web crawler called WFDownloader (I'm running the Windows .exe inside Bottles) that can grab images and follow links, grabbing images "N" pages down, but since this is a drop-down menu I'm not sure it will work.
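If the menu really is an HTML `<select>` element, a crawler that only follows `<a href>` links will never see the `<option>` values, but they're easy to pull out yourself. A minimal sketch with Python's standard library (the markup below is a made-up stand-in for the real menu; the actual site's tags and ids may differ):

```python
from html.parser import HTMLParser

# Stand-in markup: the real site's drop-down may use different tags/ids.
MENU_HTML = """
<select id="comic-nav">
  <option value="/issue-1/page-1">Issue 1, Page 1</option>
  <option value="/issue-1/page-2">Issue 1, Page 2</option>
  <option value="/issue-2/page-1">Issue 2, Page 1</option>
</select>
"""

class OptionCollector(HTMLParser):
    """Collect the value attribute of every <option> in the menu."""
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "option":
            value = dict(attrs).get("value")
            if value:
                self.urls.append(value)

collector = OptionCollector()
collector.feed(MENU_HTML)
print(collector.urls)
# → ['/issue-1/page-1', '/issue-1/page-2', '/issue-2/page-1']
```

Once you have that list of page URLs, any downloader (WFDownloader, a script, wget) can work from it directly instead of trying to crawl the menu.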

There's also the issue of organizing the images: WFDownloader doesn't have options for organizing them.

What I'm thinking is to somehow translate the HTML for the drop-down menu into separate XML files based on issues/titles, run a script to download the images, have each image named after its own hyperlink, and put each issue in its own folder. Later on I can create a stitched-up version of each issue.
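A script along those lines could derive the folder and filename for each image from the page's own URL. A rough sketch, assuming URLs shaped like `/issue-N/page-M` (both the URL pattern and the base URL are hypothetical, and the download call is commented out so nothing touches the network):

```python
from pathlib import Path
from urllib.parse import urlparse
# from urllib.request import urlretrieve  # uncomment to actually download

def target_path(page_url: str, root: str = "sfp-archive") -> Path:
    """Map e.g. '/issue-1/page-3' to sfp-archive/issue-1/page-3.png."""
    parts = urlparse(page_url).path.strip("/").split("/")
    issue, page = parts[0], parts[-1]
    return Path(root) / issue / f"{page}.png"

for url in ["/issue-1/page-1", "/issue-2/page-4"]:
    dest = target_path(url)
    dest.parent.mkdir(parents=True, exist_ok=True)  # one folder per issue
    # urlretrieve("https://example.com" + url, dest)  # hypothetical base URL
    print(dest)
# → sfp-archive/issue-1/page-1.png
# → sfp-archive/issue-2/page-4.png
```

With one folder per issue and filenames taken from the hyperlinks, stitching each issue together later is just a matter of globbing its folder in order.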

    • eldavi@lemmy.ml · 19 hours ago

I was asking OP if they used a manual approach, which wouldn't be impacted by something like robots.txt.

      • Cactus_Head@programming.dev (OP) · 18 hours ago

Haven't used curl or wget; I have yet to start using the command line (outside of solving some Linux issues or organizing family photos), but I'm open to learning.
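If you do try the command line, wget alone can mirror a static site. A sketch, with a placeholder URL (swap in the comic's real address) and polite rate limiting:

```shell
# --page-requisites grabs the images each page needs;
# --convert-links rewrites links so the mirror browses offline;
# --wait=1 pauses between requests to go easy on the server.
wget --recursive --no-parent --page-requisites --convert-links \
     --wait=1 --directory-prefix=sfp-mirror \
     "https://example.com/"
```

This won't solve the per-issue organization on its own, but it mirrors the site's existing directory layout, which a small script can then reorganize.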