Scripting Walkthrough #1: FAKKU!

The following guide will walk you through the process of writing a custom "script" for Hentai Doujin Downloader. In this example, we'll be writing a script for fakku.net. While HDD supports this site by default, scripts take precedence, so if there is a flaw in the default support, a script can be used to quickly fix the problem manually.

PART I: Parsing Basic Information

When adding an item to the download queue, we need the following information: the title, page count, chapter count, and, optionally, a list of tags. These parameters can be found by parsing pieces of the source code from a doujin/manga's webpage. For this example, Hanekawa Tsubasa wa Kizutsukanai will be used. (Note that you can use any doujin/manga on your target site when making a template; when defining where parameters are located, you just want to be general enough that the script will also work for any other doujin/manga on the site.)

Take a look at the source code and figure out where the title is located.

Often, you can find the title in multiple locations, but you want to use the one that will work for the greatest number of cases (and, optimally, for all cases). Here, the og:title meta tag is the best choice, because it contains the exact title we want, without anything extra added. So, we can add the following first line to our script:

TITLE:BETWN(DATA,'og:title" content="','"/>');

(Note: The semicolon at the end isn’t strictly necessary as long as you keep things on separate lines.)

The “BETWN” function returns the string between two provided strings (enclosed in either single-quotes or double-quotes, depending on the contents of the string). As you can see, the title we want is between those two substrings, which are contained in DATA, which is just the source code of the webpage. The result is then assigned to TITLE, which is the title of the doujin/manga.
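If it helps to see what BETWN is doing, here is a rough Python equivalent, for intuition only; the helper name and the sample markup are made up for illustration and aren't part of HDD's scripting language:

def betwn(data, left, right):
    """Return the substring of data found between left and right."""
    start = data.index(left) + len(left)
    end = data.index(right, start)
    return data[start:end]

# Made-up page snippet containing an og:title meta tag:
data = '<meta property="og:title" content="Hanekawa Tsubasa wa Kizutsukanai"/>'
print(betwn(data, 'og:title" content="', '"/>'))
# -> Hanekawa Tsubasa wa Kizutsukanai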

Next, we want the page count. Looking through the source code, we can find it alongside the "Pages" label.

This is going to be a bit more difficult to deal with, because its bounds are not unique. However, we can see that the value we're looking for occurs right after <div class="left">Pages</div>. Using this information, we can define the page count as:

PAGES:BETWN(AFTER(DATA,'>Pages</div>'),'<div class="right">','</b>');

As you can see, functions can be nested. AFTER returns everything after a given substring, removing everything before it. We can then pass that result to the BETWN function to get the page count.
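For intuition, AFTER can be pictured the same way; the rough Python below (again, just an illustrative sketch, not part of HDD) also shows how nesting works: the output of after() simply becomes the DATA argument of betwn().

def after(data, marker):
    """Return everything in data that follows the first occurrence of marker."""
    return data[data.index(marker) + len(marker):]

def betwn(data, left, right):
    start = data.index(left) + len(left)
    return data[start:data.index(right, start)]

# Made-up snippet shaped like the bounds used above:
data = '<div class="left">Pages</div> ... <div class="right">224</b> ...'
print(betwn(after(data, '>Pages</div>'), '<div class="right">', '</b>'))
# -> 224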

The chapter count comes next, but Fakku doesn't list chapters, so this parameter can be ignored altogether if you want. Alternatively, you can just put:

CHAPTERS:'0';

Note that numbers must be enclosed in quotes, just like strings.

Finally, we can parse out the tags. In the source code, the tags appear as a series of links inside the tags section of the page.

We can use the MTAGS function to generate a list of comma-separated, case-corrected tags from this information:

TAGS:MTAGS(DATA,'right tags">','class="more-tags','">','</a>');

The first parameter specifies where the tags are located: in this case, the source code of the webpage. The second and third parameters narrow down the location of the tags by specifying the two values that the list of tags is between. Finally, the last two parameters specify the boundaries of each individual tag.
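If MTAGS seems opaque, the sketch below shows roughly what it is described as doing, in plain Python: isolate the tag section, pull out each tag between the per-tag bounds, fix the casing, and join the results with commas. The helper and the sample markup are made up for illustration; the real function may differ in its details.

def mtags(data, sec_left, sec_right, tag_left, tag_right):
    start = data.index(sec_left) + len(sec_left)
    section = data[start:data.index(sec_right, start)]
    tags = []
    while tag_left in section:
        tag_start = section.index(tag_left) + len(tag_left)
        tag_end = section.index(tag_right, tag_start)
        tags.append(section[tag_start:tag_end].title())  # "case-corrected"
        section = section[tag_end + len(tag_right):]
    return ', '.join(tags)

# Made-up tag markup, shaped like the bounds used above:
html = '<div class="right tags"><a href="/tags/vanilla">vanilla</a> <a href="/tags/color">color</a></div><a class="more-tags">'
print(mtags(html, 'right tags">', 'class="more-tags', '">', '</a>'))
# -> Vanilla, Color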

The last part is easy: find the favicon URL, and enter whatever you want for the name of the site:

SITE:'FAKKU!';
FAVICON:'https://www.fakku.net/favicon.ico';

Our script at this point looks like this:

TITLE:BETWN(DATA,'og:title" content="','"/>');
PAGES:BETWN(AFTER(DATA,'>Pages</div>'),'<div class="right">','</b>');
TAGS:MTAGS(DATA,'right tags">','class="more-tags','">','</a>');
CHAPTERS:'0';
SITE:'FAKKU!';
FAVICON:'https://www.fakku.net/favicon.ico';

This is the only information HDD needs in order to successfully add the item to the download queue. Try putting it into the Script Editor, entering a URL from Fakku, and making sure the output is correct.

Next, we need to specify the parameters used for parsing the images.

PART II: Parsing Images

This part is a bit more complex, but still not that difficult.

Note that we can’t find the image URLs from the main page of the manga/doujin; they’re located on the thumbnail page. To access the thumbnail page, all we need to do is add “/read” to the end of the current URL. So:

URL:URL + '/read';

(Note: This is only necessary if thumbnails appear on a different webpage than the item was added from.)

Now, when parsing pages, HDD knows to add "/read" to the URL before attempting to find the images.

All images can be found on a single page– the thumbnail page. So, we set IMG_MULTI_PAGE to false:

IMG_MULTI_PAGE:FALSE;

(Note: This is not strictly necessary; if omitted, the value will be inferred from other information.)

Because the webpage contains multiple images that may not be part of the manga/doujin, we need to isolate the part containing the images we want. Conveniently, the thumbnail URLs are all listed together in the page's source code, inside a window.params.thumbs array.

So we add the line:

IMG_PAGE_DATA:BETWN(DATA,'window.params.thumbs = [','"];');

Now, specify the left and right bounds of the image URLs:

IMG_LBOUND:'"';
IMG_RBOUND:'"';

It’s pretty simple in this case, as the URLs are bounded by double-quotes.
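Put together, these last two steps behave roughly like the Python sketch below; the sample thumbs array is made up, and only the bounds come from the script itself:

# Made-up thumbnail page data, with escaped slashes as JavaScript often uses:
data = 'window.params.thumbs = ["\\/thumbs\\/001.thumb.jpg","\\/thumbs\\/002.thumb.jpg"];'

# IMG_PAGE_DATA: isolate the contents of the array.
start = data.index('window.params.thumbs = [') + len('window.params.thumbs = [')
page_data = data[start:data.index('"];', start)]

# IMG_LBOUND / IMG_RBOUND: each URL sits between a pair of double quotes.
urls = [part for part in page_data.split('"') if part not in ('', ',')]
print(urls)
# -> ['\\/thumbs\\/001.thumb.jpg', '\\/thumbs\\/002.thumb.jpg']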

Now, if we ran the script as it is, it would work; the only problem is that it would download the thumbnail images rather than the full images. We need to modify the URL of each image so that it points to the full image instead. We can do that with the following line:

IMG_URL:'http:'+REPLC(REPLC(REPLC(IMG_URL,'thumbs','images'),'.thumb',''),'\/','/');

This acts sort of like a template that each image URL is run through. The REPLC function takes three parameters: the value to perform the replacement on, the substring to look for, and what to replace it with. The above line looks somewhat complex, but it's simply a few nested REPLC calls.
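In plain Python terms, the nested REPLC calls behave roughly like chained str.replace() calls. The thumbnail URL below is hypothetical, but the three replacements mirror the script line above:

img_url = '\\/\\/t.fakku.net\\/thumbs\\/001.thumb.jpg'  # hypothetical IMG_URL value

full_url = 'http:' + (img_url
                      .replace('thumbs', 'images')  # thumbnail folder -> full-image folder
                      .replace('.thumb', '')        # drop the ".thumb" suffix
                      .replace('\\/', '/'))         # unescape the slashes
print(full_url)
# -> http://t.fakku.net/images/001.jpg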

So that’s it! The final script looks like:

TITLE:BETWN(DATA,'og:title" content="','"/>');
PAGES:BETWN(AFTER(DATA,'>Pages</div>'),'<div class="right">','</b>');
TAGS:MTAGS(DATA,'right tags">','class="more-tags','">','</a>');
CHAPTERS:'0';
SITE:'FAKKU!';
FAVICON:'https://www.fakku.net/favicon.ico';

URL:URL+'/read';
IMG_MULTI_PAGE:FALSE;
IMG_PAGE_DATA:BETWN(DATA,'window.params.thumbs = [','"];');
IMG_LBOUND:'"';
IMG_RBOUND:'"';
IMG_URL:'http:'+REPLC(REPLC(REPLC(IMG_URL,'thumbs','images'),'.thumb',''),'\/','/');

// Note that you can also add C++ style comments.
/* It only took 12 lines! */

Before copying your script over to HDD for use, you should test it out in the Script Editor. Note that when you save your script, you MUST name it using the site's URL with unnecessary information stripped. For example, for Fakku, you would name your script fakku.net (the .hdds extension will be appended when it's saved).

If everything checks out during testing, simply put it in the "scripts" folder located in Hentai Doujin Downloader's directory. If the folder doesn't exist, make sure custom scripts are enabled under "Settings > Scripts".

4 thoughts on "Scripting Walkthrough #1: FAKKU!"

  1. Anonymous

    So I use Tsumino for my doujin needs and I have an overwhelming amount of favorites. I was wondering if there was some way to create a script to add my favorites from my account?

    1. Squidy (post author)

      I’m sorry to say you probably won’t be able to do that, because the current scripting system doesn’t support logins or cookies.

      Depending on how many pages of favorites you have, it may be viable to highlight all of them, press Ctrl+C, and then add them as one batch via the clipboard features ("Add from Clipboard", or using the Clipboard Monitor). Of course, you'd need to do this for each page, which could get time-consuming. It's faster than adding them individually, at least.

      I’ll see if I can come up with something better for accomplishing this.

  2. Monas-san

    I had some difficulties when trying to create a specific script for nhentai. Let's say I want to download all the doujins in here: http://nhentai.net/search/?q=comic+x-eros+english
    And for some reason I keep failing to make it work. On Pururin, you could just copy and paste the URL of an entire page, like this one: http://pururin.com/browse/15825/10/hentai-ouji-to-warawanai-neko.html
    and HDoujin Downloader automatically adds all of them, with all the files ready to be downloaded. I know it's my first time making one, but damn, it's pretty hard to make one that works.
    Does any of that make sense to you? English is not my first language, more like... my third language in my third-world country lol.

    1. Squidy (post author)

      It’s actually impossible to make something to mimic that functionality using the current scripting system; I plan on completely redoing it to make it less confusing and more functional at some point.

      In the meantime, I added the ability to add manga from NHentai search pages to the download queue in the latest release. Hopefully this works for you. Also, note that you can use HDoujin Downloader with the Firefox extension FlashGot to easily add pages of manga URLs to the download queue; it takes simple lists of URLs as parameters.

      PS: Your English is near-perfect; I never would have guessed it wasn’t your first language.

