
Fixes #877

Continued from #879: event loop thrashing can cause stack space exhaustion on ia32 systems. Previously this would thrash the event loop in Firefox and Firefox-derived browsers such as Pale Moon. I suspect that this is the ultimate root cause of the bizarre, irreproducible bugs that Pale Moon (and maybe Cromite) users have been reporting since at least #87 was merged.

The root cause is an invalid boolean statement:

```js
// send a progress update every 1024 iterations. since each thread checks
// separate values, one simple way to do this is by bit masking the
// nonce for multiples of 1024. unfortunately, if the number of threads
// is not prime, only some of the threads will be sending the status
// update and they will get behind the others. this is slightly more
// complicated but ensures an even distribution between threads.
if (
  (nonce > oldNonce) | 1023 && // we've wrapped past 1024
  (nonce >> 10) % threads === threadId // and it's our turn
) {
  postMessage(nonce);
}
```

The logic here looks fine but is subtly wrong, as was reported in #877 by a user in the Pale Moon community. Consider the following scenario: `nonce` is a counter that increments by the worker count every loop. This is intended to spread the load between CPU cores like so:

| Iteration | Worker ID | Nonce |
| :-------- | :-------- | :---- |
| 1         | 0         | 0     |
| 1         | 1         | 1     |
| 2         | 0         | 2     |
| 2         | 1         | 3     |

And so on.

The incorrect part of this is the boolean logic, specifically the bitwise or (`|`). I think the intent was to use a logical or (`||`), but as written the condition makes the `postMessage` handler fire on every iteration. The intent of this snippet (as the comment clearly indicates) is to make sure that the main event loop is only updated with the worker status every 1024 iterations per worker. Instead it had the opposite effect, causing a flood of messages to be sent from workers to the parent JavaScript context. This is bad for the event loop.
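To make the failure concrete: JavaScript's bitwise `|` coerces the comparison to `0` or `1` and ORs it with `1023`, yielding `1023` either way, so the left-hand side of the `&&` is always truthy and the guard never actually gates on nonce progress:

```js
// The comparison coerces to 0 or 1; OR-ing either with 1023 gives 1023,
// which is truthy, so this clause passes on every single iteration.
console.log((5 > 4) | 1023); // 1023 (comparison true:  1 | 1023)
console.log((4 > 5) | 1023); // 1023 (comparison false: 0 | 1023)
```

The only remaining gate is the "it's our turn" check, which passes once per worker per loop, so `postMessage` fires on every iteration.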
Instead, I have ripped out that statement and replaced it with a much simpler increment-only counter that fires every 1024 iterations. Additionally, only the first thread communicates back to the parent process. This does mean that in theory the other workers could get ahead of the first thread (posting a message out of a worker has a nonzero cost), but in practice I don't think this will be anywhere near as much of an issue as the current behaviour.

The root cause of the stack exhaustion is likely the pressure caused by all of the postMessage futures piling up. Maybe the larger stack size in 64-bit environments makes this a non-issue there, or maybe it's some combination of newer hardware in 64-bit systems being able to handle events fast enough to keep up with the pressure. Either way, thanks much to @wolfbeast and the Pale Moon community for finding this. This will make Anubis faster for everyone!

Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
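A minimal sketch of the replacement approach described above (names like `makeReporter` and `post` are illustrative, not the exact identifiers used in Anubis):

```js
// Increment-only counter: report progress every 1024 iterations, and
// only from worker 0, so the parent event loop sees at most one status
// message per 1024 loops instead of one per iteration.
function makeReporter(threadId, post) {
  let iterations = 0;
  return function report(nonce) {
    iterations++;
    if (threadId === 0 && iterations % 1024 === 0) {
      post(nonce);
    }
  };
}
```

Over 4096 iterations, worker 0 posts exactly 4 times and every other worker posts nothing, which keeps the parent context's message pressure bounded.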
Anubis

Sponsors
Anubis is brought to you by sponsors and donors like:
Diamond Tier

Gold Tier
Overview
Anubis is a Web AI Firewall Utility that weighs the soul of your connection using one or more challenges in order to protect upstream resources from scraper bots.
This program is designed to help protect the small internet from the endless storm of requests that flood in from AI companies. Anubis is as lightweight as possible to ensure that everyone can afford to protect the communities closest to them.
Anubis is a bit of a nuclear response. This will result in your website being blocked from smaller scrapers and may inhibit "good bots" like the Internet Archive. You can configure bot policy definitions to explicitly allowlist them and we are working on a curated set of "known good" bots to allow for a compromise between discoverability and uptime.
In most cases, you should not need this and can probably get by using Cloudflare to protect a given origin. However, for circumstances where you can't or won't use Cloudflare, Anubis is there for you.
If you want to try this out, connect to anubis.techaro.lol.
Support
If you run into any issues running Anubis, please open an issue. Please include all the information I would need to diagnose your issue.
For live chat, please join the Patreon and ask in the Patron Discord in the channel #anubis.
Contributors
Made with contrib.rocks.