twtxt

Timeline for https://eapl.me/twtxt.txt


Following: 16

tkanos https://twtxt.net/user/tkanos/twtxt.txt

eaplme https://eapl.me/twtxt.txt

eaplmx https://eapl.mx/twtxt.txt

lyse https://lyse.isobeef.org/twtxt.txt

prologic https://twtxt.net/user/prologic/twtxt.txt

rrraksamam https://twtxt.net/user/rrraksamam/twtxt.txt

darch https://neotxt.dk/user/darch/twtxt.txt

shreyan https://twtxt.net/user/shreyan/twtxt.txt

movq https://www.uninformativ.de/twtxt.txt

bender https://twtxt.net/user/bender/twtxt.txt

stigatle https://yarn.stigatle.no/user/stigatle/twtxt.txt

darch http://darch.dk/twtxt.txt

xuu https://txt.sour.is/user/xuu/twtxt.txt

jason https://jasonsanta.xyz/twtxt.txt

mckinley https://twtxt.net/user/mckinley/twtxt.txt

eapl-mes-7-daily-links https://feeds.twtxt.net/eapl-mes-7-daily-links/twtxt.txt


prologic
@stigatle / @abucci My current working theory is that there is an asshole out there who has a feed that both your pods are fetching, with a multi-GB avatar URL advertised in the feed's preamble (metadata). I'd love for you both to review this PR, and once merged, re-roll your pods, dump your respective caches, and share them with me using https://gist.mills.io/
1 month ago
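For context, a twtxt feed advertises metadata in comment lines at the top of the file (the preamble). A hedged sketch of what the theorized abusive feed might look like, with made-up nick and URLs:

```
# nick   = badactor
# url    = https://example.com/twtxt.txt
# avatar = https://example.com/multi-gigabyte-blob
2023-01-01T00:00:00Z	hello world
```

A pod that fetches the `avatar` URL without a size cap would try to download the whole blob.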


prologic
Reply to #ve43paq
Or if y'all trust my monkey-ass coding skillz I'll just merge and you can do a `git pull` and rebuild 😅
1 month ago


prologic
Reply to #ve43paq
I'm going to merge this...
1 month ago


prologic
Reply to #ve43paq
@abucci / @stigatle Please `git pull`, rebuild and redeploy.

There is also a shell script in `./tools` called `dump_cache.sh`. Please run this, dump your cache and share it with me. 🙏
1 month ago


stigatle
Reply to #ve43paq
@prologic I'm running it now. I'll keep an eye on the tmp folder (I built the branch you made). I'll let you know shortly if it helped on my end.
1 month ago


prologic
Reply to #ve43paq
@stigatle The problem is the fix will only cause the attack to stop and error out. It won't stop your pod from trying to fetch that feed over and over again. That's why I need some help inspecting both your pods for "bad feeds".
1 month ago
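The fix described above presumably works by capping how much of a response the pod will read and erroring out past the cap. A minimal shell sketch of that idea (the file name and the 1 MiB limit are made up for the demo, not taken from `yarnd`):

```shell
limit=1048576                            # assumed 1 MiB cap
head -c 2097152 /dev/zero > payload      # stand-in for a 2 MiB "avatar" response
# read at most limit+1 bytes, so an over-limit response is detectable
got=$(head -c "$((limit + 1))" payload | wc -c)
if [ "$got" -gt "$limit" ]; then
  echo "response exceeds cap, aborting fetch"
fi
rm -f payload
```

As prologic notes, this stops any single fetch from filling the disk, but nothing here stops the pod from retrying the same feed later.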


prologic
Reply to #ve43paq
If we can figure out wtf is going on here and my theory is right, we can blacklist that feed, hell, even add it to the codebase as an "asshole".
1 month ago


prologic
Reply to #ve43paq
Just thinking out loud here... With that PR merged (_or if you built off that branch_), you _might_ hopefully see new errors pop up, and we might catch this problematic bad feed in the act? Hmmm 🧐
1 month ago


stigatle
Reply to #ve43paq
@prologic So, if I'm correct, the dump tool made a pods.txt and a stats.txt file; those are the ones you want? Or do you want the output it spits out in the console window?
1 month ago


prologic
Reply to #ve43paq
@stigatle You want to run `backup_db.sh` and `dump_cache.sh`. They pipe JSON to stdout and prompt for your admin password. Example:

```
URL=<your_pod_url> ADMIN=<your_admin_user> ./tools/dump_cache.sh > cache.json
```
1 month ago


prologic
Reply to #ve43paq
But just have a look at the `yarnd` server logs too. Any new interesting errors? 🤔 No more multi-GB tmp files? 🤔
1 month ago
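One way to spot leftover multi-GB tmp files is `find` with a size threshold. Sketched here against a throwaway directory, since the real path and threshold depend on where `yarnd` writes its temp files (which I'm not assuming):

```shell
dir=$(mktemp -d)
head -c 2097152 /dev/zero > "$dir/feed-download.tmp"   # 2 MiB stand-in for a runaway download
head -c 1024 /dev/zero > "$dir/small.tmp"              # normal-sized file, should not match
big=$(find "$dir" -type f -size +1M)                   # threshold: 1 MiB for the demo
echo "$big"
rm -rf "$dir"
```

On a real pod you'd point `find` at the pod's tmp directory and use something like `-size +100M`.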


stigatle
Reply to #ve43paq
@prologic Thank you. I ran it now as you said; I'll get the files put somewhere shortly.
1 month ago


prologic
Reply to #ve43paq
@stigatle Ta. I hope my theory is right 😅
1 month ago


stigatle
Reply to #ve43paq
@prologic here you go:
https://drive.proton.me/urls/XRKQQ632SG#LXWehEZMNQWF
1 month ago


prologic
Reply to #ve43paq
@stigatle Thank you! 🙏
1 month ago


stigatle
Reply to #ve43paq
@prologic No worries, thanks for working on the fix for it so fast :)
1 month ago


prologic
Reply to #ve43paq
Ooof

```
$ jq '.Feeds | keys[]' cache.json | wc -l
4402
```

If you both don't mind dropping your caches, I would recommend it. Settings -> Poderator Settings -> Refresh cache.
1 month ago
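Building on the `jq` one-liner above, a sketch for eyeballing which feed URLs are actually in the cache, run here against a tiny stand-in `cache.json` (the real dump may have more structure, but the top-level `Feeds` keys are what the count above relies on):

```shell
cat > cache.json <<'EOF'
{"Feeds": {"https://example.com/a/twtxt.txt": {}, "https://example.org/b/twtxt.txt": {}}}
EOF
urls=$(jq -r '.Feeds | keys[]' cache.json)        # list cached feed URLs
count=$(jq '.Feeds | keys | length' cache.json)   # same count as the wc -l above
echo "$urls"
echo "$count feeds"
rm -f cache.json
```

Skimming the URL list is one way to spot a feed that doesn't belong before blacklisting it.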


prologic
Reply to #ve43paq
That was another source of abuse that got plugged (_being able to fill up the cache with garbage data_).
1 month ago


stigatle
Reply to #ve43paq
@prologic Do you want a new cache from me, or was the one I sent OK for what you needed?
1 month ago


prologic
Reply to #ve43paq
@stigatle The one you sent is fine. I'm inspecting it now. I'm just saying, do yourself a favor and nuke your pod's garbage cache 🤣 It'll rebuild automatically in a much more pristine state.
1 month ago


stigatle
Reply to #ve43paq
@prologic will do, thanks for the tip!
1 month ago

