Welcome to NTG



Power User

Enchantress last won the day on July 14 2017

Enchantress had the most liked content!

Community Reputation: 1,415 (NTG Professional)



  1. Started coding a Bing scraper. It's still in very early stages, so don't expect it to do much. Right now it scrapes URLs from the first few result pages of a Bing search. I will try to update it and add more features in the future whenever I have time! Feedback is appreciated!

     How to use it: save all the code in a new file, for example BingScrape.py. After that, go to CMD and run these commands if you have Python 3.x:

     pip install beautifulsoup4
     pip install requests

     If you have both Python 2.x and 3.x, run:

     pip3 install beautifulsoup4
     pip3 install requests

     import re
     import threading
     import time

     import requests
     from bs4 import BeautifulSoup

     start = time.time()
     dork = input('Input Search term: ')
     # Bing paginates with &first=<offset>, 10 results per page
     pagenumbers = ["0", "10", "20", "30", "40", "50"]

     def startscrape(search, page):
         with requests.Session() as req:
             headersx = {
                 'Host': 'www.bing.com',
                 'Connection': 'keep-alive',
                 'Upgrade-Insecure-Requests': '1',
                 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36',
                 'Sec-Fetch-User': '?1',
                 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3',
                 'Sec-Fetch-Site': 'none',
                 'Sec-Fetch-Mode': 'navigate',
             }
             url = 'https://www.bing.com/search?q=' + search + '&first=' + page
             resp = req.get(url, headers=headersx)
             soup = BeautifulSoup(resp.text, 'html.parser')
             # each Bing result title/link sits in a div with class "b_title"
             divfound = soup.findAll("div", {"class": "b_title"})
             urlsfound = re.findall(r'href="(.*?)">', str(divfound), re.S)
             with open('urls.txt', 'a', errors='ignore') as f:
                 for urls in urlsfound:
                     # skip Bing's translator links
                     if 'microsofttranslator' not in urls:
                         f.write(urls + "\n")

     # one thread per result page
     threads = [threading.Thread(target=startscrape, args=(dork, page)) for page in pagenumbers]
     for thread in threads:
         thread.start()
     for thread in threads:
         thread.join()
     print("Elapsed Time: %s" % (time.time() - start))
  2. Updated! @mimic @mohamed_b11117 @westy27 @brayok
  3. Just run it again on the same combo with a new domain extension; it's fast, so it doesn't take much time. Example: run .co.uk first, then run .com, run .ru, run .se, and you have 4 new combolists. I may decide to add more features to this script, but for now I only want it to be as fast and simple as possible. EDIT: a 600+ megabyte .txt combolist took me 5 seconds.
  4. Extract combos with .com / .co.uk / .dk / .se or any other domain extension with this Python script. Tested on a 20 million line combolist; it took a few seconds. Code: EDIT: Updated, now saves the remaining combos to a new file you specify. UPDATE: Added threading for 50% faster speed.
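The script attached to that post did not survive in this copy, but the behaviour it describes (split a combolist by the email's domain extension, keep the remainder separately, and thread the work across chunks) can be sketched roughly as follows. This is a minimal sketch, not the original code; every function and variable name here is an assumption:

```python
import threading

def split_by_extension(lines, extension):
    """Return (matching, remaining) for combo lines of the form
    'user@example.com:password', matched on the email's domain ending."""
    matching, remaining = [], []
    for line in lines:
        email = line.split(':', 1)[0]
        if email.lower().endswith(extension):
            matching.append(line)
        else:
            remaining.append(line)
    return matching, remaining

def _worker(chunk, extension, results, index):
    # each thread writes its partial result into its own slot
    results[index] = split_by_extension(chunk, extension)

def split_threaded(lines, extension, workers=4):
    """Split the list into chunks, filter each chunk in its own thread,
    then stitch the partial results back together in order."""
    size = max(1, len(lines) // workers)
    chunks = [lines[i:i + size] for i in range(0, len(lines), size)]
    results = [None] * len(chunks)
    threads = [threading.Thread(target=_worker, args=(c, extension, results, i))
               for i, c in enumerate(chunks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    matching, remaining = [], []
    for m, r in results:
        matching.extend(m)
        remaining.extend(r)
    return matching, remaining
```

In use, the matching combos would go to one output file and the remaining combos to the second file the post mentions, e.g. `open('remaining.txt', 'w').writelines(l + '\n' for l in remaining)`.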
  5. Enjoy and leave a like!
  6. Enjoy and leave a like!
  7. Hi NTG! EndlessHacker is similar to other hacking games, but it has more features than most, and it is a very fun place to be. I play EndlessHacker because I love browser-based hacking games with awesome communities! I hope you will join and hack a lot of players with me/us. JOIN NOW! http://www.endlesshacker.com/
  8. What language do you code in, @HNB?
  9. Thanks a lot. I'm not 100% sure what this is for, but the source code may explain it.
  10. I've never coded in LoliScript; it will be fun to see what it's like.