HOW TO CRAWL LINKS LIKE A PRO!
Hi everyone! I hope you are all doing well. In this article, we are going to learn how to crawl more links of a domain to increase our scope. If you are wondering why we would ever do that, the answer is simple: to increase the chances of finding bugs! The tool I am going to show you in this article was released recently and has tremendous features. Before going further into this article, do check out our website if you want to learn more about cybersecurity/ethical hacking/bug bounty. We provide high-quality labs (based on real-world scenarios!) for free.
LET’S BEGIN
Before going further, we need to have some tools installed on our Kali Linux machine. Run the commands below:
sudo apt update -y
sudo apt install golang -y
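Before building anything, it is worth confirming the Go toolchain actually landed on your PATH. A quick sanity check (the exact version string will vary with your distro):

```shell
# Confirm the Go toolchain installed correctly before building katana.
if command -v go >/dev/null 2>&1; then
  go version            # e.g. "go version go1.21.x linux/amd64"
else
  echo "go not found on PATH"
fi
```

If you see "go not found on PATH", re-run the apt commands above before continuing.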
Now we are ready to go,
1. Clone the repository with the following command:
git clone https://github.com/projectdiscovery/katana
2. Once we have the tool cloned onto our machine, we can change into its directory as follows:
cd katana/cmd/katana
3. After this, we need to build the “main.go” file in the directory. Use the command below to do this:
go build main.go
4. Now we are just one step away! Let’s move the executable “main” file to /bin and rename it to katana.
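The article does not show the command for this last step, so here is a minimal sketch. The real step is `sudo mv main /bin/katana` (mv moves and renames in one command; /usr/local/bin is a common alternative target that is also on PATH). Below it is demonstrated in a throwaway directory so the snippet is safe to run as-is:

```shell
# The real step would be: sudo mv main /bin/katana
# Demonstrated here in a temporary directory instead:
demo="$(mktemp -d)"
touch "$demo/main"              # stands in for the freshly built binary
mv "$demo/main" "$demo/katana"  # move + rename in one command
ls "$demo"                      # prints: katana
```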
USING KATANA
Now at this point, we are ready to use the tool. Let’s test whether everything is configured properly or not. Type “katana” in the terminal and you should see an output similar to this:
Please follow the steps again if you got any errors during the process. Let’s see what this program has to offer.
Type the command below:
katana -h
Amazing! As you can see, we have so many options to play with. But for now, let us focus on the important flags.
- -u: Specify the target URL.
- -d: Depth to crawl (the higher the value, the more links it will crawl!).
- -jc: Crawl endpoints from JavaScript files as well.
Now let’s try to find endpoints on Tesla! To do this, we can type the command below:
katana -u https://tesla.com -d 2 -jc
You will see the output similar to this:
Perfect! As you can see, we were able to enumerate a large set of endpoints for the domain.
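In practice, you will usually want to post-process the crawl output rather than read it off the terminal. A small sketch of pulling unique JavaScript endpoints out of a crawl (the sample URLs below are invented for illustration; in a real run you would pipe katana itself, e.g. `katana -u https://tesla.com -d 2 -jc | grep '\.js$' | sort -u`):

```shell
# Simulated crawler output (one URL per line), filtered for .js
# endpoints and de-duplicated. The URLs are made up for this demo.
printf '%s\n' \
  'https://tesla.com/' \
  'https://tesla.com/static/app.js' \
  'https://tesla.com/static/app.js' \
  'https://tesla.com/api/v1/users' \
  | grep '\.js$' | sort -u
# prints: https://tesla.com/static/app.js
```

Feeding a de-duplicated endpoint list into your other tooling is where the extra crawl depth really pays off.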
CONCLUSION
So I hope you all have understood how we can crawl more links of a given URL. If you have doubts at any point, feel free to comment them down below. Let’s meet in another article. Till then, have a great day and keep learning!